Category: Student Use of AI

  • 64% of Teens Use AI Chatbots. Here’s What Schools Owe Them.

    According to recent research from the Pew Research Center, 64% of American teens report using AI chatbots. Yet in most schools, that use goes unguided. Students are experimenting with powerful AI tools at home, on their phones, and in study groups—with virtually no structured instruction on critical evaluation, responsible use, or ethical decision-making. This isn’t a small gap. It’s a defining equity issue of our time.

    If prohibition were a viable strategy, schools might simply ban AI and call the problem solved. But students don’t stop using AI because their school blocks it. They simply use it without guidance. And this creates a stark divide: affluent students, whose parents can afford private tutoring and tech-savvy mentorship, are building AI fluency at home. Meanwhile, students in under-resourced schools are left to figure it out alone—or not at all.

    The real question isn’t whether to allow AI in schools. It’s whether we’ll prepare every student to use it wisely.

    The Guidance Gap

    The data tells a troubling story. While teens are embracing AI, educators are struggling to keep pace. The Day of AI initiative has reached over 2 million students, with 93% of teachers rating the materials as Good or Excellent. Yet this represents a fraction of K-12 enrollment. Meanwhile, unvetted AI content proliferates. YouTube and other platforms are flooded with ‘AI slop’—videos with inaccurate information, sometimes even dangerous messaging—that kids consume without any quality filters or critical framing.

    In some states, the response has been restriction. New York’s A.9190 would ban AI use below 9th grade. Tennessee’s HB 2393 goes further, proposing an outright ban on digital devices in grades K-5. Kansas, Missouri, Virginia, and West Virginia are exploring similar measures. The impulse is understandable—but the strategy is flawed. Prohibition doesn’t prevent use; it prevents preparation.

    What Researchers Have Learned

    Evidence suggests that structured AI literacy works. Stanford researchers partnered with SchoolAI to study how 5,500 K-12 educators used AI assistants in their practice. The insights weren’t just about productivity—they revealed how professional guidance transforms understanding.

    Internationally, the field is coalescing around what K-12 AI education should look like. The Computer Science Teachers Association (CSTA) and AI4K12 have articulated five learning priorities that span knowledge, skills, and dispositions. And in 2029, PISA—the Programme for International Student Assessment—will assess AI literacy alongside its core assessments of reading, math, and science. This isn’t fringe thinking anymore. AI literacy is becoming a baseline expectation.

    What Comprehensive K-12 AI Literacy Looks Like

    The STRIDE Framework for K-12 AI literacy is built on six domains, each progressively developing student understanding:

    • Sense: Understanding what AI is, how it works, and the role of data. Students learn to recognize AI in their daily lives and grasp the basics of machine learning.
    • Think: Developing critical evaluation skills. How do we assess AI outputs for accuracy? What are common biases? Where might AI fail?
    • Relate: Exploring the human, social, and ethical dimensions. What are the impacts of AI on different communities? How do identity and power shape AI systems?
    • Innovate: Building with AI. From prompt engineering to basic AI experimentation, students learn to create, not just consume.
    • Decide: Making responsible choices. How do we use AI ethically? What are our responsibilities when deploying AI tools?
    • Empower: Taking action for change. Students develop voice and agency—using AI literacy to advocate for equitable, responsible AI systems.

    These domains are threaded through three meta-competencies: Critical Thinking, Creativity, and Collective Judgment. The goal isn’t to produce AI engineers or coders (though some will become both). It’s to grow informed AI users—young people who can think critically about what they encounter, create thoughtfully with these tools, and judge their use in context.

    The Equity Imperative

    Underlying all of this is a fundamental question of justice. AI literacy shouldn’t be a luxury good—something available only to students with well-resourced schools or educated parents who can guide them. Yet without intentional policy and curriculum, that’s exactly what happens. Students in well-funded districts get structured AI courses and mentorship. Students in under-resourced schools get TikTok tutorials and unvetted content.

    This gap will widen each year as AI becomes more central to work, civic life, and creativity. The students who lack AI literacy today will face real consequences tomorrow. They’ll be less able to verify information, more vulnerable to manipulation, and less equipped to shape the AI systems that will affect their futures.

    A Path Forward

    The question isn’t whether to let AI into schools. AI is already in students’ lives. The question is whether schools will be places where students learn to use it well—with guidance, reflection, and purpose.

    This requires curriculum architecture that’s research-grounded, equity-centered, and practical. It requires teacher professional development so educators feel confident facilitating these conversations. It requires moving beyond prohibition to preparation. And it requires a commitment to ensuring that every student—not just the fortunate few—gets the structured AI literacy they deserve.

    64% of teens are already using AI. The question for schools is: what will we do about it?

    To learn more about STRIDE, visit https://stridek12.org/

    Sources