Blog

  • 64% of Teens Use AI Chatbots. Here’s What Schools Owe Them.

    According to recent research from the Pew Research Center, 64% of American teens report using AI chatbots. Yet in most schools, that use goes unguided. Students are experimenting with powerful AI tools at home, on their phones, and in study groups—with virtually no structured instruction on critical evaluation, responsible use, or ethical decision-making. This isn’t a small gap. It’s a defining equity issue of our time.

    If prohibition were a viable strategy, schools might simply ban AI and call the problem solved. But students don’t stop using AI because their school blocks it. They simply use it without guidance. And this creates a stark divide: affluent students, whose parents can afford private tutoring and tech-savvy mentorship, are building AI fluency at home. Meanwhile, students in under-resourced schools are left to figure it out alone—or not at all.

    The real question isn’t whether to allow AI in schools. It’s whether we’ll prepare every student to use it wisely.

    The Guidance Gap

    The data tells a troubling story. While teens are embracing AI, educators are struggling to keep pace. The Day of AI initiative has reached over 2 million students, with 93% of teachers rating the materials as Good or Excellent. Yet this represents a fraction of K-12 enrollment. Meanwhile, unvetted AI content proliferates. YouTube and other platforms are flooded with ‘AI slop’—videos with inaccurate information, sometimes even dangerous messaging—that kids consume without any quality filters or critical framing.

    In some states, the response has been restriction. New York’s A.9190 would ban AI use below 9th grade. Tennessee’s HB 2393 goes further, targeting an outright ban on K-5 digital devices. Kansas, Missouri, Virginia, and West Virginia are exploring similar measures. The impulse is understandable—but the strategy is flawed. Prohibition doesn’t prevent use; it prevents preparation.

    What Researchers Have Learned

    Evidence suggests that structured AI literacy works. Stanford researchers partnered with SchoolAI to study how 5,500 K-12 educators used AI assistants in their practice. The insights weren’t just about productivity—they revealed how professional guidance transforms understanding.

    Internationally, the field is coalescing around what K-12 AI education should look like. The Computer Science Teachers Association (CSTA) and AI4K12 have articulated five learning priorities that span knowledge, skills, and dispositions. And in 2029, PISA—the Programme for International Student Assessment—will assess AI literacy as a core competency alongside reading, math, and science. This isn’t fringe thinking anymore. AI literacy is becoming a baseline expectation.

    What Comprehensive K-12 AI Literacy Looks Like

    The STRIDE Framework for K-12 AI literacy is built on six domains, each progressively developing student understanding:

    • Sense: Understanding what AI is, how it works, and the role of data. Students learn to recognize AI in their daily lives and grasp the basics of machine learning.
    • Think: Developing critical evaluation skills. How do we assess AI outputs for accuracy? What are common biases? Where might AI fail?
    • Relate: Exploring the human, social, and ethical dimensions. What are the impacts of AI on different communities? How do identity and power shape AI systems?
    • Innovate: Building with AI. From prompt engineering to basic AI experimentation, students learn to create, not just consume.
    • Decide: Making responsible choices. How do we use AI ethically? What are our responsibilities when deploying AI tools?
    • Empower: Taking action for change. Students develop voice and agency—using AI literacy to advocate for equitable, responsible AI systems.

    These domains are threaded through three meta-competencies: Critical Thinking, Creativity, and Collective Judgment. The goal isn’t to produce AI engineers or coders (though some will become both). It’s to grow informed AI users—young people who can think critically about what they encounter, create thoughtfully with these tools, and judge their use in context.

    The Equity Imperative

    Underlying all of this is a fundamental question of justice. AI literacy shouldn’t be a luxury good—something available only to students with well-resourced schools or educated parents who can guide them. Yet without intentional policy and curriculum, that’s exactly what happens. Students in well-funded districts get structured AI courses and mentorship. Students in under-resourced schools get TikTok tutorials and unvetted content.

    This gap will widen every year AI becomes more central to work, civic life, and creativity. The students who lack AI literacy today will face real consequences tomorrow. They’ll be less able to verify information, more vulnerable to manipulation, and less equipped to shape the AI systems that will affect their futures.

    A Path Forward

    The question isn’t whether to let AI into schools. AI is already in students’ lives. The question is whether schools will be places where students learn to use it well—with guidance, reflection, and purpose.

    This requires curriculum architecture that’s research-grounded, equity-centered, and practical. It requires teacher professional development so educators feel confident facilitating these conversations. It requires moving beyond prohibition to preparation. And it requires a commitment to ensuring that every student—not just the fortunate few—gets the structured AI literacy they deserve.

    64% of teens are already using AI. The question for schools is: what will they do about it?

    To learn more about STRIDE, visit https://stridek12.org/


  • NYC Just Released Its AI Playbook for Schools. Here’s What It Gets Right — and What It’s Missing.

    After nearly three years of uncertainty following the ChatGPT ban of 2023, New York City’s Department of Education has finally released preliminary guidance on artificial intelligence in schools. The framework uses a straightforward “traffic light” system—green, yellow, red—to categorize AI use cases for teachers and administrators. On the surface, it looks reasonable. But a deeper look reveals what NYC got right, what’s conspicuously missing, and why guardrails alone won’t prepare students for an AI-driven world.

    What NYC Got Right

    The Department of Education deserves credit for several smart moves. First, the traffic light system is intuitive and actionable. Teachers immediately understand that they can use AI for brainstorming, organizing lesson materials, and drafting communications—all green-light activities. Second, the policy correctly identifies student data protection as non-negotiable: student information can never be entered into unapproved AI tools, and student data cannot be used to train commercial AI models or generate revenue. That’s not just good policy; it’s ethically essential.

    Third, the framework draws clear lines on high-stakes decisions. Teachers cannot use AI for grading or discipline—two areas where bias and lack of nuance can cause real harm. These guardrails reflect an understanding that certain decisions require human judgment and accountability.

    Fourth, the city is aware of the urgency. The fact that this guidance came after three years suggests NYC learned the lesson from the ChatGPT ban: leaving schools in regulatory limbo is worse than providing thoughtful guidance. The 45-day feedback window before finalizing the full “Playbook” in June shows a commitment to genuine stakeholder input.

    What’s Conspicuously Missing

    But the guidance also leaves critical gaps—holes large enough to undermine effective implementation.

    No AI Literacy Curriculum

    The traffic light system tells educators what they can’t do with AI. It doesn’t tell students what AI is, how it works, or how to interact with it responsibly. Where is the structured curriculum on AI literacy? What do K-12 students learn about machine learning, training data, bias, or the ethical frameworks behind AI decision-making? NYC’s guidance assumes teachers and students already understand the technology. They don’t.

    No Guidance for K-8

    The guidance applies primarily to high schools (9th grade and up). But AI literacy shouldn’t start in 9th grade. Children in elementary and middle school are already using AI-powered tools—from recommendation algorithms on YouTube to voice assistants at home. Without developmentally appropriate curriculum and guardrails for younger students, NYC is creating a two-tier system where younger children remain passive consumers of AI rather than informed users.

    The Homework Question Left Unresolved

    Perhaps the most revealing omission: the guidance is silent on student homework use. Can students use ChatGPT to brainstorm essay topics? To help debug their code in a computer science class? To translate a passage in Spanish class for deeper understanding? The policy doesn’t say—which means individual teachers will make inconsistent decisions, and students will be confused about what’s permissible. This isn’t a minor detail. Homework is where students spend the most time engaging with new tools.

    Personal Chatbot Accounts

    Many students now have personal ChatGPT or Google Gemini accounts. The city’s guidance doesn’t address whether schools should be monitoring, restricting, or leveraging these accounts. It’s a regulatory blind spot that could prove problematic as student use of personal AI tools accelerates.

    The Bigger Problem: Guardrails Aren’t Strategy

    Here’s the hardest truth: guardrails are necessary but not sufficient. Telling teachers what they can’t do with AI is risk management, not leadership. Effective AI integration in schools requires three things:

    • A coherent, research-backed AI literacy curriculum that teaches students to understand, evaluate, and create with AI at every grade level
    • Privacy-first technology that lets schools deploy AI tools for instruction and assessment without exposing student data
    • Aligned policies that clearly define what students and teachers can do with AI across all grade levels

    NYC’s guidance addresses the third of these—aligned policies. The curriculum and the technology are absent.

    A Comprehensive Answer: The STRIDE Framework

    This is why districts need a comprehensive strategy, not just rules. The STRIDE Framework—developed by K-12 educators and grounded in research—provides exactly what NYC’s guidance is missing: a structured, equity-centered curriculum that teaches AI literacy across six domains (Sense, Think, Relate, Innovate, Decide, Empower) alongside three meta-competencies (Critical Thinking, Creativity, and Collective Judgment). When paired with LIA2, a privacy-first AI platform built for schools, districts gain both the curriculum and the technology to implement AI literacy safely.

    The STRIDE Framework doesn’t just teach students about AI. It teaches them to use AI responsibly, critically, and creatively. It’s designed for K-12—meaning every student gets age-appropriate AI literacy, not just high schoolers. And because LIA2 is privacy-first, schools never have to choose between innovation and student data protection.

    What a Truly Comprehensive AI Strategy Looks Like

    As NYC finalizes its AI Playbook over the next 45 days, here’s what a complete district strategy should include:

    • Curriculum standards for AI literacy at every grade level (K-12), not just high school
    • Clear policies on homework and personal tools that give students and teachers consistent guidance
    • Privacy-first technology that schools trust and control
    • Professional development for teachers so they can teach about AI and teach with AI
    • Equity-centered design that ensures all students—regardless of ZIP code or socioeconomic status—develop critical AI literacy

    The Road Ahead

    NYC’s new AI guidelines are a start. The traffic light system will help teachers navigate immediate questions about permissible use. But rules without education are incomplete.

    As the city receives feedback over the next 45 days and moves toward its June finalization, we urge decision-makers to ask a harder question: What do we want our students to know and be able to do with AI? The answer to that question should drive the strategy—and the policy should support it, not replace it.

    The schools that will thrive in an AI-driven future aren’t the ones with the strictest guardrails. They’re the ones with the strongest curricula and the clearest vision of what it means to be AI-literate. NYC has the chance to write that vision into policy. We hope it does.

    To learn more about STRIDE’s AI Literacy Curriculum and resources, visit https://stridek12.org/

    Sources

    Chalkbeat: What NYC’s new AI school rules say

    Chalkbeat: Schools develop AI policies awaiting city guidance

    NYC DOE: Guidance on Artificial Intelligence

    Amsterdam News: NYC releases guidelines, raises more questions

    Chalkbeat: NYC proposes AI-focused high school

  • 52 Bills, 25 States: The AI Literacy Legislative Wave That Will Reshape K-12 Education

    If you work in K-12 education, you’ve likely felt the pressure building. The question is no longer whether your district needs an AI literacy strategy—it’s whether you can afford to wait. Across America’s statehouses, a legislative wave is reshaping what schools must teach, how they implement AI tools, and what safeguards must be in place. The numbers are staggering. According to FutureEd, 52 bills addressing AI in classroom instruction are moving through legislatures in 25 states during the 2026 session alone. These aren’t scattered, isolated efforts. They represent a fundamental reckoning with the role of artificial intelligence in K-12 education—and districts that haven’t yet acted are running out of time.

    What’s driving this urgency? Federal tailwinds are amplifying the signal. The Trump administration issued an executive order elevating AI literacy as a national priority, while the U.S. Department of Education designated AI as a grantmaking priority. These federal endorsements have unlocked momentum at the state level. Legislators are hearing from educators, parents, and employers alike: our students must understand AI, not fear it. But alongside that imperative comes a parallel concern—protection. How do we ensure that AI enhances learning without replacing teacher judgment? How do we protect students from algorithmic bias and data exploitation?

    The legislative landscape reveals three distinct models emerging across states—and understanding them matters for your district’s planning.

    Model 1: Protective and Precautionary

    South Carolina’s H.B. 5253 represents the strongest protections yet enacted. It requires written parental opt-in before students can use AI tools in instruction, prohibits AI from replacing teachers in any capacity, and critically, bans AI-driven high-stakes decisions (like attendance or discipline determinations) without human oversight. This model prioritizes guardrails and transparency. It’s the approach favored in states wrestling with digital privacy and parental control concerns.

    New York’s A.9190 takes precaution even further, prohibiting most classroom AI use below 9th grade. Tennessee’s HB 2393 goes further still, banning K-5 digital devices entirely. These states are betting that AI exposure should be delayed until students have stronger critical thinking skills and digital literacy foundations.

    Model 2: Mandatory Implementation with Guardrails

    Ohio, Oklahoma, and several others are taking a different tack: they’re requiring AI policy adoption without waiting for federal guidance. Ohio is mandating that all public school districts adopt formal AI policies by July 1, 2026—just three months away. Oklahoma’s Responsible Technology in Schools Act requires all districts to adopt written AI policies before the 2027-28 school year. These mandates push implementation forward while leaving some flexibility in how districts get there.

    The crucial detail: these policies must be written, vetted, and stakeholder-aligned. Vague guidance won’t pass scrutiny. Legislators expect districts to articulate what AI tools they’re using, why, and what safeguards are in place. One-size-fits-all solutions are already being rejected in state policy debates.

    Model 3: Active Curriculum Integration

    Arizona, Hawaii, and Utah are prescribing what AI education should look like. Arizona’s H.B. 4005 requires instruction on the ethical, moral, and educational uses of AI. Hawaii’s H.B. 2466 directs the state to develop a statewide K-12 social media and AI literacy curriculum. Utah’s H.B. 218 establishes a required digital skills course for grades 7-8 that explicitly includes AI literacy. These states are saying: don’t just manage AI, teach it.

    This model aligns with global momentum. The OECD’s PISA assessment—the international gold standard for student learning—will assess AI literacy for the first time in 2029. Students in countries with strong AI literacy curricula today will outpace their peers internationally. The National Science Foundation underscored this priority by awarding $11 million to the Computer Science Teachers Association (CSTA) for AI Professional Development Weeks, signaling that teacher capacity is essential to implementation.

    Why Districts Can’t Wait

    Here’s the crux: districts waiting for state mandates to clarify before acting are already behind. The 25 states with pending legislation enroll tens of millions of K-12 students. Even if your state hasn’t yet moved, the probability that it will is rising rapidly. More critically, waiting for perfect state guidance means missing the window to build teacher capacity, pilot implementation, and identify the gaps before legal requirements arrive.

    Consider the timeline. Ohio’s mandate takes effect in three months. Oklahoma’s requires adoption within 18 months. Districts in those states are scrambling right now to develop policies they didn’t anticipate needing so soon. They’re asking urgent questions: Which AI tools do we approve? What’s our data governance approach? How do we ensure equity? What training do teachers need? How do we communicate with families?

    These are the exact questions that require not a quick fix, but a coherent framework. Rushing to policy compliance without a pedagogical foundation leads to poor implementation—either overly restrictive policies that don’t serve learning, or vague approvals that expose districts to legal and ethical risk.

    The STRIDE Framework: Implementation-Ready and Adaptable

    This is where the STRIDE Framework proves its value. Regardless of which legislative model your state adopts—protective, mandatory with guardrails, or curriculum-integrated—STRIDE provides a research-grounded, equity-centered foundation that works in any configuration.

    The STRIDE Framework’s six domains (Sense, Think, Relate, Innovate, Decide, Empower) and three “C3” meta-competencies (Critical Thinking, Creativity, Collective Judgment) are intentionally designed to bridge both instruction and policy. Teachers use the framework to design AI literacy curriculum and learning experiences. Leaders use it to structure their AI governance, evidence their compliance with state mandates, and communicate value to families.

    South Carolina schools adopting STRIDE can confidently implement the ‘Decide’ domain—teaching students how to exercise ethical judgment about AI—while meeting the state’s parental opt-in and teacher-oversight requirements. Ohio districts can map STRIDE domains to their required written policies, creating a compliance document that’s pedagogically coherent, not just bureaucratically complete. Schools in states emphasizing curriculum (like Hawaii or Arizona) have a ready-made K-12 arc that meets all ethical and educational standards.

    Paired with LIA2, our privacy-first AI platform for schools, the framework extends into implementation. Teachers can pilot STRIDE-aligned lessons using a platform that meets the strictest state data protections. Leaders have evidence of student learning and can demonstrate compliance in audit or legislative review.

    What Happens Next

    The 52 bills in 25 states will pass—some with amendments, some with compromises, but the direction is clear. AI literacy is no longer optional. The question for district leaders is: do you want to design your response, or react to a mandate?

    Districts that act now—beginning with the STRIDE Framework—will have implementation experience, teacher buy-in, and evidence of impact before state compliance deadlines arrive. They’ll move from scrambling to confident implementation. They’ll shift the conversation from ‘How do we comply?’ to ‘How do we scale what’s working?’

    The legislative wave is coming. It’s already here in 25 states. The question isn’t whether your district needs an AI literacy strategy. The question is whether you’ll be ready when the mandate arrives—or whether you’ll have been implementing and learning all along.

    STRIDE Innovation Labs is here to help you build that readiness today. Visit our website to explore the STRIDE Framework, request a demo of LIA2, or schedule a conversation with one of our education specialists. The time to act is now.

    Learn more at: https://stridek12.org/


  • The White House Wants to Protect Kids from AI. Schools Need to Go Further.

    On March 20, 2026, the White House unveiled its National AI Legislative Framework, drawing a clear line in the sand: artificial intelligence platforms used by children must be safer. The administration is calling on Congress to mandate parental oversight of privacy settings, screen time, and content exposure. It wants age-assurance requirements for AI platforms accessed by minors. It insists on limits to data collection and behavioral advertising targeting children. It demands that AI platforms implement guardrails against sexual exploitation and self-harm.

    These are all necessary steps. Parents deserve visibility into what AI systems their children interact with. Children deserve protection from predatory data practices and algorithmic harms. Federal standards are overdue.

    But protection alone is not preparation.

    The White House framework is premised on a fundamentally defensive logic: shield children from AI. But that approach misses something crucial about the world these children will inherit. AI is not going away. By 2030, the tools and workflows that students will use in college, in careers, and in civic life will be saturated with AI. According to Pew Research, 64% of American teenagers already use AI chatbots. Protecting them from that reality through regulations and guardrails is like banning calculators from math class—it addresses the symptom while ignoring the condition.

    What students actually need is something far more powerful: structured AI literacy that enables them to navigate, critically evaluate, and responsibly harness AI systems. They need to understand how these tools work, what biases they carry, what risks they pose, and how to use them as instruments of insight rather than convenience. They need to grapple with the ethics of AI use. They need to see AI not as a black box or a threat, but as a technology they can and should learn to think with—deliberately, carefully, and collectively.

    That is the work of an authentic AI literacy curriculum.

    Why Shielding Isn’t Enough

    The federal push for protection is real. Consider the Take It Down Act, signed into law in May 2025, which criminalizes AI-generated deepfake nudes—a concrete response to a concrete harm.

    None of this is wrong. But consider the limits: regulations constrain bad actors on the margins. They cannot teach students how to be thoughtful, discerning users of technology. A privacy setting does not confer agency. A parental control does not develop critical judgment. And as the framework itself acknowledges, there are significant pressures in Congress to preempt state-level AI laws with federal standards—which raises its own concerns about whose interests federal standards ultimately serve.

    The deeper issue is this: students are already living inside AI systems. They are using AI to write essays, generate images, solve problems, and navigate social relationships. Some teachers ban it. Some schools restrict it. But the technology is not going anywhere, and restriction without education is just a temporary reprieve.

    What if, instead, schools built robust AI literacy into the curriculum—not as a new subject, but as a foundational competency woven across disciplines?

    A Better Path: AI Literacy as Child Protection

    STRIDE Innovation Labs was founded on a simple but radical premise: teaching K-12 students to think critically about AI is a form of child protection. It is the opposite of a shield. It is a toolkit.

    The STRIDE curriculum spans six domains—Sense, Think, Relate, Innovate, Decide, and Empower—the last of which centers on understanding how AI can amplify human agency and imagination.

    These domains are woven together with three meta-competencies: Critical Thinking, Creativity, and Collective Judgment. The curriculum is not about banning AI. It is about teaching students to use it wisely, to question it rigorously, and to recognize that using AI is ultimately a human choice.

    This approach directly addresses the concerns raised by the White House. When students understand how data collection works, they are better equipped to recognize the harms of unregulated surveillance. When they grapple with algorithmic bias, they become more critical consumers of AI-generated content. When they study the ethics of AI use, they develop internalized guardrails—not because a parent set a restriction, but because they have thought through the implications themselves.

    Privacy and Preparation: Both

    Schools do not have to choose between protecting student privacy and teaching AI literacy. They can do both—and they should.

    That is where tools like LIA2 come in. STRIDE’s privacy-first AI platform is purpose-built for schools. It allows students to engage with AI—to experiment, to learn, to create—without harvesting their data, without creating permanent records for behavioral advertising, without feeding their interactions into corporate training datasets. It is proof that you can embrace AI in education while maintaining rigorous privacy standards.

    Combined with a structured AI literacy curriculum, this approach offers something the White House framework alone cannot: it turns protection into preparation. Students get to practice AI literacy in a safe environment. They learn to think critically about the technology. They develop judgment about when and how to use it. And when they leave school, they have both the competence and the caution to navigate a world shaped by AI.

    What Schools Should Do Now

    The White House has issued a call to action. Congress will likely respond. Regulations will tighten. But schools do not have to wait for federal mandates to act. Districts and classroom leaders can begin now:

    • Audit your current AI use. Where are students encountering AI? What are they learning or not learning about how it works?
    • Invest in teacher professional development on AI literacy. If teachers do not understand these tools, they cannot help students navigate them.
    • Adopt a structured curriculum framework that integrates AI literacy across disciplines—not as an add-on, but as a core competency.
    • Choose tools and platforms that prioritize student privacy. Regulation is coming; start now with platforms designed for schools, not for data extraction.
    • Create opportunities for students to engage with AI ethically and experimentally. Literacy requires practice. And it requires judgment. Both are learned by doing.

    The White House is right that children deserve protection from AI harms. But the real work of preparation—of equipping students with the thinking tools to navigate an AI-saturated world—falls to educators. That work is urgent, necessary, and ultimately more powerful than any regulation can be.

    STRIDE Innovation Labs is the premier source of AI literacy curriculum and tools for K-12 education. Learn more at stridek12.org

    Sources

    White House: National AI Legislative Framework

    K-12 Dive: White House urges Congress to protect children on AI platforms

    Daily Signal: White House AI Framework Requires Measures to Protect Kids

    Crowell & Moring: White House Framework Calls for Preempting State Laws

    Pew Research Center: Teen AI chatbot usage data (64% of American teens use AI chatbots)