What an Eight-Year-Old Can Teach Higher Ed About AI Literacy

Nearly half of Gen Alpha is already using AI, according to national parent surveys from the Walton Family Foundation. While adults in higher ed are still debating policies, governance, and “what this means for learning,” kids are quietly building worlds, relationships, and reasoning frameworks that will define the next decade of student expectations.

If you want to understand the future of digital learning, advising, and student experience, don’t start with the research papers written for us; start with the ones written about them. And even then, the clearest window into Gen Alpha’s AI fluency isn’t in the data.

It’s in how an imaginative eight-year-old named Cora Budd, my creative AI collaborator for nearly two years, explains without hesitation how she manages an AI that occasionally turns a rainbow into a character with sunglasses, sneakers, and rabbit ears.

Listen to Part 1 of my conversation with Cora on my Enrollify podcast, Higher Ed Pulse.

Gen Alpha is not waiting for us to define how AI should work. They’re already defining it for themselves.


1. Gen Alpha Treats AI as an Assistant, Not a Threat

Fast Company recently described Gen Alpha as the first cohort to grow up with AI as “ambient infrastructure.” It’s a background condition of their world, not simply another tool.

When I asked Cora who’s in charge, she shrugged:

“I feel like I’m the boss of AI.”

That instinct matters. While many campus leaders still frame AI as a disruptive force requiring careful containment, Gen Alpha frames it as a responsive system they’re meant to direct.

This approach flips the adult narrative:

  • AI is not frightening; it’s normal.

  • AI is not autonomous; it’s customizable.

  • AI is not in control; they are.

For institutions, this raises a challenge: Are we designing AI-driven learning and support systems for students who need reassurance or students who expect agency?

The incoming generation won’t wait for institutional permission to use AI the way they want to use it.

2. Errors Don’t Undermine Trust, But They Must Be Fixable

One of the strongest signals from recent research, including a 2024 Google + UC Berkeley study exploring children’s mental models of AI, is that kids see errors not as failures but as expected behavior. They adjust. They iterate. They move on.

Cora explained it this way:

“All the little AI oopsies, as I like to call them, I just laugh it off later. And sometimes if it's really wrong, I ask it to redo it.”

She has already built:

  • A classification system for types of errors

  • A remediation strategy

  • An emotional boundary

  • An iterative workflow

Higher ed, by contrast, tends to design AI systems with a defensive posture: avoid mistakes, minimize risk, distribute disclaimers.

Gen Alpha is asking for an adaptive system: Give me tools that learn with me, not tools that pretend they can’t fail.

3. Their AI Literacy Is Already More Advanced Than We Assume

The dominant higher-ed narrative is: “We need to teach students how to verify AI.” But Gen Alpha is already practicing verification, unprompted.

As Cora put it:

“I always fact check AI no matter what… If it doesn’t have the right topic, I check it on Microsoft Edge.”

Notably:

  • She distinguishes between creative and factual tasks.

  • She knows when AI’s “main idea” drifts.

  • She rejects outputs that conflict with her existing knowledge.

  • She cross-checks before accepting any academic information.

This aligns directly with findings in the Google/Berkeley research: children demonstrate early reasoning about AI’s reliability and rapidly develop heuristics for detecting incorrect outputs.

The strategic implication: Incoming learners will not be empty vessels. They will bring years of AI habits (good and bad) with them.

Institutions must design AI instruction and policy with this starting point in mind.

4. They Expect Transparency, Even Before We Teach It

When I asked whether AI should have to show its work the way she does in math class, Cora didn’t hesitate:

“I think it should.”

And she’s already operating that way, clicking through the sources AI surfaces:

“I actually clicked through one.”

She expects evidence, sources, and the ability to verify.

The Walton Family Foundation survey echoes this: children who use AI more often become more critical of the answers they receive, not less.

This is an important trend for universities designing AI-enabled learning experiences: Opaque systems won’t meet the expectations of a generation raised to double-check.

5. Their Creativity Is Not Escapism

The Google/Berkeley study found that when kids use visual AI, they rarely ask for familiar, real-world objects. They ask for things that don’t exist. They test boundaries and explore possibility space.

Cora is a perfect example:

“When I use AI, I ask it for the impossible, what real life can't happen.”

Early-stage design thinking? Absolutely. In the past two years, we’ve built a fictional universe based on my dog Blossom, complete with:

  • Characters (based on both real and fictitious pets) with distinct arcs

  • Comics, quizzes, and storylines

  • A podcast

  • Merch concepts

Gen Alpha isn’t using AI to escape reality; they’re using it to extend their creativity.

When they arrive on campus, the question will be: How do we give them tools worthy of the creative capacity they've already developed?

6. And Despite All This, They Still Want Humans at the Center

One of the most optimistic findings in the research (and in my conversations with Cora) is that kids trust AI, but they do not want it to replace teachers, friends, or mentors.

When I asked Cora if she’d want an AI teacher, she shut it down immediately:

“No. They should just fire the AI teacher and get a real one.”

This is also consistent with emerging data: AI can support learning, but it cannot replace the social, emotional, and relational components that kids (and adults) rely on.

Higher ed should take note: The future is not AI instead of humans. It’s AI with humans.

The Strategic Question for Leaders

Gen Alpha is already setting the norms for what AI should be.

Are we building AI-enabled learning environments that match the sophistication of the students who will use them?

The kids making universes out of thin air today will be arriving on your campus soon enough with expectations shaped long before your policies are drafted.
