There is a version of AI education that goes like this: parents are confused about AI. Children will grow up fluent in it. Therefore, children should be taught about AI by professionals, and parents should be reassured that the gap is being managed.
We think this is wrong — not in a minor way, but structurally. And the reason it is wrong tells you something important about what AI literacy actually is and why it cannot be outsourced.
The problem with the handoff model
Most educational content — courses, programmes, explainer videos, apps for children — works on what we might call the handoff model. The child is handed to the educational experience. Something is put into them. They are handed back, better informed.
For many subjects, this works fine. Phonics is phonics. Multiplication tables are multiplication tables. The knowledge is settled, the methods are established, and there is no fundamental reason for a parent to learn them at the same time as the child.
AI is not like that.
AI raises questions that are not primarily technical. They are ethical, social, and political. Who owns the data an AI was trained on? Should a machine make decisions that affect your life without explaining why? If an AI is accurate on average but wrong for specific groups of people, is that acceptable? These questions do not have correct answers. They have better and worse arguments. They require the ability to reason carefully under uncertainty, consider multiple perspectives, and remain comfortable with questions that stay open.
You cannot learn that kind of thinking alone. You learn it by watching other people think — including people you trust, who disagree with you, and who are willing to say "I don't know."
A parent is the most important of those people. Not because they know more about AI than a teacher or a programme does, but because they are present in all the places where thinking actually happens: at dinner, in the car, on a walk.
What parents bring that programmes don't
There is a persistent myth in technology education that parents are liabilities. They don't understand the tech. They have outdated views. They might accidentally teach their children the wrong things.
This misses something important about what parents actually contribute to their children's understanding of difficult things.
Parents carry institutional memory. They have watched previous technologies arrive with enormous promise and produce mixed results. They remember when the internet was going to fix education, when social media was going to connect everyone benevolently, when big data was going to make everything more efficient and fair. They have seen the distance between what technology companies promise and what actually happens. That scepticism — earned, not generic — is not a liability. It is one of the most valuable things they can pass on.
Parents also carry ethical intuition grounded in lived experience. A thirty-five-year-old parent who has navigated a workplace, raised a child, and watched institutions fail and recover has something a twelve-year-old does not: a working understanding of how power operates in practice, and what it feels like when a system gets you wrong. That is not abstract knowledge. It is the kind of moral knowledge that makes ethical reasoning about AI real rather than theoretical.
The modelling argument
There is a second argument for co-learning that is less about content and more about process — and we think it is the more important one.
Children learn how to think by watching the people around them think. Not by being told how to think, but by observing — seeing someone encounter a difficult question, pause, consider multiple angles, express uncertainty, update their view when they hear a good argument. This is how intellectual habits form.
When a parent encounters an AI question they do not know the answer to, and they say — genuinely, not performatively — "I'm not sure about that, let me think," they are modelling something more valuable than any correct answer they could give. They are demonstrating that uncertainty is not shameful, that it is the normal condition of thoughtful people encountering genuinely hard questions.
When AI gives a wrong answer and a parent says "wait, is that right? How would we check?" they are modelling healthy scepticism of AI outputs — a skill that will be practically useful to their child for their entire life.
This kind of modelling can only happen in co-learning. It cannot happen when the child is handed to a programme and the parent waits outside.
The generational asymmetry argument
There is a counter-argument worth taking seriously. Children today use AI tools routinely. They are building intuitions about how these systems behave, what they are good for, and where they fail — through direct experience, every day. Many children genuinely do know more about specific AI tools than their parents. Is it not condescending to the child to frame the parent as a co-learner rather than the student?
No — and here is why.
Knowing how to use a tool and understanding what the tool is doing are different things. A child who uses a chatbot every day for homework may be skilled at prompting it, good at identifying when its outputs are plausible, and confident in its interface. But they may have very little framework for asking: who owns the data this was trained on? What values were built into the responses it gives? Who decided it should refuse certain questions and answer others?
These are not technical questions. They do not require knowing how transformers work or what a parameter is. They require the kind of structural, political, and ethical thinking that develops over time — and that parents, with their longer view and their direct experience of how institutions actually operate, are often better positioned to model.
The child knows how to use the tool. The parent knows what questions to ask about who built it and why. Neither is complete without the other.
What co-learning actually looks like
It is worth being concrete about this, because "co-learning" can sound like a structured activity that requires preparation and effort. It requires neither.
Co-learning about AI looks like: a parent reading something about AI that week and mentioning it at dinner. A child using a chatbot for something, noticing that it said something strange, and describing it. A question neither of them can answer, which they both think about for a few days. A news story that one of them reads and brings to the other. An activity that produces an unexpected result, and a conversation about why.
It is not a lesson. It is a habit of attention — the habit of noticing when AI is involved in something, asking what it is doing and for whom, and being willing to sit with the question rather than immediately closing it.
That habit is more valuable than any specific piece of AI knowledge — because it is transferable to whatever AI looks like in five years, which will be different from what it looks like now.
The longer view
We built Curio AI because we believe the most powerful unit of AI education is not the child alone, or the parent alone, but the two of them together — a parent's lived experience and earned scepticism alongside a child's openness and willingness to engage with something new on its own terms.
Neither is complete without the other. That is not a weakness — it is the whole model.
Five minutes a week is not very much. But five minutes of genuinely thinking together, regularly, over three years — that is how a family builds a shared language for something that will shape the rest of their lives.
Five minutes. Together. Every Monday.
Curio AI is built for two people: a parent and a child, figuring it out alongside each other. One real AI story, one activity, one dinner table question — every week. Your first 30 days are free.
See plans & pricing