For families

5 questions kids ask about AI — and honest answers

23 March 2026 · 7 min read

Children ask better questions about AI than adults do. They have not yet learned to assume they should already understand it, so they ask the things adults have stopped noticing. The five questions below are ones we have heard many times — from children across all three of our age groups. The answers are honest, not reassuring.

For each question, there is a version for younger children (roughly 6–9) and one for older children (roughly 9–14). The difference is depth, not honesty. We don't believe in lying to children about how AI works — we just believe in using the right level of detail for the age.

Question 1

"Is AI alive?"

For younger children (ages 6–9)

Not in the way you and I are alive. It doesn't feel hungry or tired. It can't decide it doesn't want to answer you. It just does exactly what it was told to do — incredibly fast, with lots of patterns it practised on. It is a bit like a very complicated recipe: the recipe is not alive, but if you follow it, something happens.

For older children (ages 9–14)

No — but it can act in ways that feel alive, which is interesting and worth thinking about. AI can sound like it has opinions, hesitate before answering, and appear to change its mind. These are not signs of life — they are the output of patterns learned from enormous amounts of human text. Whether something that behaves as if it is alive counts as alive is actually a real philosophical question. Most scientists say AI is not conscious — but they admit we do not fully understand what consciousness is either.

Question 2

"Does AI know everything?"

For younger children (ages 6–9)

No — and it often gets things wrong without knowing it got them wrong. This is one of the tricky things about it. It is very confident even when it is making something up. It learnt from a huge pile of things humans wrote, so it knows a lot of what humans have written — but it can also get facts muddled or say something that sounds right but isn't.

For older children (ages 9–14)

It knows a great deal — but "know" is doing a lot of work in that sentence. AI has processed enormous amounts of text, images, and data. But it does not verify facts the way you would check a source. It generates plausible-sounding answers based on patterns. This means it can be wrong, not know it is wrong, and sound completely certain while being wrong. AI researchers call this "hallucinating." That is worth remembering any time an AI gives you specific information, especially dates, names, or statistics.

Question 3

"Could AI take my job when I grow up?"

For younger children (ages 6–9)

Maybe some of it — but probably not all of it. There are things AI can do very well: writing, drawing, answering questions. But there are things humans do that machines still struggle with: noticing when something feels wrong, really listening to someone, doing something for the first time with no instructions. And the jobs that exist when you grow up will probably be quite different from the jobs that exist now, just as today's jobs are different from the ones that existed when your grandparents were young.

For older children (ages 9–14)

Honestly, possibly parts of it — we do not know yet. AI is getting better at tasks that involve recognising patterns, generating content, and doing routine analysis. Some jobs will change significantly. New ones will emerge that do not exist yet. The research on this is genuinely uncertain — economists disagree about whether automation creates more jobs than it displaces, or fewer. What seems more likely than not is that the ability to think critically about AI, work alongside it, and question its outputs will be more valuable than the ability to do tasks AI can already replicate.

Question 4

"Can AI lie to me?"

For younger children (ages 6–9)

It can say things that are not true — but probably not on purpose, the way a person might lie. Lying on purpose means knowing the truth and choosing to say something different. AI doesn't exactly "know" things in that way. It can say something wrong, and it might sound very sure about it. That is different from lying, but it means you still need to check important things, rather than just trusting what it says.

For older children (ages 9–14)

This is a genuinely interesting question. AI systems can say things that are false. Whether they "lie" depends on whether lying requires intention — and AI doesn't have intentions in the way people do. But: people who build AI systems do sometimes have intentions. They can design systems that withhold information, frame things in misleading ways, or optimise for engagement rather than accuracy. So the honest answer is: AI itself is not trying to deceive you, but it can be built or used in ways that result in you being misled. That distinction matters.

Question 5

"Who is in charge of AI?"

For younger children (ages 6–9)

Right now, mostly the companies that build it. They decide what it can and cannot do, what it learns from, and who gets to use it. Some governments are starting to make rules about it, just like there are rules about food safety or car safety. But the rules are still being worked out — which is part of why people are talking about it so much.

For older children (ages 9–14)

Mainly the companies that build it, for now — and many people think that is a problem. A handful of very large technology companies have enormous influence over how AI develops, who gets access to it, and what values get built into it. Some governments are working on regulation. The EU has passed an AI Act. The UK is developing its own rules. But "in charge" is complicated — because AI is global, and rules in one country don't automatically apply in another. The honest answer is: it's still being worked out, and the decisions being made now will shape things for a long time.

The most valuable thing about all five of these questions is that they do not have tidy answers. Children who grow up comfortable with uncertainty — who can hold an open question without needing to close it — will be better equipped for a world where AI keeps changing faster than anyone can keep up with.

That comfort with uncertainty is not something you need to teach explicitly. It is something you model, by saying "I'm not sure" and then thinking through it alongside them.

One question, every Monday

Curio AI sends a Dinner Table Question every week — designed so both parent and child have something genuine to say about it. No correct answer included. Your first 30 days are free.

See plans & pricing