The computer that learned from your drawings
For families with children aged 6–9. Activities are screen-optional and designed to read aloud together.
Some computers that make pictures learned how to do it by secretly looking at millions of drawings — including ones made by children just like you.
AI picture-making tools were taught by looking at huge collections of images from the internet — photos, paintings, and children's drawings that people had shared online. Nobody asked the artists if that was okay. This week, lots of people found out and said: "That's not fair — you should have asked." It's a bit like someone copying your homework without saying please, then handing in something that looks a little like it.
Draw a picture of something you love — your pet, your favourite food, your bedroom. When you're done, ask each other: if a computer could look at this drawing to help it make pictures, what would you want it to say to you first? Take turns coming up with what the computer should say.
If something you made helped teach someone else a new skill — without you knowing — did they take something from you?
Who owns what an AI learned from you
For families with children aged 9–12. Goes one layer deeper — how AI actually works, and where the lines are.
Popular AI image generators — the tools that can draw almost anything you describe — were built by training on billions of images scraped from the internet, including artwork made by children that was shared on family websites and creative platforms. No one asked for permission.
Here's how AI image tools work: they study enormous collections of images to learn patterns — colours, shapes, styles, how shadows fall. This collection of images is called training data. The problem is that a lot of that training data was gathered by automatically copying images from the web — including artwork by real people who never agreed to be part of it. When artists (and parents of young artists) found out, many fought back: some removed their work from sites, others demanded that the companies retrain their AI without their work. The companies mostly said the use was legal. But legal and fair are not the same thing.
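For families who like to tinker, here is a toy sketch in Python of what "learning a pattern from a collection of images" can mean. It is nothing like a real AI — the drawings are made-up grids of colour values, and the only "pattern" it learns is the average colour — but the shape of the idea is the same: scan everything you are given, keep what you learned, never ask who made it.

```python
# Toy illustration, NOT a real AI. Each "drawing" is a grid of
# (red, green, blue) pixel values from 0 to 255.

def learn_average_colour(drawings):
    """Scan every pixel of every drawing and learn one pattern:
    the average colour of the whole collection. Real image models
    learn millions of patterns this way, at enormous scale."""
    total = [0, 0, 0]
    count = 0
    for drawing in drawings:
        for row in drawing:
            for (r, g, b) in row:
                total[0] += r
                total[1] += g
                total[2] += b
                count += 1
    return tuple(t // count for t in total)

# Two tiny 2x2 "drawings". Notice the code never asks their
# makers for permission -- it just reads whatever it is handed.
drawing_a = [[(255, 0, 0), (255, 0, 0)],
             [(0, 0, 255), (0, 0, 255)]]
drawing_b = [[(0, 255, 0), (0, 255, 0)],
             [(0, 255, 0), (0, 255, 0)]]

print(learn_average_colour([drawing_a, drawing_b]))  # prints (63, 127, 63)
```

The point of the toy: the "learning" step only ever sees pixel values, not permissions. Any rule about consent has to be added before the data reaches the learner.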
Try this thought experiment together: You create a playlist of your 50 favourite songs. A music app quietly uses your playlist to train its AI on "what good music sounds like." The app gets better. You get nothing. Is that okay? What if the app became worth a billion pounds? Change one detail at a time and see where your answer changes.
If a company builds something valuable using your creativity, do they owe you something — even if what they made doesn't look anything like what you made?
Trained on you, without asking
For families with teenagers aged 13–16. Real ethical stakes, contested arguments, no easy answers — by design.
The AI systems generating images, music, and text were built on training data scraped from the internet without consent — including the work of artists, writers, and children who had no idea their creations were being used to build commercially valuable tools.
Generative AI is only as good as what it was trained on. The companies building these systems needed massive datasets — billions of images, texts, and audio files. To get them, they crawled the internet and took what they found. The companies argue this is legal: in the United States they claim "fair use", and some other countries' copyright laws allow copying for research or text-and-data mining, though courts are still testing how those rules apply to commercial AI training. But "publicly available" and "consented to be used for commercial AI training" are very different things. A family sharing a child's drawing on a blog did not agree to that drawing becoming part of a dataset that generates revenue for a multibillion-pound company. The artists who pushed back framed it as a property rights issue. The companies framed it as a research and innovation issue. Both framings are self-serving. The harder question is: what should the rule be going forward, and who gets to decide?
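For the technically curious, the consent question can be made concrete with a small sketch. Everything here is invented for illustration — the records and the creator_opted_in field are hypothetical, not any real company's data format — but it shows what a "consent-first" filter between scraping and training might look like:

```python
# Hypothetical scraped records; the field names are invented
# for illustration, not taken from any real system.
scraped_images = [
    {"title": "child's drawing",  "creator_opted_in": False},
    {"title": "stock photo",      "creator_opted_in": True},
    {"title": "family blog art",  "creator_opted_in": False},
]

def consented_only(images):
    """Keep an image for training only if its creator said yes."""
    return [img for img in images if img["creator_opted_in"]]

training_set = consented_only(scraped_images)
print(len(training_set))  # prints 1 -- far smaller than what was scraped
```

The output is the crux of the debate: a consent-first training set is dramatically smaller than what scraping yields, which is exactly why companies resist the rule and why artists demand it.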
Pick a side and defend it — then swap. One of you argues that AI companies should be legally required to get explicit consent before using creative work in training data. The other argues that requiring consent would make building powerful AI impossible and would entrench the dominance of companies that already have the largest private datasets. After five minutes each, ask: what would a genuinely fair system look like?
When a new technology creates something valuable by combining things that already existed — who owns what was created, and who should benefit from it?