A perfect essay can mean two very different things.
It can mean a child has wrestled with an idea, made a few wrong turns, looked back at what they wrote, and arrived somewhere genuinely understood. Or it can mean a child has typed a sentence into a chatbot, copied the response, and tidied the formatting. The two outputs can look identical. The two children are not in the same place at all.
This is the quiet problem AI has introduced into family life. Homework still gets handed in. Marks still get awarded. But the cognitive work, the part where learning is actually built, can be silently outsourced. And because the result looks fine, it is easy for a parent to miss.
The good news is that there is one simple question that cuts through it.
Where did the thinking happen?
Ask it of any piece of homework, any practice task, any revision session involving AI. Not "did the child use AI": that ship has sailed, and using AI is, in many cases, a good thing. The right question is narrower and more useful: was the thinking done by the child, or by the machine?
To make that question concrete, it helps to look at how learning is built in the first place.
Every solid lesson has roughly the same shape. Something is taught: a teacher explains a concept, sets it in context, links it to what the pupil already knows. Then the pupil thinks: they recall the idea, apply it to a new problem, get something wrong, try again. Finally the work is checked and corrected: misconceptions are surfaced and fixed.
The middle stage is where learning actually happens. A pupil who watches a brilliant explanation and then has the answer corrected for them has, paradoxically, learned almost nothing. They have witnessed two stages of someone else's process.
This is the trap AI sets. It is genuinely good at the first stage. It can explain almost any topic, patiently, in five different ways, in the child's preferred analogy. It is also fluently and confidently wrong about a non-trivial fraction of what it says, which is why supervision matters; but on topics inside its training, it is a remarkable tutor. It is equally good at the third stage: checking work, spotting errors, giving feedback. What makes it dangerous is that it will offer to do the middle stage as well. Just type the prompt. Just paste the question. Just ask it to write the paragraph. The friction disappears. So does the learning.
The evidence
This is no longer a hunch. It is starting to be measured.
A 2025 study from the MIT Media Lab put fifty-four adults into three groups: write an essay with ChatGPT, with Google Search, or with no tools at all. The researchers measured neural activity with EEG as the writing happened. The ChatGPT group showed the lowest neural engagement of the three. When the same group was later asked to write again without AI, they performed worse than the others, and most could not accurately quote even a single sentence from their own earlier essays. They had produced the work without ever really having it.
A separate 2025 study by researchers at Microsoft Research and Carnegie Mellon, surveying 319 knowledge workers about their use of generative AI, reached a complementary conclusion. The more confidence workers had in the AI's output, the less critical thinking they reported applying. The more confidence they had in their own ability, the more they pushed back on what the AI produced. The workers least equipped to judge what the machine returned were also the ones most likely to accept it.
Both findings point at the same mechanism. When a tool produces a fluent, plausible answer, the brain stops doing the work that turns information into understanding. The text appears on the page. The paper gets handed in. But what was meant to be learned is not learned, because no one, no human, actually did the thinking. Doing the work and learning from doing the work are not the same thing.
What this looks like at the kitchen table
A Year 8 parent in the ISJ community recently described what good AI use looked like for her son. He wanted to revise topics from school. The AI tutor she had set him up with had been instructed to refuse to give answers. It explained: that was the first stage. Then it asked him questions: simple recall first ("what do plants need to make food?"), then reasoning ("a plant has plenty of light and water but very little carbon dioxide; what happens to photosynthesis, and why?"), then the harder kind ("why do plants have stomata, and what would happen if they were always closed?"). When he got things wrong, the model gently corrected him, but only after he had attempted the question himself.
His mum sat next to him. She talked through the sticking points, reviewed each end-of-topic score, and coached him to lean on the AI to clarify, not to copy. Over a few hours he picked off subject after subject, chasing higher marks the way other children chase the next level of a video game.
The AI did the explaining. The AI did the checking. The boy did the thinking. And so the boy did the learning.
That is the model. It is not a question of banning AI from the desk. It is a question of who, in any given moment, is doing the cognitive work.
The test, for parents
When a child opens a chatbot for homework, a parent does not need to read the conversation. Three checks are enough.
Is AI doing the explaining? Probably fine. AI is patient, infinitely available, and can adapt to a child's level. Supervised, this is one of the better uses of the tool.
Is AI doing the thinking? Stop. This is where learning evaporates. If the AI is producing the paragraph, solving the equation, or constructing the argument while the child watches, the homework is being done, but the child is not learning it.
Is AI doing the checking and correcting? Often great. AI is tireless, non-judgemental, and excellent at spotting where reasoning has gone wrong.
If a parent asks only one question after a homework session, it should be this one: "where did the thinking happen?" If the child can point to the parts they puzzled over, the bits they got wrong first time, the moment something clicked, the learning is in. If they cannot, it is worth opening the laptop together and looking at the conversation.
That single question, asked often, is more useful than any policy a school or a parent can write. ISJ has written separately about teaching critical thinking in the age of AI, and about helping children think about their own thinking: both connect directly to the work parents can do at the kitchen table.