How this work began
For nearly a decade, I have watched children inside a science museum.
I run Experimentorium, an interactive science museum in Tbilisi. One exhibit invites children to place small balls on a rotating disk and watch how they move. At first, many of them expect an adult to explain the rule. Where should I put it? What is the right place? What is supposed to happen? They have learned, somewhere along the way, that the world arrives pre-answered, and that their job is to receive what is given.
But if no one gives the answer too quickly, something changes. The child moves the ball. Watches. Tries a different point. Notices a pattern. Gets it wrong. Tries again. They begin to ask their own questions, not because they were told to, but because the disk is doing something they did not expect and they want to know why.
I have watched this moment happen thousands of times. It is the moment a child becomes the author of their own thinking. It is also the moment that AI, used carelessly, can quietly take away.
The Human Intelligence Method is what I have learned about how to protect that moment — not by keeping AI out of education, but by designing learning that keeps the child awake while they use it.
What I am defending
I am not defending the essay. I am not defending the test. I am not even defending knowledge in the narrow sense in which schools have used the word.
I am defending a child's own intelligence — as a set of seven living habits, grown one situation at a time: to observe, to ask, to attempt, to verify, to doubt, to choose, and to take responsibility for a conclusion.
This work sits at the intersection of education, museums, parenting, and emerging technology — and the human development that runs underneath all of them.
How I write
I write from years of watching real children in a real museum, from conversations with parents who are quietly afraid for their children, and from teachers who feel the ground shifting under what used to be a stable profession.
I write as someone who uses AI every day, in real work, and knows both what it can do and what it cannot. I am not against AI. I do not want to protect children from AI. I want to protect the part of them that must remain awake while they use it.