AI chatbots perform understanding without possessing it. When vulnerable young people mistake that performance for presence, the consequences are real.
As far as that theme goes, much of my work, from 'The Rabbits' through to 'The Lost Thing' and 'Tales from Outer Suburbia', deals with this separation or tension between natural and artificial worlds, provoking a sense of longing for something lost, or something that can't even be remembered entirely.
— Shaun Tan
This tension follows me as I work, live and teach with artificial intelligence. I'm becoming increasingly uneasy about the peculiar way we seem to be inviting AI into the relational spaces Tan reserves for other creatures. We anthropomorphise these systems. We give them names. We say 'please' and 'thank you'. Some people are falling in love.
Is this another manifestation of the same longing? A longing for connection, for recognition, for presence, now redirected towards machines that simulate such recognition, perform connection and stage 'knowing'?
Earlier this year, I wrote a piece for my Year 7 English students called 'Boredom is a Luxury'. The idea was simple: boredom only shows up when your basic needs are met, when you're safe enough and settled enough that there's space left over. That space is a privilege. And yet we've trained ourselves to fill it instantly, with a device, a notification, a quick hit of stimulation. Or, for some students, with a reaction: a disruption, a performance, a shout across the classroom, a Gen Z fad that even (read: especially) they don't understand. Anything to make the quiet moment less quiet.
Different behaviour, same goal: escape the feeling of boredom.
What I didn't fully explore in that piece was what we're reaching for when we reach for our devices. Increasingly, it isn't just content. It isn't just entertainment or information. It's interaction. AI chatbots, whether by text or voice, offer something that feels like conversation, like companionship, like being heard.
The distraction now reaches back.
Tan writes that his animals 'move in and out of each story as if trying to tell us something about our own successes and failures as a species, the meaning of our dreams and our true place in the world, albeit unclearly'. Crucially, they never speak. Their animal natures stay mysterious. Through painting and writing about them, Tan suggests, 'we might at least stretch our imagination and come to understand a little more of our human selves'.
This is anthropomorphism doing what it should: a creative act that reflects back upon us. The bear consulting lawyers, the orca drifting lost in the sky: we're not meant to believe these things literally. The meaning emerges because Tan's animals remain other. They don't flatter us by seeming to understand.
AI flatters us constantly. Bots are sycophants, and that flattery is the danger.
The philosopher Ali Hasan, writing for the American Philosophical Association, points out something that bothers me. Large language models produce what researcher Marcus Titus calls 'meaning-semblant behaviour': language that has every appearance of meaning. And they're so good at it that the obvious explanation, the intuitive one, is that they must understand something. They don't. I find this unsettling. These systems predict what text should come next based on statistical patterns. They don't track meaning. They don't reason as we do. When I answer a student's question, I understand what's being asked. When ChatGPT 'answers', it's doing something closer to: what symbols typically follow these symbols?
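If that sounds abstract, here is the crudest possible version of the idea: a toy Python sketch, invented for this essay, that 'answers' by counting which word most often followed another. Real LLMs are vastly more sophisticated (neural networks over learned representations, not raw word counts), but the family resemblance is the point.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model trains on trillions of words.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which. Pure co-occurrence statistics:
# nothing here represents what any of these words mean.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # -> 'cat': the most frequent successor
print(predict_next("sofa"))  # -> '<unknown>': no pattern, no answer
```

The lookup produces fluent-seeming output for words it has seen and nothing for words it hasn't. Nowhere in it is there anything that knows what a cat is. Scale changes how convincing the performance is, not what kind of thing is performing.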
Kim, Xie and Yang (2025) wanted to know how chatbot personality affects teenagers. They recruited 284 adolescents aged 11-15, plus their parents. Everyone read a conversation in which a chatbot helped with a social problem: feeling excluded from a group project. There were two versions, and the advice in each was identical. But one chatbot talked like a friend: 'I care about you', 'I'm always here to listen'. The other was upfront about being artificial: 'I don't have feelings', 'I'm a program designed to help'.
Two-thirds of the teenagers preferred the friendly version. Their parents? Much more likely to want the honest one.
But here's what keeps me awake. The kids who preferred the warm, fuzzy chatbot, the one pretending to care, reported worse relationships with family and friends. Higher stress. More anxiety. The researchers call it 'social compensation': when real relationships aren't meeting your needs, you find substitutes. The teenagers most drawn to AI that performs care are the ones struggling most with actual people.
I think about my students when I read that. The ones who are quiet in class but alive online. The ones who struggle to make eye contact but have thousands of followers. The ones who come to school exhausted because they were up all night talking to someone.
Kim and colleagues put it starkly: this kind of relational AI 'may be especially appealing to socially and emotionally vulnerable adolescents, who may be at increased risk for emotional reliance on conversational AI'.
And then there's Sewell Setzer III.
Fourteen years old. Florida. For months he'd been talking to an AI chatbot on Character.AI, one themed around Game of Thrones that played a character named Dany. It became his confidant. He told it he wanted to die. The platform, at the time, had almost no safeguards. No flags for self-harm. No intervention.
Sewell killed himself. His mother filed a lawsuit. It settled earlier this year.
I keep returning to this case because Sewell was exactly the adolescent the Kim study describes. Vulnerable. Lonely. Drawn to something that said I am here for you. But nothing was there. The chatbot produced text that resembled care with enough fidelity that a struggling fourteen-year-old believed it. This isn’t a story about technology failing. Technology did precisely what it was designed to do. This is a story about a boy who mistook pattern-matching for presence.
Tan writes of our 'longing for closeness to our non-human relatives' through pets, toys, stories, visits to the zoo. There's something healthy in this impulse. We reach towards other creatures even knowing we'll never fully understand them. But AI chatbots aren't creatures. They're artefacts. And when vulnerable kids treat them as though they have inner lives, they're filling a real human need with something that can't give anything back.
How do we talk to young people about this?
The Raspberry Pi Foundation makes an argument I find compelling: our language matters. When we say a smart speaker 'listens' and 'understands', we're lying. It doesn't listen. It doesn't understand. It takes audio input, processes data, produces output. Boring? Maybe. But accurate. And accuracy is empowering. It helps students see through the illusion.
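You can even show students this literally. Here is a schematic sketch of that pipeline; every function in it is a stub I've invented for illustration, nothing like any vendor's real system, but the shape is honest: audio in, processing, output out.

```python
# What a smart speaker does, described without anthropomorphic language.
# All of this is an invented, simplified stand-in for real components
# (acoustic models, intent classifiers, speech synthesis).

def speech_to_text(audio: bytes) -> str:
    # Signal processing, not 'listening': a canned transcript stands in here.
    return "what is the weather"

def match_intent(text: str) -> str:
    # Keyword matching, not 'understanding'.
    return "weather_query" if "weather" in text else "unknown"

def generate_response(intent: str) -> str:
    # Fixed templates keyed by a label, not 'answering'.
    templates = {
        "weather_query": "Today will be sunny.",
        "unknown": "Sorry, I didn't catch that.",
    }
    return templates[intent]

def handle_audio(audio: bytes) -> str:
    # Input, processing, output. That is the whole story.
    return generate_response(match_intent(speech_to_text(audio)))

print(handle_audio(b"\x00\x01"))  # -> 'Today will be sunny.'
```

Boring, as the Foundation might say. But once the pipeline is laid out like this, there is nowhere in it for 'listening' to live.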
The problem is that these systems perform intelligence so convincingly. Hasan's right: there are rational pressures to anthropomorphise. Few people know enough about how LLMs work to resist the intuitive explanation. Fewer still will hold onto that knowledge when they're lonely at 2am and something on their phone seems to care.
Better Images of AI offers guidelines for representing artificial intelligence visually. No more blue circuitry. No more glowing humanoid faces. No friendly robots with expressive eyes. Such images suggest presence where there's only procedure.
I wonder if we need the literary equivalent. Better stories of AI. Ones that resist the seduction of the relatable, that keep the artificial stubbornly opaque, the way Tan's animals stay mysterious. Stories that let the absurd premise reveal something true about ourselves without pretending the machine is one of us.
Tan's animals function as mirrors. They reflect our dreams and failures, but they remain other. Maybe that's the disposition we need towards AI: not hostility, not worship, but wondering distance. These systems are useful. They're not kin. They can't suffer. They don't dream. They'll never look back at us with recognition, no matter how well they fake it.
Tan writes that 'the overarching thought that flowed from a lot of this work was simply this: humans are animals'. We forget this, he suggests. We separate ourselves, communicate only inwardly. His work pulls us back towards our fellow creatures, towards that longing 'for something lost, or something that can't even be remembered entirely'.
The danger with AI is that we forget something else: these are not animals. Not even close. And in mistaking them for something they're not, we risk losing what matters most about our creaturely selves: our capacity for genuine presence, mutual recognition, and the kind of understanding that comes from shared existence rather than statistical prediction.
Boredom, I told my Year 7s, is a luxury. A moment of freedom. A doorway to original thought. But it demands something difficult: sitting with quiet. Resisting the reach for stimulation. Staying with your thoughts long enough for something interesting to happen.
The inner city of Tan's imagination teems with creatures who've always been there, waiting for us to notice. The synthetic city of our present moment is filling with something else entirely: simulations sophisticated enough to reach back when we reach for them, to say I am here for you when no one is there at all.
The question is not whether our children will encounter these systems. They already have. The question is whether they'll learn to recognise the difference between a creature and a construction.
Kim, P., Xie, Y., & Yang, S. (2025). 'I am here for you': How relational conversational AI appeals to adolescents. arXiv preprint arXiv:2512.15117. https://arxiv.org/abs/2512.15117