‘Humans are storytellers’: the power of stories in language development of children and AI models
What do ten-year-old children and chatbots have in common? PhD researcher Bram van Dijk studied language development in both children and AI language models. ‘It’s actually quite practical that we attribute human traits to a chatbot.’
‘Although children and language models learn language differently, there are striking similarities,’ says Van Dijk, a researcher at LIACS. ‘Both recognise patterns in language. Children learn from their parents’ language use, while language models are trained on online texts. Both children and language models pick up on more complex structures from this language input: it’s not just about predicting the next word but also about understanding relationships between words at a higher level.’
Van Dijk’s research initially focused on children’s stories. He wanted to find out whether the way children describe characters reveals something about their capacity for empathy. ‘The more a story provides insight into a character’s thoughts and feelings, the more a child demonstrates empathy “in action”. A more complex and varied vocabulary is often linked to more developed characters who explicitly think or feel things.’ Stories challenge children not only to use their language skills to the fullest but also to see the world through someone else’s eyes.
‘We wanted to see whether language models can handle the elements that make a story meaningful.’
Talking to a chatbot like it’s a real person
In 2022, everything changed with the arrival of ChatGPT. ‘It was one of the first language models that allowed us to communicate fluently and naturally, as though we were talking to a real person,’ Van Dijk explains. One of the questions raised by international media and researchers was: ‘If these models are so adept at language, could they also have developed other skills, such as empathy?’
Van Dijk decided to compare how children and language models understand stories. Together with his colleagues at LIACS, he administered psychological tests to both groups. These tests assessed whether they could grasp the inner states of story characters — such as their thoughts, desires, and emotions. ‘We wanted to see whether language models can handle the elements that make a story meaningful.’
Humans versus machines
The results were remarkable: only the largest language models often outperformed the children. ‘This is partly because, in any conversation, you aim to be as constructive as possible. That’s true for both humans and machines. To communicate effectively, you need to anticipate what the other person means,’ Van Dijk explains. The largest language models, like ChatGPT, are optimised for this.
The next question is: do these models genuinely understand what people want? Van Dijk refers to the Chinese Room, a thought experiment from 1980 in which a person inside a room manipulates Chinese symbols according to a large rulebook to produce sensible replies, without understanding their meaning. ‘To us, the output seems logical, but the person in the room has no knowledge of what the messages mean and no intentions behind them, so the experiment purports to show that the person doesn’t understand anything,’ he explains.
‘However, we humans only have our own experience of knowing and feeling, and it’s hard to imagine that others don’t experience this in the same way. For this reason — and because it’s incredibly practical — we attribute intentions and meanings to others, even though we can never fully know them. We interact with a language model as if it thinks like a human. And it doesn’t necessarily matter whether it truly does — or whether it’s a person or a machine.’ According to Van Dijk, this is the best way to work with chatbots.
‘That little word “was” signals that their imaginary world is a different reality.’
Stories remain essential
Van Dijk believes stories are indispensable to humans. ‘At school, at home, while playing with friends or in a video game — stories are everywhere. Even when we look back on our lives, we do so in narrative form. Yet, stories are receiving less attention, both in education and beyond.’ Van Dijk attributes this decline partly to the fact that the positive effects of stories are not easily measurable, making them less valued in an era of quantifiable learning objectives.
Why his fascination with storytelling? ‘With language, we can signal that we’re stepping out of the “here and now” into another, safe world. Stories embody this perfectly.’ Children engaging in pretend play, for instance, might say: I was the police officer, and you were the thief. No one teaches them to use the past tense in such contexts, yet they do. ‘That little word “was” signals that their imaginary world is a different reality.’
‘I don’t feel like an outsider’
Van Dijk has a background in social sciences but feels at home among the computer scientists at LIACS. ‘I certainly don’t feel like an outsider. I believe a mixed-methods approach has great potential for the future. It’s valuable to see how methods from one field can be meaningfully applied to another. I’m very glad that Max and LIACS provided the space for this.’
Now that he has completed his PhD, Van Dijk continues to work with one of his supervisors, Marco Spruit, who is also affiliated with LUMC. There he studies, among other things, the language use of older adults to predict their mental health. ‘It’s essentially a continuation of what I was already doing: working with people to collect language samples that I can analyse.’
PhD defence
Bram van Dijk defended his dissertation, Theory of Mind in Language, Minds and Machines: a Multidisciplinary Approach, on 17 January 2025 at the Academy Building. His supervisors were Assistant Professor Max van Duijn and Professor Marco Spruit.