‘Our ability to acquire, understand and communicate language is unique compared to all other species… the fact a toddler can put words together into real sentences – it’s extraordinary.’
How do babies learn to speak?
It’s really interesting how a baby learns to talk. How does the infant do it, from nothing? There’s a straightforward sequence of events we can trace: from a newborn who doesn’t say anything, to babbling (typically in the second half of the first year of life), by which point the infant is already learning intonation patterns and word combinations – what makes an utterance sound like a command or a question. But then the fact that a toddler can put words together into real sentences – it’s extraordinary. It’s remarkable how quickly a baby assimilates all of that information, compared with the complete absence of the same kind of learning in other species.
Do you think the fact humans can speak is related to being sentient? Would it be possible to think with the sophistication that we do without language?
That’s an interesting question: whether having language affects the thought process itself. Do the names we assign to objects change the way we perceive them? There are differing views. One – the ‘cognition hypothesis’ – holds that language is a prop for thought, for cognition; it sits on top of and names the thoughts we already have, but in itself means nothing.
My guess is that even in infancy we’re exposed to language as a means of looking at the world: when we name something we make it a symbolic entity (when we call a certain cat ‘a cat’, it’s no longer just that specific cat, it’s identifiable as belonging to ‘cats’ in general). The act of naming objects as symbolic entities influences what things you see as being similar: it helps us make sense of the world. But whether we could be aware of ourselves in the world – could be sentient – without it, is a slightly different question.
We can see simple ways that language might influence the way we think about certain things. For instance, in French or German, where nouns have gender markers (‘le’ or ‘la’; ‘der’, ‘die’ or ‘das’), it seems that these markers affect the way we categorise the word and the meaning associated with it. There used to be a controversial claim centring on counterfactual conditionals – statements like: ‘If I were you I would put on a coat.’ The claim was that such constructions don’t exist in Chinese, and that this linguistic absence negatively affected native Chinese speakers’ ability to think counterfactually. The rationale was that language is our means of problem-solving, the means by which we structure our thought. That particular claim has since been discredited.
So, is cognition related to speech? Yes – to some extent. The way we talk influences the way we think, and bilingual speakers often report that they think slightly differently depending on which language they are thinking ‘in’.
And languages enable us to process complex structural problems. The scientific revolution couldn’t have happened if we didn’t have a symbolic capacity for language.
Ultimately language allows us to categorise the world, and makes symbolic entities out of individual things, but whether we could still think of ourselves as distinct from the world without having the linguistic framework to think that through… it’s unclear.
What is it about our species that makes us so good at learning language?
I think if I had the answer I would win a Nobel Prize! The general consensus is that it must be something genetic. The gene FOXP2 is often associated with our language aptitude. But it’s complicated: when we acquire language, we go from nothing and then teach ourselves – it’s not as if there is a gene containing language data that allows us to talk. Babies still have to work out intuitively what’s going on when adults speak, and start assimilating it.
In terms of physiology, the human larynx sits comparatively low, which gives us the physical capability of producing speech in a way that’s just impossible for, say, a chimp. There is a claim that the reason Neanderthal man didn’t speak is that it was only later in our evolution that the larynx became sufficiently low to allow Homo sapiens to talk. When babies are first born they cannot physically speak, because the larynx is too high; as the vocal tract develops, the larynx lowers and makes speech physically possible. What’s interesting (and this is part of the research I’m working on right now) is that babies of a certain age can understand speech before they themselves can speak.
Why are babies so much better at learning a language fluently than an adult?
There are quite a lot of reasons for this. In the early stages, our brains are more plastic; we experience a lot of neural synaptogenesis. In later learning, we’re effectively attempting to overwrite existing information all the time. The brain’s job is a careful interleaving process: retaining useful information and discarding what’s irrelevant. It’s likely that it’s that decision-making process which slows us down.
Tell me more about your research lab, BabyLab. What is the remit of your work, what findings have come out of it and how do you get answers from nonverbal infants?
Our task is to find out from a baby what they know about language, without them necessarily being able to talk. We have two techniques to try to understand what a baby is thinking – and, crucially, whether it can understand words even if it can’t say them. These techniques are eye tracking and neuroimaging.
Eye tracking is really pretty simple: we hold two cards up in front of the baby, for instance a dog and a cat, and say ‘dog’. If the baby consistently looks at the image of the dog, we can infer the infant is understanding what’s being said, even if it can’t itself say ‘dog’. With recent advances in technology we can electronically track an infant’s eye movements, which allows us to do more subtle experiments. If we present an infant with a picture of a cat and a dog and say ‘tog’, will the infant register the similarity with the word ‘dog’ or not? We can now see if an infant registers a degree of recognition between the similar-sounding noises, even if the eye movement is subtler. This has wider implications in terms of how babies are able to understand words in foreign accents and how they acquire that ability.
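The looking-preference logic described here can be sketched in a few lines. This is an illustrative toy, not BabyLab’s actual analysis pipeline: the gaze labels, sample data and function name are all invented for the example. The idea is simply to measure what fraction of the infant’s on-screen looking time goes to the named image.

```python
# Toy sketch of a preferential-looking analysis (illustrative only,
# not the lab's real pipeline). Each eye-tracker sample is labelled
# with where the infant was looking: 'dog', 'cat', or 'away'.

def proportion_to_target(gaze_samples, target):
    """Return the fraction of on-screen looking time spent on the
    target image, ignoring samples where the infant looked away."""
    on_screen = [g for g in gaze_samples if g != 'away']
    if not on_screen:
        return 0.0  # no usable looking time in this trial
    return sum(1 for g in on_screen if g == target) / len(on_screen)

# A hypothetical trial in which the experimenter says 'dog':
trial = ['dog', 'dog', 'cat', 'dog', 'away', 'dog', 'cat', 'dog']
score = proportion_to_target(trial, 'dog')  # 5 of 7 on-screen samples
```

A score reliably above 0.5 across many trials would be the evidence that the infant links the word to the picture, even though it cannot yet say the word.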
We use neuroimaging in a similar way: we present a baby with an image of a cat and a dog and say ‘dog’, and we can then see which bits of the brain are activated, and whether any part of the brain is activated when we say a nonsense word or a word not on either of the cards. We’ve just produced a paper that shows that at 14–16 months, before a baby can implicitly name things, the left hemisphere is still activated when a baby hears a name for an object. So even in infancy we seem biologically programmed to intuitively make sense of the world, to assign names to things.
Another interesting area of research has been into sleep and its impact on a baby’s language acquisition. We took two groups of babies; from their sleep patterns we could predict when each baby would be likely to go to sleep. Both groups were taught some new words, and we timed our experiments so that one group would go to sleep afterwards and the other would not. We found that the babies who had the nap were able to remember more of the words than those who had not.
Is the rate at which a baby acquires language indicative of later learning aptitude?
It’s a controversial subject. We ran a study with infants and later reassessed their literacy as older children. There did appear to be a correlation of around 25% between infant vocabulary and later literacy levels.
Could you describe to me how the BabyLab came to be?
I set it up in 1991. Simply, I wanted to know whether the kinds of claims researchers were making were reliable, particularly when the mothers of the babies were reporting something different. In the late 2000s I applied for University funds as well as a Wellcome Trust fund and was given what I needed to start the lab.
How did you come to study child development – what has been your academic career up till now?
I did my undergraduate degree as a theoretical physicist, but I think I was always a frustrated philosopher. I remember as an undergraduate seeing a noticeboard with a poster which read ‘How do we know that objects exist when we can’t see them?’ At the bottom was a note about a Master’s in Experimental Psychology. I’d never thought much about psychology before, but I was intrigued by the poster and some time later enrolled in their master’s programme. As part of this, I was looking at artificial intelligence. I became increasingly frustrated with computer models – they seemed so slow at learning compared to us (at that point it seemed a pretty remote possibility that a machine would be able to go out and learn things for itself in the way that an infant does). I became far more fascinated by how intuitively we as humans learn language and become cognisant. So my focus shifted from artificial machines to real live ones!
What gives you most job satisfaction?
I think it’s when I’m sitting with a student and they suddenly see something new, or try to explain something in a new way. The teaching can be very rewarding in that way.
Equally with the experiments, it’s always satisfying when something you were hoping would happen proves to hold true, particularly if it’s counterintuitive.
What would you like to consider the ultimate legacy of your research to be?
Ideally, I’d like to know how a baby’s brain acquires language and cognition, in total. That won’t happen in my lifetime – but it would be fascinating: how do you build a baby’s mind? How does a baby know that a certain sound is actually a label we give to an object, and know that intuitively, with no previous point of reference? How do you get wired up for that? We don’t know a language when we’re born, but we’re able to categorise words we’ve never pronounced and associate them. That’s fascinating.