'Baby sibs' struggle to integrate audio, visual speech cues
Infants at high risk for autism have difficulty synthesizing information from their vision and hearing, according to a study published 15 May in PLoS One1. Researchers found that 9-month-old infant siblings of children with autism don’t seem as interested in a video of a person speaking with an audio track of mismatched syllables as controls are.
The findings, which come from the British Autism Study of Infant Siblings (BASIS) network, are the first to detect trouble with sensory integration this early in development in infants at high risk for the disorder. Studying these so-called ‘baby sibs,’ who are at nearly 20-fold higher risk of developing autism than the general population, could help identify the earliest signs of the disorder.
Difficulties with language and communication are a hallmark of autism. And some previous studies have shown that people with autism have trouble integrating information from two different senses — though not specifically in relation to language. “This is one of the first studies that has actually linked together these two areas of difficulty,” says lead investigator Mark Johnson, director of the Centre for Brain and Cognitive Development at the University of London.
A second study, published online in Cerebral Cortex on 24 May, suggests that children with high-functioning autism integrate audiovisual information less effectively than controls do2.
Although it is not one of the core deficits associated with autism, trouble with sensory integration has been linked to the disorder for decades. “There are a lot of anecdotal reports, both from clinicians and from individuals with autism, that they experience things in a very segregated way,” says Sophie Molholm, associate professor of pediatrics at Albert Einstein College of Medicine in New York, who led the Cerebral Cortex study. “But it’s only in the last ten years or so that people have begun to empirically test this notion.”
In the baby sib study, researchers studied 31 at-risk infants and 18 control infants, who have at least one full sibling without autism and no one with autism in their immediate family. The infants all viewed videos showing two faces side by side. One face mouths a syllable — ‘ba’ or ‘ga’ — that matches the video’s soundtrack; the other face mouths a different syllable. Previous research has found that typically developing infants can detect mismatched sounds and lip movements by 5 months of age3.
When older children or adults view a video in which the mouth forms ‘ba’ but the soundtrack says ‘ga,’ they usually report hearing ‘bga.’ Infants can’t report on what they hear, but eye tracking studies show that they tend to look longer at things that are new or unusual. Because ‘bga’ isn’t a syllable that exists in English, it’s likely to be a new combination of sounds for the babies in the study.
The BASIS team found that the low-risk control infants look longer at the mismatched ‘bga’ faces than at any of the other faces, “which probably indicates their increased interest in social cues they’ve never seen before,” says Elena Kushnerenko, a research fellow at the University of East London and a member of the research team.
By contrast, the baby sibs spend roughly the same amount of time looking at each type of face. “Infants at risk may have difficulty attending to two simultaneous streams, like visual and auditory, and putting them together,” Kushnerenko says. One interpretation is that the baby sibs don’t perceive the illusory ‘bga’ sound at all.
Another possibility is that baby sibs notice the syllable but aren’t as interested in this novel social information as controls are. “We don’t know what the infants are experiencing,” notes Molholm, who was not involved in the BASIS study. “But it’s very interesting that we can see these differences at such an early stage.”
In Molholm’s study, the researchers asked children between 7 and 16 years of age to push a button when they heard a tone, saw a disk on a computer screen, or both. “It’s the simplest of tasks,” she says, and one that helps the researchers parse out the brain’s response to audiovisual stimuli that occur together compared with those that occur separately.
When they receive audio and visual information at the same time, controls press the button faster than when they receive either alone, but children with high-functioning autism don’t get the same boost, the study found. What’s more, electroencephalography recordings show that their brains respond to this multisensory information with different patterns of activity than those of controls.
One potential limitation of the BASIS study is that the researchers had to exclude five baby sibs but only one control from the analysis, because those infants looked at just one of the two faces in the videos. “That’s a bit of a concern for me — is there something about the five babies that were excluded that might change the results?” says David Lewkowicz, professor of psychology at Florida Atlantic University in Boca Raton, Florida, who was not involved in the research.
Still, the results might eventually point to a promising target for therapy.
“Most of these infants will go on to a typical outcome despite being at risk or showing this early endophenotype,” Johnson says. “This gives us great hope that there might be some natural, spontaneous mechanisms of recovery which we might be able to tap into when we’re trying to design future intervention studies.”
1: Guiraud J.A. et al. PLoS One 7, e36428 (2012) PubMed
2: Brandwein A.B. et al. Cereb. Cortex Epub ahead of print (2012) PubMed
3: Kushnerenko E. et al. Proc. Natl. Acad. Sci. U. S. A. 105, 11442-11445 (2008) PubMed