
Spectrum: Autism Research News

Researchers seek patterns in the sounds of autism

15 March 2010

Not just babble: Researchers are analyzing the grunts, squeals and early chatter of children with autism.

Scientists have created machines to detect distinctive speech patterns in children with autism that go unnoticed by the naked ear.

A Colorado-based nonprofit, LENA Foundation, is marketing one such system as a screen for autism in toddlers. Several academic research groups are also dissecting the complex vocalizations of children with the disorder.

Because the LENA screen hasn’t been rigorously tested in independent studies, some experts are skeptical that the technology can reliably detect autism-specific patterns. But its creators predict that it will soon be ready for the clinic.

“For the last several years, I’ve been quite persuaded that this will become a part of the standard screening and diagnosis not just in autism but in other clinical domains,” says D. Kimbrough Oller, a scientific adviser for the LENA Foundation and professor of audiology and speech-language pathology at the University of Memphis in Tennessee.

Even if the system is not ready for the clinic, some researchers say it is useful for studying speech development.

“I think that as a scientific tool, [the LENA system] really does move the field forward,” says Catherine Lord, director of the University of Michigan Autism and Communication Disorders Center.

Since the 1960s, several studies have identified vocal differences in people with autism. For example, compared with healthy individuals, those on the autism spectrum show a wider range of voice intensity and pitch1. What’s more, some speech oddities — such as stressing the wrong syllable or having a nasal-sounding voice — appear most often in children with poor scores on measures of social and communicative ability2.
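
For readers curious how such acoustic differences are quantified, the sketch below estimates pitch and intensity ranges from a single recording using the open-source librosa library. It is purely illustrative: it is not the method used in the studies cited above, and the file name is a placeholder.

```python
# Illustrative sketch: quantifying pitch and intensity range in one recording.
# Not the analysis pipeline of the cited studies; librosa is an open-source
# audio library, and "child_recording.wav" is a placeholder file name.
import numpy as np
import librosa

def pitch_and_intensity_range(audio_path):
    """Return rough pitch (Hz) and intensity (dB) ranges for a recording."""
    y, sr = librosa.load(audio_path, sr=None)  # keep the original sampling rate

    # Fundamental frequency (pitch) via the pYIN algorithm; unvoiced frames are NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced_f0 = f0[~np.isnan(f0)]

    # Frame-level intensity from root-mean-square energy, converted to decibels.
    rms = librosa.feature.rms(y=y)[0]
    rms_db = librosa.amplitude_to_db(rms, ref=np.max)

    return {
        "pitch_range_hz": float(voiced_f0.max() - voiced_f0.min()) if voiced_f0.size else 0.0,
        "intensity_range_db": float(rms_db.max() - rms_db.min()),
    }

# Example usage:
# print(pitch_and_intensity_range("child_recording.wav"))
```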

According to a study published in the March issue of the Journal of Intellectual Disability Research3, 18-month-old babies who are later diagnosed with autism already tend to have higher-frequency cries than do babies who develop normally.

Scientists don’t fully know why the sounds children with autism make are different. Some have proposed that motor impairments or delays in the brain’s ability to process sounds could influence a child’s vocalizations.

To learn more about autism-specific speech patterns, Oller and researchers at the foundation used a patented recording device and software algorithm to measure the number and characteristics of sounds emitted from children with autism and their parents.

Oller’s research shows that infants first make precursors to speech — such as squeals, growls and vowel-like sounds — and that by the end of the first year, these sounds come closer to resembling speech. Several years ago, he helped LENA engineers incorporate his findings on speech development into the LENA system’s algorithms.

Since 2006, the organization has sold the product, called LENA Pro, to researchers, speech-language pathologists and doctors. LENA also sells other, more basic versions of its system to parents and teachers. All of the systems — which range in price from $200 to $8,400 — can distinguish adult and child speech from background noise, such as a television, according to LENA officials.

The team recorded all sounds uttered in a 12-hour period by 26 children with autism between 16 and 48 months old. Each child wore specially designed overalls, vests or shirts that held the digital recorder in a front pocket.

The researchers found that children with autism have 26 percent fewer back-and-forth vocalizations with adults than do healthy children, and those bouts of communication are about four seconds shorter. The study was published in the Journal of Autism and Developmental Disorders4.
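
How might such back-and-forth bouts be counted? LENA’s algorithms are proprietary, but the sketch below shows one plausible approach, assuming the recordings have already been split into speaker-labeled segments; the segment format and the five-second pause threshold are assumptions made only for illustration.

```python
# Illustrative sketch of counting back-and-forth vocalization bouts
# (child-adult conversational turns) from speaker-labeled audio segments.
# This is not LENA's actual algorithm; the Segment format and the 5-second
# pause threshold are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # "child" or "adult"
    start: float   # seconds from the start of the recording
    end: float     # seconds

def count_turns(segments, max_gap=5.0):
    """Count child<->adult alternations separated by at most max_gap seconds."""
    segments = sorted(segments, key=lambda s: s.start)
    turns = 0
    prev = None
    for seg in segments:
        if prev is not None and seg.speaker != prev.speaker and seg.start - prev.end <= max_gap:
            turns += 1
        prev = seg
    return turns

# Example with made-up timestamps:
day = [
    Segment("child", 10.0, 11.2),
    Segment("adult", 11.8, 13.0),
    Segment("child", 13.4, 14.0),
    Segment("adult", 60.0, 61.0),  # too long after the child's last sound to count
]
print(count_turns(day))  # -> 2
```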

Study investigator Steven Warren says he was most surprised to find that when children with autism chatter, they often don’t direct it at anyone in particular, as if they were alone on an island. He presented the results last October at a symposium in California.

“The parents came up afterwards and said, ‘Yes, that’s exactly right. Those vocal islands, we see those all the time,’” notes Warren, a professor of applied behavioral science at the University of Kansas.

One limitation of the study is that the researchers did not collect data from healthy children at the same time. Rather, control data were pulled from a database of earlier experiments that LENA researchers had conducted using similar methods, so the number and time span of recordings varied between the groups.

What’s more, the initial analysis did not distinguish between simple and complex utterances. For instance, “Ba” and “Mommy, I want a cracker” could both count as a single vocalization.

“The new study was kind of skimming the surface of what the technology can do,” says study investigator Jill Gilkerson, director of child language research at the foundation.

The group has further investigated the children’s speech patterns by comparing vowel and consonant sounds, but those results are unpublished.

Counting sounds:

Before automated tools like LENA, many studies relied on researchers to observe and painstakingly write down a child’s every utterance. For example, in a 2000 study based on videotapes of toddlers playing, researchers found that children with autism emit more squeals, growls and yells than do children with developmental delays or healthy controls5.

Such methods are still common, and are laborious because they depend on well-trained researchers and lengthy transcription, says Oller, who has been studying the field since 1971.

“I didn’t believe when I started my career many years ago that in my lifetime there would be practical, automated vocal analysis for any purposes like these, but it’s definitely happening,” he notes.

LENA’s product can cut back on the need to videotape children in their homes — which can be quite invasive, Lord says — and reduce the time needed to analyze tapes.

For instance, using a LENA Pro system, Lord’s group plans to investigate whether the frequency and duration of vocalizations from autistic toddlers 14 to 20 months old increase after behavioral therapy. The scientists are also taping children in their homes twice a week to analyze visual aspects of communication, such as pointing and paying attention to objects.

Other researchers are developing their own speech analysis technologies, focusing on infant and toddler speech rather than toddler-adult interactions.

For example, Partha Mitra, a professor of biomathematics at Cold Spring Harbor Laboratory in New York, is extracting measurable parts of speech from videotaped interactions of young children.

So far, he has gathered videotape recordings from three families of healthy children ranging from 6 to 18 months old. He also plans to analyze videotaped interviews of children who had been recorded during the one-hour autism diagnostic process.

Using his background in human speech and bird song analysis, Mitra plans to construct audio and video processing algorithms for each group of children. But the resulting analysis won’t be completely automated. Instead, the algorithms will be used to highlight interesting bits of the recording. “What is more likely is that we’ll greatly speed up the human annotation process,” he says.

Autism waves:

In the past year, LENA has taken its analyses of children with autism a step beyond those described in the new study. Last fall, the foundation introduced an automated autism screening service that, for $200, estimates a child’s risk of developing autism on a seven-point scale.

LENA researchers first described the screening algorithm in a conference report published last year. Based on an analysis of 34 children with autism spectrum disorder, 30 with language delay and 76 healthy controls, the screen correctly identifies 85 percent to 90 percent of children on the spectrum6.

Since then, LENA researchers have updated the sample to include 75 children with autism spectrum disorder, 34 with language delay and 81 healthy controls; they say the screen correctly identifies 89 percent of children on the spectrum. Those results are unpublished.
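
The figure being reported is the screen’s sensitivity, the share of children on the spectrum whom it flags correctly. A minimal sketch of that calculation, using placeholder labels rather than LENA’s data, is below.

```python
# Minimal sketch of computing sensitivity: the share of children on the
# spectrum whom a screen correctly identifies. The labels are placeholders,
# not LENA's data.
def sensitivity(true_labels, predicted_labels, positive="ASD"):
    positives = [(t, p) for t, p in zip(true_labels, predicted_labels) if t == positive]
    if not positives:
        return 0.0
    hits = sum(1 for t, p in positives if p == positive)
    return hits / len(positives)

truth = ["ASD", "ASD", "ASD", "delay", "typical"]
preds = ["ASD", "ASD", "delay", "delay", "typical"]
print(f"{sensitivity(truth, preds):.0%}")  # -> 67%
```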

LENA’s voice analyses note much more than squeals and grunts. Researchers collect the sound waves of the children or adults being studied and compare their patterns. These patterns — which are not easily detected by the human ear, and require massive computing power to calculate — are distinct in children with autism, the company claims.

The screen has not been validated by independent researchers, and Lord says that more research is needed before it is marketed to parents and clinicians. “I was concerned when they started putting out announcements that this was a way of diagnosing autism,” she says.

Warren, who is a scientific adviser for LENA, admits that it should be applied in many more circumstances by more researchers before it becomes part of screening and diagnosis. “But the tool might be really valuable in a clinical sense,” he says.

References: