Better tools needed to assess clinical trials
“There’s been a big focus on basic science research, but not on quality clinical research,” says Randall Carpenter, president of Seaside Therapeutics, a company in Cambridge, Massachusetts, that is running several clinical trials. “That’s an impediment now because we have all these targets but don’t have the tools to properly do clinical testing.”
The ideal solution, of course, would be a biomarker, such as a pattern of brain activity, that can be used to quantify response to treatment. Although the hunt is on for such a biomarker, it's unlikely to be available in the near future.
“In the short term, we will have to work with tools that are already in use,” says Carpenter. Given that the autism field has seen few placebo-controlled, blinded and randomized drug trials, “there is not much known about how to measure efficacy in clinical trials of people with autism,” he says.
The best-studied tools for assessing autism were developed primarily for diagnosing the disorder, not measuring response to treatment.
“Existing tools were perfected to capture ‘trait variables’ — relatively constant features of an individual over time — but they are not well-designed to capture shifts with respect to autism symptoms,” says John Constantino, professor of psychiatry and pediatrics at Washington University in St. Louis. They are also often cumbersome and time-consuming, he says, not ideal qualities for use in large-scale clinical trials.
“For anything else — the core symptoms — we have nothing that has been tested before,” says Luca Santarelli, head of neuroscience at the pharmaceutical company Roche, headquartered in Basel, Switzerland.
Companies running clinical trials must define an endpoint before the start of a trial. But most researchers collect additional data along the way, in case the designated endpoint fails but other indicators show an effect. (The new endpoint would then need to be tested in a subsequent trial.)
Roche, which is running clinical trials of an mGluR5 antagonist for fragile X, includes a hypothesis-generating arm in its clinical program to test endpoints that might be sensitive to a given therapeutic approach, says Santarelli. “There is no certainty in terms of which will turn out to be successful.”
Based on the emerging data, Roche is building a new instrument tailor-made for use in people with fragile X.
The process for getting new tools approved for clinical trials by the U.S. Food and Drug Administration (FDA), however, can be both time-consuming and expensive.
“The level of data you have to have to validate these tools for acceptance by the FDA is much higher than [what] you need to publish in an academic journal,” says Carpenter.
Carpenter and his collaborators have instead opted to modify an existing tool, which he discussed at the Translational Neuroscience Symposium in Switzerland in April. They developed a new algorithm, specialized for people with fragile X, for analyzing the results of the Aberrant Behavior Checklist (ABC).
For example, the lethargy/withdrawal subscale of the ABC includes a number of questions that focus on lethargic behavior. Children with fragile X, however, tend to be hyperactive, so the new algorithm focuses on questions relating to social withdrawal. “We are now validating it in clinical trials to see if it’s sensitive to change,” Carpenter says.
Autism Speaks, a research and advocacy organization, aims to provide some guidance for those embarking on clinical trials. It has convened two working groups to analyze tests that assess social communication deficits, anxiety, and restrictive and repetitive behavior. The results of those efforts have not yet been made public, but are expected to be published soon.
Joseph Horrigan, Autism Speaks’ assistant vice president, says the organization is willing to discuss different clinical outcome measures with researchers.
One of the aspects the organization is evaluating is how well tests developed for other disorders can be used to assess changes in autism symptoms. The Yale-Brown Obsessive Compulsive Scale, for example, is a checklist developed to assess people with obsessive-compulsive disorder. It is sometimes used to measure repetitive behaviors in autism.
“While there are some similarities between the behaviors in those two disorders, there are some significant differences as well,” says Srinivas Rao, chief executive officer of California-based Kyalin Biosciences, a biotech company that is developing an oxytocin-based drug for autism.
Building better tests will, in part, require a better understanding of the natural history of autism.
“Few measures that quantify autism states have been measured in the normal population, so we don’t know how much of a difference is really significant,” says Constantino.
His team is planning to follow both typically developing toddlers and those with autism, using a number of tools to identify changes over time, as well as the most realistic indices to measure those changes.
He and his collaborators are also looking for more efficient and practical ways of evaluating children with autism. They have a paper in press that evaluates a new version of the Childhood Autism Rating Scale, showing that even clinicians with minimal training can reliably score symptom severity based on a 15-minute video of a child1.
The potential benefits aren’t limited to drug testing. “For young children, a massive amount of resources are invested in early intervention,” says Constantino, “and there are almost no standardized tools used systematically to track changes over time.”
For more reports from the 2012 Roche Translational Neuroscience Symposium, please click here.