2019: FOCUS In Sound – 21

2019 Career Awards at the Scientific Interface

Dr. Ariana Anderson

Welcome to FOCUS In Sound, the podcast series from the FOCUS newsletter published by the Burroughs Wellcome Fund.  I’m your host, science writer Ernie Hood.

In this edition of FOCUS In Sound, we focus on a dynamic young scientist from UCLA who has been recognized in the past by the Burroughs Wellcome Fund, and we’ll see how that recognition has had a profound effect on her work, her career, and her scientific contributions.

Dr. Ariana Anderson is an Assistant Professor in the Department of Psychiatry and Biobehavioral Sciences at UCLA, where she also received her BS and PhD degrees.   She is also Principal Investigator and Director of the UCLA Laboratory of Computational Neuropsychology. In 2014, she received the Burroughs Wellcome Fund Career Award at the Scientific Interface, a $500,000 grant to fund her work over a five-year period.  She was given the CASI specifically to support her research on the placebo effect.  We will hear all about that, but Dr. Anderson has used the funding, along with her K25 Career Award from the National Institute on Aging, to pursue a variety of important scientific endeavors.

Ariana Anderson, welcome to FOCUS In Sound…

Thank you, it’s a pleasure to be here with you.

I know you are the proud mother of four children, but I’d like to start our conversation with the venture you have referred to as your fifth child, the free app you’ve developed and released called ChatterBaby, which is available at Chatterbaby.org.  Tell us about this app designed to measure and interpret infants’ cries…

ChatterBaby is an app that we developed for two purposes.  The first purpose of the app is to help parents understand what their baby needs.  Now there’s a long history of scientific literature going back about fifty years that looks at differences in infant cries associated not just with different states, so for example a baby in pain cries differently, but also with markers of neurodevelopmental disorders.  So for example, some of the earliest work found that babies with bacterial meningitis, babies with Down syndrome, babies with epilepsy, may show different cry patterns than babies that are neurocognitively intact.  What we wanted to do was to see whether or not we could develop an app that would first of all help parents predict what was wrong with their child at that moment.  Is baby fussy, hungry, or in pain?  But second of all, collect infant cry data for another purpose, which is to see whether, long term, babies with abnormal cry patterns become more likely to be diagnosed with a later developmental disorder such as autism.

So how did you develop and train the AI-powered algorithm?

Well, like any algorithm, what we needed was lots of data.  So we collected almost 2,000 total audio samples of babies.  Now these babies were either laughing, neutral, or they were crying from a stimulus, which was labeled by the mother and also by an expert mom panel who went and checked it over.  So for example, we got painful cries from babies who were either getting vaccinated or getting their ears pierced.  The other cries, like hungry or fussy or scared or tired, were labels nominated by the parents.  And then we had a mom panel go through all of those other cries and say, uh, that baby doesn’t sound very hungry to me.  So if the mom panel did not unanimously agree on the label of a cry, it was excluded from our study.  With those labeled cries, we used standard speech recognition technology, so we extracted about 6,000 different acoustic features from each cry.  So these were things like the energy, the frequency, the different melodies and prosodic patterns that were present, and we used these to classify new cries and predict the reason for the baby’s cry.
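The pipeline described, extracting acoustic features from each clip and then classifying new cries against labeled examples, can be sketched in miniature. This is a hedged illustration, not the actual ChatterBaby code: the three toy features and the nearest-centroid classifier below stand in for the roughly 6,000 features and the production model, and every function name here is hypothetical.

```python
import numpy as np

def extract_features(waveform, sample_rate=16000):
    """Compute a few simple acoustic features from a mono waveform.
    (A tiny stand-in for the ~6,000 features described: energy,
    frequency content, and a rough prosody/noisiness proxy.)"""
    energy = float(np.mean(waveform ** 2))                 # overall loudness
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    dominant_freq = float(freqs[np.argmax(spectrum)])      # strongest pitch component
    # Fraction of adjacent samples where the signal changes sign.
    zero_cross_rate = float(np.mean(np.abs(np.diff(np.sign(waveform))) > 0))
    return np.array([energy, dominant_freq, zero_cross_rate])

def train_centroids(labeled_clips, sample_rate=16000):
    """Average the feature vectors for each cry label (e.g. fussy/hungry/pain)."""
    centroids = {}
    for label, clips in labeled_clips.items():
        feats = np.stack([extract_features(c, sample_rate) for c in clips])
        centroids[label] = feats.mean(axis=0)
    return centroids

def classify(waveform, centroids, sample_rate=16000):
    """Assign the label whose centroid is nearest in feature space."""
    f = extract_features(waveform, sample_rate)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))
```

In the real system, far richer features (energy contours, frequencies, melodic and prosodic patterns) feed a trained classifier, but the shape of the computation, features in, label out, is the same.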

I see, very good, that must have been a very exciting undertaking.  How can both deaf and hearing parents use ChatterBaby to help understand what their babies are trying to tell them in their vocalizations?

ChatterBaby is a free app that’s available on Google Play and also from the Apple Store.  When you download ChatterBaby, you send a five-second audio sample to our servers where we run our machine-learning algorithms.  It returns to you a probability of fussy, hungry, or pain, much like a weather report, and it allows you then to interpret, based on the results, the most likely reason for your baby’s cry.
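That “weather report” idea, turning raw model scores for each cry reason into percentages, can be sketched with a softmax. The scores and function name below are hypothetical, chosen only to show the shape of the output:

```python
import math

def cry_report(scores):
    """Convert raw classifier scores per cry reason into percentages,
    like a weather report: higher score -> more probable reason."""
    exps = {label: math.exp(s) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: round(100 * e / total, 1) for label, e in exps.items()}

# Hypothetical example: the model scores "pain" highest for this clip.
report = cry_report({"fussy": 0.2, "hungry": 1.1, "pain": 2.5})
```

The parent then sees something like “74% pain, 18% hungry, 7% fussy” and interprets the most likely reason from there.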

And what’s the application for, particularly for deaf parents?

Deaf parents and hearing parents both have the same need.  Why is my baby crying?  So it actually works exactly the same.  Now we’d like to continue in the future to make this something that we can use for remote monitoring.  We’re trying to figure out now how to, for example, set this up similar to an Alexa, where if it hears a baby crying, it will be able to notify the parent in a different area of the house.  So we’re working now on expanding our algorithms, and also integrating them into remote monitoring systems, so that when their baby is in another room, deaf parents can be notified whether or not their baby is crying, and if so, why.

Ariana, you’ve had this project going since 2013…is the artificial intelligence learning more and more as it goes?

Absolutely.  So what we do with this is that our app is also a method of collecting data.  So before, when we were collecting data manually, we only had a smaller sample to work with, but now that we have a large data sample, a few hundred thousand baby cries to work with, we’re using deep learning algorithms to identify what the patterns really are.  The deep learning just gives us a better idea of how to classify these babies, and whether or not the baby’s cry depends on things like the age or the nationality, or any other underlying medical condition that the baby might have.  So because we have a big data set we’re able to identify better what the baby needs, and we’re also able to use more sophisticated deep learning algorithms to pursue these objectives.

Tell us about the application you’ve been working on to use ChatterBaby to help identify infants who may be at risk of being on the autism spectrum…

We’ve seen some really wonderful small-sample studies showing that babies who are at risk for autism, babies who have an older sibling with autism, show different cry patterns.  So if a baby is a year old, or even 18 months old, someone can listen to that baby cry and say, that doesn’t sound right, it just sounds irregular, it sounds a little bit off, it doesn’t have the same tone as a typical baby.  Now, these are wonderful studies, they’re very strong evidence, but they’re based on small samples.  What we’re doing with ChatterBaby is, we’re not just collecting infant cries, but we’re actually collecting extensive developmental history.  So we ask parents questions about the pregnancy, about any sort of genetic risk, whether or not there is a sibling in the family with autism, whether or not the baby had a difficult delivery, a number of risk factors that we know may be associated with increased autism risk.  After that, we follow the babies for six years.  Starting at age two, we send parents screeners for autism using standard instruments that calculate whether or not the child is at higher risk for autism.  We continue to follow them until they’re six years old.  Then we’ll be able to go back and look at all of this wealth of data we’ve collected, and identify whether or not the children who were later diagnosed with autism had the abnormal vocal patterns early.  Now we don’t just have to look at vocal patterns. We’re also looking at other things, for example, whether or not there was a problem with the baby’s delivery, whether or not the baby was premature, whether or not they spent time in the NICU, whether Mom used drugs during pregnancy.  We’re collecting a variety of risk factors that we can use later on, and the vocal patterns are going to be just one of the many clues we’re able to assess.

That sounds very exciting, and I’m sure it’s going to yield some important information going forward.  How successful has ChatterBaby been up to this point?

We’ve been very successful in attracting a wide user base. We have been featured in major media outlets in countries around the world.  For example, just recently we were in the biggest newspaper in Lebanon. Now because of this, we’re able to get a variety of data sources, we’re able to get a variety of participants across the world that we wouldn’t be able to get if we were running a local study within our lab at UCLA.  We think the main advantage of implementing the study by launching an app is attracting a large user base.  We’re providing a free service for people across the world who would never have access to come into a UCLA lab otherwise.

Ariana, in your capacity as PI and Director of the Laboratory of Computational Neuropsychology, you’ve led several other important projects over the last few years.  I’d like to hear about all of them, starting with your research related to the placebo effect.  That’s a fascinating area that has been crying out for elucidation, and I understand your research is designed to also aid the drug development process.  Tell us more…

The placebo response is one of the biggest problems in developing new drugs, and the reason for this is that when drug trials are instituted, everyone gets a pill but they don’t know what it is.  So that means the placebo response is actually operating within people receiving a medication.  So you have this very powerful effect that is trying to compete with the effects of an active drug, and it’s also very noisy.  So it’s hard for us to tell whether a change is due to a drug or the placebo effect within people, and it’s hard to discriminate between active and inactive medications because of that.

What we are trying to do is we are trying to use multiple measurements to assess and identify and control the effects of the placebo.  Now we are doing this in a few ways.  The first way we’re doing it is we’re using brain imaging.  We are looking at drug studies of people who have received medication, before and after.  The medication they received might be an active medication or it might be a sham one, and we are trying to identify whether or not there are brain changes that are specific to receiving a placebo pill, and what changes look like they happen just because someone is getting treated in general.  If we can measure these different components of the placebo response, then we can identify whether or not these placebo components are affecting the drug outcome.  And that’s what we are trying to do in our brain imaging research.

You’ve also had great success in using electronic medical records and data mining to create new detection algorithms for diabetes screening.  How does that work?

Normally diabetes risk assessment only looks at a few different pieces of information to identify whether or not people might be high risk.  So these may be things like age, it might be your BMI; it might be your gender and perhaps ethnicity.  However, we know that there is a wealth of information that’s collected when you go to the doctor.  We have, for example, how long you’ve been a patient, how many medications you’re taking, what other diagnoses you might already have, whether or not you have hypertension — a variety of information that we believe can help better assess and better predict whether or not someone is likely to have diabetes.  Now this is an important problem because one in four people with diabetes don’t know that they have it, and oftentimes they don’t figure out they have it until they have some horrible complication of it.  For example, someone might have just blurry eyes and tingling skin, and they will completely ignore it.  Then they might go to the doctor later because they have a sore on their foot that doesn’t heal, and they’ll find out that they have some form of gangrene, because it’s a complication of diabetes.  So oftentimes, people with diabetes don’t find out they have it until they have some major complication or require hospitalization.  One of the most expensive parts of diabetes isn’t actually treating the disease; it’s treating the complications that come from it, especially when it’s not managed.  And you can’t manage a disease that you don’t know you have.

What we are trying to do is use electronic medical records to automatically calculate the risk of diabetes, so clinicians can know whether or not a person needs to be screened, using all of the available information.  Now our past work showed that people who are at risk for diabetes have different features in electronic medical records that go far beyond the basic information.  So for example, if you have a history of high blood pressure, that is a risk factor you would expect.  But then there’s also a bunch of other risk factors that you wouldn’t expect.  So for example, bacterial infections might be another risk factor for diabetes that we didn’t know about before, one we could use to indicate that you may need a diabetes screening.  Now what we’re actually doing with this project, since we published our first paper, is we’re trying to extend this to major psychiatric disorders.  So for example, people with schizophrenia are more likely to have diabetes, based on both genetic risk and the medications they are taking.  So what we’re trying to do with our medical records here at UCLA and throughout the UC system, is we are creating new screening algorithms for diabetes that are intended for people with psychiatric disorders.  These people are the ones who are at the highest risk for diabetes from many different factors, but also they are the people who are most likely to not have reliable contact with the medical community other than seeing a psychiatrist.  So we’re making a tool that psychiatrists can use to automatically assess whether or not the person needs to be screened for diabetes, given that they’re on a variety of medications for mood, and given that they probably have a genetic likelihood of already having it.
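The kind of screening rule described, combining many binary EMR features into a single risk score, can be sketched with a logistic-regression-style calculation. This is a hedged illustration only: the feature names, weights, bias, and threshold below are invented for the example and are not clinically derived or taken from the UCLA algorithms.

```python
import math

# Hypothetical weights for illustration -- NOT clinically derived.
WEIGHTS = {
    "age_over_45": 0.8,
    "bmi_over_30": 1.1,
    "hypertension": 0.7,
    "bacterial_infection": 0.4,       # an "unexpected" EMR risk factor
    "antipsychotic_medication": 0.9,  # psychiatric-medication risk factor
}
BIAS = -3.0  # baseline log-odds for a patient with no flagged factors

def diabetes_screening_score(record):
    """Combine binary EMR features into a 0-1 risk score (logistic model)."""
    logit = BIAS + sum(w for feat, w in WEIGHTS.items() if record.get(feat))
    return 1.0 / (1.0 + math.exp(-logit))

def needs_screening(record, threshold=0.15):
    """Flag the patient for a diabetes screen if risk exceeds the cutoff."""
    return diabetes_screening_score(record) >= threshold
```

A real model would learn its weights from records across the health system, but the clinical use is the same: the chart is scored automatically, and the clinician sees a flag when screening looks warranted.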

Well Ariana, it’s so interesting that your work in computational neuropsychology seems to focus on being able to detect and predict various conditions early on.  Another example is your work on early prediction of cognitive decline in Alzheimer’s disease, vascular dementia, and other neurocognitive disorders.  Fill us in on how you’ve been developing that aspect, which is what actually helped you get the NIA grant I mentioned…

When people think of dementia, they often think of Alzheimer’s disease.  However, there is vascular dementia, which is the second-leading cause of memory impairment in older adults.  Now vascular dementia can mean that you have risk factors for strokes, but it also means that there are problems with your vascular system, your vascular compliance, that lead to you basically being a bit slower in processing and responding to information.  So what we are doing is we are using functional MRI to look at the hemodynamic response — how does your blood flow respond when, for example, you’re thinking of something, when you’re seeing an image of something?  And we’re finding out that the pattern of this response can predict whether or not you’re having memory issues above and beyond, for example, how many years you went to school, or your age or ethnicity or socioeconomic status.  So the actual patterns you see in vascular responses may indicate that you’re already having some sort of cognitive problems that are caused not by, for example, the typical plaques and tangles, but just by vascular issues.  So vascular health can determine cognitive ability early on.  It’s an early marker for cognitive problems.

Tell us about your project related to prison violence…

At UCLA, we are also interested in the social outcomes.  So for example, many people who have mental health issues might end up in the prison system.  We’re interested in finding out how we can look at different interventions, and whether or not they might be effective, for example, for reducing violence in prison and for reducing recidivism.  So I work closely with an organization called BetaGov.  It’s a collaboration between NYU and UCLA where we’re looking at how to do these real-time interventions.  How do you implement these trials to judge whether or not the interventions being implemented in prisons actually are effective in reducing violence and improving outcomes, for example by reducing stress among prison staff.

Last but not least, Ariana, I wanted to be sure to ask you about the CASI award from the Burroughs Wellcome Fund.  What has been the lasting impact of receiving that award back in 2014?

The CASI award, for me, has been the freedom to pursue projects that I believe are high impact but that might not yet have funding in place.  So for example, there are many projects that we have to do or that we want to do as scientists, but when we want to do them, we have to write a grant first.  It’ll take two or three years not just to write the grant but to get it accepted and have funding in the bank, because the grant cycle is so slow.  Those are three years we could have spent writing the paper.  We could have had the work done in the first year if we only had the funding to begin it.  Because of the Burroughs funding, I’ve been able to hire staff to help me with these projects, to get out all of these studies and these ideas that I have, and be one of the first to actually implement them.  Without this funding, we wouldn’t have the flexibility to pursue this variety of research projects, and we would have a backlog, because we would be waiting for the projects to get funded before the work could actually begin.

So it’s been kind of an accelerator, then.

Absolutely.  I’m not spending three years trying to get money to start the work; I can go ahead and do it right away.  It’s been a much more efficient use of my time to pursue these projects this way, and that’s why we’re able to get this work done quickly.

Ariana, it’s been great speaking to you, and please keep up the very impressive body of work you’re engaged in.  Thanks for joining us here on FOCUS In Sound.

Thank you very much for having me, it’s been a pleasure.

We hope you’ve enjoyed this edition of the FOCUS In Sound podcast.  Until next time, this is Ernie Hood.  Thanks for listening!