FOCUS in Sound - Maria Geffen

Exploring the way the brain encodes information

Transcript:  FOCUS In Sound #13: Maria N. Geffen

Welcome to FOCUS In Sound, the podcast series from the FOCUS newsletter published by the Burroughs Wellcome Fund.  I’m your host, science writer Ernie Hood.

On this edition of FOCUS In Sound, we welcome a scientist who explores the way the brain encodes information about the world around us – she combines computational and biological approaches to study the mechanisms behind the transformation of sensory representations. 

Dr. Maria N. Geffen is an Assistant Professor in the Departments of Otorhinolaryngology: Head and Neck Surgery and Neuroscience at the University of Pennsylvania.  She received her bachelor’s degree from Princeton, her Ph.D. in Biophysics at Harvard, and did her post-doc at the Center for Studies in Physics and Biology at Rockefeller University in New York.  In 2008, when she was still at Rockefeller, she received the Burroughs Wellcome Fund Career Award at the Scientific Interface, a $500,000 grant designed to help bridge advanced postdoctoral training and the first three years of faculty service. 

Maria, welcome to FOCUS In Sound…

Thanks for having me.

Maria, I imagine the world looks and sounds much different to you here in 2013 than it did in 2008 when you received the Burroughs Wellcome Fund award…

Oh yes, this has been a really interesting five years for me.  I started out as a postdoctoral fellow in the physics center at Rockefeller University, and now I lead an independent research lab at the University of Pennsylvania.  I have to say that in the focus of my research, there is a definite trajectory that I can trace, but we use many novel techniques, and we have really been pushing our studies to address new questions. 

Maria, in order to put the details in context as we explore your research, would you paint the big picture for us of your pursuits?  What are the overall goals of your Laboratory of Auditory Coding?

We really have two big goals that we’re trying to address simultaneously.  One is that we try to understand how networks of neurons in the brain encode information about complex auditory environments, and we do that in the context of the natural experience that either we humans or animals would have in the natural world.  And that’s a complex question, because while it’s known how single, individual neurons respond to different auditory stimuli, how they work together in ensembles is only beginning to be understood now.  The reason for that is the optogenetic revolution that has taken place in the last five years, where we have gotten new tools that allow us to study the behavior of cells, not just individually in isolation, but really as ensembles.  And furthermore we can do that in the awake brain, which gives us the ability to manipulate the behavioral tasks in which we place the animal while performing our experiments and analyzing the brain activity. 

That’s one big question that we’re trying to understand: the function of these ensembles and networks of neurons in the brain.  Another, more general question is trying to understand how sound processing takes shape.  How is it that we hear?  How is it that as you’re listening to my voice, your brain receives information about the mechanical vibrations of the sound waveform, yet eventually your brain translates that into words and you can comprehend them?  And so we study both the structure of sounds, the structure of speech, and we try to connect that to what we know about neural processing in the brain, to identify how sound representation gets transformed and how neurons are able to construct this very complicated representation of auditory environments. 

That’s all very interesting, Maria.  So tell us a bit more about some of the methods that you mentioned, and how they interact to generate new knowledge…

We try to take what’s called a systems neuroscience approach to this.  In systems neuroscience, we approach these questions at all different levels.  So on the one hand, we perform sound analysis to generate different types of sound stimuli, where we modulate natural sounds in some specific ways.  During our experiments, we engage the animal in a specific behavioral task, or human subjects engage in a psychophysical task, such that they’re not just listening passively to the stimuli, but there is some form of knowledge or some form of learning and memory that they need to engage during the experiment. 

What we’re ultimately interested in understanding, of course, is the neuronal activity, and so to understand neuronal activity, we record the signals that the neurons send to each other, using electrodes, which are tiny probes that measure the electric potentials between the cells.  We also use pharmacological tools to modulate the overall activity of neurons in the brain.  But we also use optogenetic tools, which allow us to change the activity of individual neuronal cell types at really high resolution. 

In a typical experiment we combine all these techniques, and what this allows us to do is to study brain function as it happens in the real world, where we’re always learning new information.  The animal is performing a behavioral task while listening to these mathematically generated sound stimuli, while we’re recording the activity of those neurons and simultaneously perturbing the activity of some of the neurons, letting some of those neurons respond to the stimuli as they would under natural conditions. 

It’s very exciting that you’re using so many state-of-the-art tools and emerging tools in combination to answer such complex questions.

We’ve really benefited from the recent growth in the overall set of techniques.  When I was starting to study systems neuroscience in my graduate work, I was actually interested in very similar research questions, even though I was working in the salamander retina, and what we tried to understand was how ensembles of neurons function together.  And at that point, it was only possible to study the system under natural conditions in isolated neuronal tissue.  Now, the retina could live in a dish for many days, and we could run very involved, complicated experiments, manipulating the activity levels of individual neuronal cell types and recording the activity from populations of neurons, and again, using computational techniques.  This was, of course, in vision, but ultimately we were restricted by the fact that it was isolated tissue, and I always wanted to move into the cerebral cortex, which is a really important part of our brain.  People think that that’s what really makes us human.  Only recently has it become possible to do this without sacrificing control over the state of the animal, and to integrate it with a behavioral task.  So it’s a much richer repertoire of tools within which we can study the function of the brain. 

This year, this systems neuroscience approach to mapping brain function was recognized by President Obama and the NIH as a top priority, which you might have heard of as the BRAIN Initiative. 

Maria, I know that you and your postdoc, Mark Aizenberg, recently published a study in Nature Neuroscience with some pretty amazing findings regarding associations between emotions and the ability to discriminate sounds, shedding new light on some previous, seemingly contradictory findings.  Would you tell us more about that research?

This research was again in the context of trying to understand how the brain encodes sounds in the real world, where we constantly have to learn to discriminate between different types of sounds.  For an animal, for example, it’s very important to be able to detect behaviorally relevant, very specific sounds, for example, the sound of an owl flying overhead, or the sound of a predator’s footsteps.  To us, of course, there is a huge variety of sounds that we constantly need to pay attention to, which our brain learns to associate with danger, for example, the sound of an alarm or a siren.  What we tested was how well our brain can learn to discriminate.  Once our brain has learned to associate a specific emotional value with a particular sound, how does that affect our ability to discriminate between different sounds in general?  Does becoming afraid of something, for example, change our ability to tell apart different sensory stimuli? 

There was some earlier work that actually resulted in somewhat controversial findings.  This was done using a model of what’s called aversive learning, where we learn to become afraid of something that was previously neutral.  In one study there had been a finding that if you learn that some sound is followed by something unpleasant, by an unpleasant stimulus, then your brain actually becomes less sensitive to the difference between different types of sounds.  So it’s as if you had a really great sense of pitch, and it actually decreases once you become afraid of one of the sounds that you’re listening to.  This was explained by the idea that it might actually be beneficial for the organism, when exposed to something aversive, to generalize that sense of aversion to other similar sounds.  And this doesn’t have to be restricted to sounds; this can of course extend to all the different perceptual senses.

On the other hand, a similar study was conducted using two different scents, and these were scents that people couldn’t really tell apart beforehand, even though they were two slightly different chemicals.  When people were trained to associate one of those chemicals with a negative stimulus, they actually became able to perceptually tell the two odors apart from each other.  So in a way their sense of how well they can tell apart different sensory cues increased, in this case, as a result of a very similar type of learning, what we call emotional learning. 

My post-doc noticed that there was actually something that differed between these two studies, and that was what was required during the emotional learning.  In one study, the emotional learning was restricted to something that was perceptually obvious.  The subjects in that study were trained on two tones that were perceptually very far apart, and so it was easy to discriminate one tone from the other and to associate one of the tones with a negative stimulus.  Whereas in the study that tested odor perception, the two odors were really close together, and that led to the opposite effect.  Our hypothesis was that how precise the emotional learning needs to be is closely linked to the resulting changes in how sharp our sensory acuity becomes.  What that meant was that if the sounds used during the emotional learning are very close together, then we predicted this would not only translate into much more precise emotional learning, but it would also translate into changes in sensory discrimination, changes in the sense of pitch of the animals that we’re testing.  If we didn’t ask for very precise emotional learning from the animals, they would not develop such a precise emotional response, and this would translate into an actual worsening of sensory acuity. 

Well it’s very impressive that you’ve actually been able to identify the brain mechanisms that underlie these activities.  And I understand that there are some actual implications for understanding, or getting a better picture of, conditions such as PTSD and anxiety.  Could you tell us about the translational potential? 

One of the things that happens in post-traumatic stress disorder is that a fearful emotional experience becomes translated into fear that the patient develops in response to everyday sensory stimuli.  So for example, for a veteran who was traumatized in combat by the sound of bombs exploding, when they come home there are many different types of sounds, such as the sound of thunder, that can trigger a very strong emotional response.  That means that in a way they are generalizing from one sound to another in their emotional learning, and that’s why we use emotional learning.  It’s in a way a model for developing anxiety, or more specifically post-traumatic stress disorder.  But what’s interesting is that some veterans develop PTSD while others who have been in the exact same combat situations, with the exact same training, do not.  In our experiments, we also see that there is huge variability in the effects that individual animals exhibit in response to the exact same emotional training that they undergo, and there is a difference both in the sensory response and also in how much they generalize, or how specific their emotional response becomes to the stimuli that they’re trained on.  And we think that there is actually a parallel between this and the differences that you can see in the emotional state of the veterans.  So that’s a group of people who have undergone the same emotional experiences, but some of whom have developed PTSD and some of whom have not, and we’re thinking of ways in which we could use these animal models to develop some basic sensory test that would allow us to predict whether certain individuals are more at risk of developing PTSD than others. 

And also, on the flip side, for developing treatments and therapies, we’re trying to understand the circuits that underlie this learning and changes in the sensory perceptions that follow emotional learning.  We believe that the brain circuits are shared with those that are involved in the development of anxiety disorders, and that possibly by training these brain circuits we can develop new therapies for these disorders. 

Maria, that’s just fascinating work and certainly holds a lot of promise for helping some people who definitely need it.  So we’ll certainly keep an eye on that line of research.  Where is your research on the complex relationship between sounds and the brain headed from here?

Now that we have gotten a grip on some basic things that the auditory cortex is involved in, we’re trying to understand the details of the processing that takes shape within the auditory cortex.  On the one hand, we aim to understand how processing of complex sounds is modulated between different areas within the auditory cortex, such as the primary and the secondary auditory cortex.  And there we’re asking a very specific question, which is, how does our brain develop a representation of sounds that is invariant to some basic perturbations?  So for example, if I say the word “neurons” slowly or fast, you can still extract the meaning of that word.  And also if I lower my voice, or I raise the pitch of my voice, or somebody else says that word, you can still tell that it’s the same word.  So although several very different sound waveforms enter your ear, somewhere in your brain, the brain creates a representation of that word that’s invariant to those basic acoustic features. 

We think that that transformation happens somewhere in the auditory cortex, based on some recent results that we’ve obtained, and also on decades of work by other researchers that have identified the auditory cortex as this crucial area where the brain goes from representing the physical features of the sound to really an object-based representation.  And so we’re asking the question, as we go between different subdivisions within the auditory cortex, can we see a very gradual shift such that the representation becomes more invariant, so that it changes less as the pitch of the sound is changed, or as the temporal statistics are modified?  How does that shift occur at the level of populations of neurons?  And also, what role do different cell types play in that? 

That’s also important in parsing the auditory scene, in being able to hear my voice against a very loud background.  For example, if we were in the middle of a cafeteria and you were trying to listen to me, your auditory system would be shutting out what we call the background noise.  That’s where we’re going in trying to understand the processing of complex sounds, and with the study of emotional learning, we’re really pulling apart the different brain mechanisms that are involved, and also refining our behavioral approaches to be able to ask more realistic, more complex questions. 

Maria, it’s been a real pleasure to get to know you and your fascinating work.  We wish you the best of luck for continued success, and thanks so much for joining us today on FOCUS In Sound…

Thank you so much, Ernie, it was a real pleasure talking to you, and it was a real pleasure to be able to explain to you the development of the research in our Laboratory of Auditory Coding at the University of Pennsylvania.

We hope you’ve enjoyed this edition of the FOCUS In Sound podcast.  Until next time, this is Ernie Hood.  Thanks for listening!