Speech Understanding by Cochlear Implant Patients in Complex Listening Environments

This presentation focuses on speech perception outcomes when people with one or two cochlear implants listen in test environment simulations of restaurants and cocktail parties.

The following is a transcript of the presentation video, edited for clarity.

I’m going to talk about speech understanding in simulations of complex environments by my implant patients. My disclosure slide says that the work is supported both by the NIDCD and cochlear implant companies who pay my staff salaries, my patient travel, and allow me to buy new golf balls every once in a while when I need new golf balls.

I spend most of my time at cochlear implant meetings. I rarely come to a meeting like this; that’s why it’s such a treat to be here. And so it’s likely that you won’t hear me again unless you come to one of my specialty meetings, which I don’t advise, because actually they’re very boring. I mean, they’re surgeons, you know. So I’m going to start today, because you probably won’t hear me again, with three things that you should know about cochlear implants in addition to the topic that I’m going to talk about. And they are relevant to the topic, although it’s a bit of a stretch.

A history of cochlear implants

So we’ll start with a little history of cochlear implants. The history starts in France with these two fellows, a surgeon and a physiologist, who had a patient whose cochlea had been removed but who still had a stump of an eighth nerve. They attached an induction coil to an electrode, and they could stimulate the stump of the eighth nerve. The patient heard things that he described as cricket-like. So electrical stimulation of the eighth nerve obviously was possible. Now, revisionist historians have actually said that this was probably the first brainstem implant, because the stump was actually almost non-existent, and so maybe the first implant was a brainstem implant.

Now, word of this traveled from France to Los Angeles, where Bill House was told about this work in France by a patient, and he did the first cochlear implant, that is, into the scala tympani. Now, what’s critical about Bill is that he’s famous for the single-channel cochlear implant. It is perfectly clear that he knew better than that at the beginning, because his first surgery was one wire, but the second one, a few months later, was five wires. So you have to believe that he knew perfectly well that you had to restore a wide range of frequencies in order to understand speech. He took a hiatus from his work to develop biocompatible materials, and then started again in 1969. And again, the first three patients had five wires, not one, because again, I’m reasonably sure he thought this had to work better than one. But given the technology at the time and his skill set, he couldn’t make them work better than one. So House eventually became known for the House single-channel implant, although I have to believe that Bill knew all along that he was going to need more.

By the end of his life he convinced himself that one was good enough. Which tells you you can have a good idea at one time in life and a bad idea later on. But he is the father of cochlear implants.

Now, word of Bill’s work, which again was in the early nineteen sixties, spread north to San Francisco, where Blair Simmons was at Stanford. And shortly after, Blair put five wires into the modiolus of a patient. In Australia, Graeme Clark began work in the late 1960s, on animals first and then humans, and out of his work came Cochlear Corporation. The lads up the road at UC San Francisco, Michelson and Merzenich, took up the work around 1970. Don Eddington started a project in 1970. Back in France, Claude Henri Chouard started a project in the early 70s, and by the mid-70s had a handful of patients with multiple-channel implants. He was smart enough to take out a patent, and now claims that everybody stole his ideas from his patent. But the issue is, his patients had no speech understanding, and why would one bother to steal that?

If you want a good story about this you should ask Professor Tyler who had one of the great grants of all time. He conned NATO into sending him to Europe on a busman’s holiday. Wandering about Europe testing the early cochlear implant patients — France, Germany, England. Ask him how that went.

Ok so my point here is that after Bill House’s initial work, word spread around the world and all of these projects got started just about the same time. And multi-channel implants came out of this, out of House’s original work, but the work was almost simultaneous worldwide.

Now then, the last person I’ll mention. Oh no, I’m wrong. And then in Vienna, the Hochmairs, Inge and Erwin. And now we have all the modern manufacturers. In Australia, Graeme Clark and Cochlear Corporation. The UC San Francisco group eventually evolved into Advanced Bionics. And the Hochmairs set up Med-El. The last person to mention is Blake Wilson, in 1983.

At this time, quite reasonably, the manufacturers kept their signal processing to themselves, which you should if you own a company and you want to make money. You don’t give away your secrets. So the NIH decided to fund Wilson’s group — Dewey Lawson, Charlie Finley, and Blake — to develop signal processing for cochlear implants that would not be proprietary. And in fact, Blake and his team made a decision at that time to give away all of the IP. Today every implant uses some aspect of Wilson’s work. The amount of IP he gave up is estimated to be well over $50 million. So you can decide whether that was a smart decision or not. All right. And that leads us to the modern cochlear implant, which looks like this.

The central auditory system

All right, now then. Second item: the central auditory system. When I was a student, this was the drawing of the central auditory system. This is the periphery; the cochlea is on the right, as I hope you understand. And then we have a brainstem, a midbrain, and then the wire, as it were, ends at the auditory cortex. This is the famous Netter drawing, which may still be used in undergraduate classes, I think. The problem is, this is absolutely wrong. The wire doesn’t stop there. The wire in fact goes everywhere, as others have said in this meeting. The central auditory system certainly doesn’t stop at Heschl’s gyrus; it goes all over the brain. Which is relevant to my next point. Here is a current view of speech perception, the so-called dual-stream model. You don’t have to know what all the boxes are, but the point is that speech information, or acoustic information, is reasonably well thought to go simultaneously ventrally to a lexical interface, and dorsally to an interface with the articulatory system and the large blue area in front, which is the inferior frontal cortex. That area, the IFC, is now the hot area for research, because it is involved in almost all speech perception tasks that are even minimally complex, and it involves attention. And it even turns out that Broca’s area, which you learned was up there somewhere, probably isn’t where you were taught it was.

So the auditory pathways go all over, and then we find out that speech perception is not entirely auditory. Consider the famous McGurk effect, where you have an auditory input that might be /ba/. The lips that you’re watching at the same time say something like /ga/. And what you hear is neither of the above; you hear something like /da/ or a voiced ‘th’. There are many versions of the McGurk and MacDonald effect. You know, it’s too bad if you’re the graduate student who’s the second one. Because everybody knows the McGurk effect, but what about the other guy? I mean, that’s not fair. All right.

So that’s visual input altering speech perception. But so does tactile input, and this I find interesting. So here’s a classic experiment. What’s going on here is you have these wonderful micromanipulators that are wired to little pieces of tape at the edge of the lips. And while you’re listening, this thing can pull your lips up, or it can pull them down. Okay, up or down, while you’re listening. Now, that will alter what you hear if there’s a continuum from ‘head’ to ‘had’. ‘Head’ has spread lips; ‘had’ does not. And so these are (I just made this up) two representations of the vowels. On the bottom we have percent ‘eh’; the first two are heard as ‘eh’, and then if we pull slightly up on the lips, you hear one more member of that continuum as ‘eh’. It’s really a cool experiment. The effect is tiny, but that’s not the point. The point is that there is an effect. So the gizmo that’s doing speech perception not only has auditory input and visual input, but it’s also attending to tactile input. And so this has to be a multi-modal decision-making cell, or group of cells. Or maybe it’s even amodal, which is to say all of these separate modalities have to get translated into a common modality in order to make a decision.

And recently we were fiddling around with tactile input, for reasons that aren’t very interesting. This is the stimulator for the so-called BAHA, a bone-anchored hearing aid, and we extracted it. You hold the bit on the right between your fingers, and we drove it with the fundamental frequency and the amplitude envelope. So we have an implant patient listening, and in one hand they’re holding this thing and it’s vibrating. And what it’s vibrating at is the F0 and the amplitude envelope. Once again we get a small benefit in speech understanding. It’s trivial, absolutely uninteresting, but it was real. And one of the patients — and these are the things that make your day in the laboratory — one of the patients said, “Dr. Dorman, it sounded like you were talking through my finger.” Isn’t that cool? I mean, that is really, really cool. How could it possibly be that it sounds like it’s coming through the finger? Think about that.

About the Author

Michael Dorman
Arizona State University

Presented at the 26th Annual Research Symposium at the ASHA Convention (November 2016).
The Research Symposium is hosted by the American Speech-Language-Hearing Association, and is supported in part by grant R13DC003383 from the National Institute on Deafness and Other Communication Disorders (NIDCD) of the National Institutes of Health (NIH).
Copyrighted Material. Reproduced by the American Speech-Language-Hearing Association in the Clinical Research Education Library with permission from the author or presenter.
