Listening In On The Brain To Decode Speech

IRA FLATOW, HOST:

Up next: why scientists may be a step closer to reading your thoughts. What if Stephen Hawking, who lost his ability to speak, could communicate through a prosthetic device that read his thoughts? Well, scientists say that one day may be possible, thanks to new research. In a study published by the Public Library of Science, researchers say they were able to decode electrical activity in the brain. They looked at the electrical activity and were able to guess the words that people were actually listening to. Could that work the other way around? I'm thinking: you monitor the brainwaves associated with thoughts, with words, you learn what the brainwaves look like, and then you turn it around and read the thoughts by monitoring those brainwaves, and maybe turn them into commands, you know?

And should we be concerned that, beyond its utility value, this could lead to mental wiretapping - people listening in on what you're thinking? Dr. Robert Knight is a neurologist, neuroscientist and professor of psychology at U.C. Berkeley. He was a co-author of this study. He joins us from Berkeley. Welcome to SCIENCE FRIDAY.

ROBERT KNIGHT: Thank you for your interest in this.

FLATOW: Well, how could you not be interested in the ability to possibly read your thoughts?

KNIGHT: Well, you know, everybody has a brain, so it is an interesting thing to think about.

(SOUNDBITE OF LAUGHTER)

FLATOW: I'm tempted to say I wouldn't go that far, but...

(SOUNDBITE OF LAUGHTER)

FLATOW: Let's talk about what you did in the study. You were able to actually monitor the electrical - well, let's give people an idea of what you did. We actually have some tape that we're going to play, so tell us what we're going to hear.

KNIGHT: Are you going to play the tape now?

FLATOW: You know what, yes. We're going to play it, so tell us what we're going to be listening to.

KNIGHT: OK. You're going to hear the word that was spoken to the patient, who has electrodes over language areas of the brain. And then you're going to hear playback from our reconstruction from brain electrical activity - two different methods to try to decode, or understand, exactly what the patient heard. So you'll hear the spoken word and two reconstructions. And I think you'll hear four or five different words.

FLATOW: So you played the words to the patients, and then you listened. You monitored the electrical signals of the brain, and then you played them back to see what the machine thought the brain was hearing.

KNIGHT: Yes. We reconstruct from the data the spectral and other acoustic properties of the sound that came through your ear and ended up in your brain's language areas. It's a little bit like watching someone play the piano, where you figure each electrode is a piano key. There's no sound coming out of the piano, but you're a pretty good musician, and from which keys are being pressed, you can reconstruct in your mind the tune that's being played.

So, in our case, each electrode was like a piano key, and we were able to analyze the information in each electrode and put the piano piece back together, if you will. In this case, however, the piano piece was the word that was actually coming in from your ear and landing in the auditory areas of your brain.
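Knight's piano analogy corresponds to what the field calls stimulus reconstruction. A minimal linear version can be sketched in a few lines of Python: time-lagged activity from each electrode (each "piano key") is regressed onto the bins of an audio spectrogram. This is an illustration of the idea only, not the study's code; the ridge-regression approach, variable names, shapes and parameters are all assumptions made for the example.

```python
# Sketch of linear stimulus reconstruction from electrode recordings.
# Assumed, illustrative setup: ecog is (n_samples, n_electrodes) neural
# activity, spectrogram is (n_samples, n_freq_bins) of the heard audio.
import numpy as np

def build_lagged_features(ecog, n_lags):
    """Stack time-lagged copies of each electrode's signal.

    Each electrode is one "piano key"; lags let the model see a short
    window of its recent activity. Returns (n_samples, n_electrodes * n_lags).
    """
    n_samples, n_electrodes = ecog.shape
    lagged = np.zeros((n_samples, n_electrodes * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * n_electrodes:(lag + 1) * n_electrodes] = ecog[:n_samples - lag]
    return lagged

def fit_reconstruction(ecog, spectrogram, n_lags=10, ridge=1.0):
    """Fit weights W so that lagged features @ W approximates the spectrogram."""
    X = build_lagged_features(ecog, n_lags)
    # Closed-form ridge regression: W = (X'X + aI)^-1 X'Y
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ spectrogram)

def reconstruct(ecog, W, n_lags=10):
    """Predict the spectrogram a patient heard from new neural data."""
    return build_lagged_features(ecog, n_lags) @ W
```

The reconstructed spectrogram can then be resynthesized into audio, which is the kind of playback heard in the clip that follows.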

FLATOW: OK. Let's listen to that now.

(SOUNDBITE OF RECORDING)

UNIDENTIFIED MAN: Waldo, Waldo, structure, structure, doubt, doubt, property, property.

FLATOW: And that's it. Some...

KNIGHT: Yeah. Actually, you just played - that was four words, and each word had one of our two reconstructions there. So you heard the word, and then you heard the reconstruction.

FLATOW: That was amazing. Some of them were very close.

KNIGHT: It actually is pretty amazing, considering this is our first crack at this. And it turns out some of the electrodes are not very close to each other. Some of them were as far apart as a centimeter. In some patients, they were as close as four millimeters. But we know from other evidence that there's actually information in your cortex, in the surface of your brain, probably at about a millimeter or two. So it may be that if we can get better electrode spacing - higher density - we could improve on the reconstructions that you heard.

FLATOW: So if I said a phrase - two words, SCIENCE FRIDAY - and you tapped into the brain to see how the electrical activity occurred, you could then, so to speak, put it on a tape recorder, play it back through your reconstruction, and we would know that that's what was heard.

KNIGHT: Well, you'd know it to the level you could make these words out right now. We know statistically that we are 90 percent accurate in picking out what word was heard from a choice of a couple of words. But we are no better than what you just heard right now.
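That 90 percent figure describes a forced-choice identification step: the reconstruction is compared against a small set of candidate words, and the best match wins. A hypothetical sketch of that comparison, assuming precomputed candidate spectrograms of equal shape and simple correlation scoring (the study's own metric may differ):

```python
# Illustrative word identification: pick the candidate word whose
# spectrogram best correlates with the reconstruction.
import numpy as np

def identify_word(reconstructed, candidates):
    """candidates: dict mapping word -> spectrogram, same shape as reconstructed."""
    scores = {}
    for word, spec in candidates.items():
        # Pearson correlation between flattened spectrograms
        scores[word] = np.corrcoef(reconstructed.ravel(), spec.ravel())[0, 1]
    return max(scores, key=scores.get), scores
```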

FLATOW: Mm-hmm. But it is one step closer to being able to read what our thoughts are.

KNIGHT: Well, it takes us up, if you will, the perceptual food chain. It's been known for many years that the brain areas that are active when you perceive things - let's say you look at an orange - are the same areas that respond if you mentally visualize an orange. And similarly, in the motor system, if you move your right arm, and now you think about moving your right arm, the same areas are active. In fact, that's the principle we use to try to grab signals when we're creating motor-assistive devices for people with, for instance, strokes, or quadriplegia from a spinal cord injury. And we think the same thing holds in the auditory system.

So if I say two words, SCIENCE FRIDAY, and now you imagine SCIENCE FRIDAY, we know from other evidence - not this electrical recording, but other brain imaging techniques - that approximately the same area is activated. So the hope, of course, is that we can go to the next level, which is not just reconstructing what you heard, but reconstructing exactly what you're thinking. That would be a tremendous advance. For neurological patients, it could really lead to implanted assistive devices for people who can't speak but know what they want to say - people whose language and thought are trapped inside their heads.

FLATOW: I guess the most famous of those would be Stephen Hawking.

KNIGHT: Well, he's certainly a perfect example. But so is any patient with amyotrophic lateral sclerosis - Lou Gehrig's disease - or patients who are, quote, "locked in" from a stroke in their brain stem. Or, for an even bigger group, people who have had a stroke. Most people's language is on the left side of the brain, so with a left-sided stroke, you can lose the ability to understand what's being said to you. Or you can get a stroke where you can understand, but you can't produce output. The patients who can't produce output have what's called Broca's aphasia. They can't speak. They would be a prime target, because the word form is in there.

Interestingly, the area we get our best word reconstruction from is actually the area where, if you damage it from some neurological problem - most commonly a stroke - you lose the ability to understand words. So there's a nice link there. In fact, that's partly why we focused there. We thought it would be an important spot to focus on.

FLATOW: So you chose patients who were in a hospital in order to study that area?

KNIGHT: Well, we didn't choose them. We knew that area would be a primary target for our reconstruction algorithm. The patients had the electrodes implanted, but not by us, and we had no control over where they went. They were implanted for clinical reasons. These are patients who have what are called intractable, or uncontrolled, seizures; they don't respond very well to medications. And in a subgroup of those patients - which is a large number of patients - if you can pinpoint the start of the electrical discharge for the epilepsy, the surgeon can surgically take it out and get control rates in the 70 to 80 percent range.

So the patients are sitting in the hospital. The surgeon and the epilepsy doctor, the neurologist, pick where the electrodes go. And then the patients are sitting there for two, three, four, sometimes seven or eight days, waiting to have a seizure. And if they feel like it, they participate in different experiments. One of the types of experiments we've been working on is language representation.

FLATOW: And, of course, these electrodes are placed directly on the brain.

KNIGHT: They are directly on the surface of the brain. There is no intervening dura, skull or scalp. We can't do this right now with scalp recordings; the fidelity of the signal is just not good enough in several dimensions. A scalp electrode can probably pick up activity from three centimeters of brain tissue, and we really need it to be much finer to make these kinds of reconstructions.

FLATOW: When you say it needs to be much finer - physically, what technology, what kind of breakthrough is needed here?

KNIGHT: Well, I think it's higher-density grids. There will be a couple of things. We're working on grids with 800-micron spacing, which is less than a millimeter. We'd be very happy if we could get two-millimeter grids, though, because that would almost quadruple the amount of information. Really, the brain is an information processor, so the more information we can get out of it, the better we can do at making sense of what the brain is doing.
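The quadrupling follows from simple geometry: the electrode count over a fixed patch of cortex scales with the square of the linear density, so halving the spacing gives four times the channels. A quick back-of-the-envelope check, with an invented patch size purely for illustration:

```python
# Electrode count over a square patch of cortex at a given grid spacing.
# The 40 mm patch size is an assumption for illustration only.
def electrodes_per_patch(patch_mm=40.0, spacing_mm=4.0):
    per_side = int(patch_mm // spacing_mm)
    return per_side * per_side

print(electrodes_per_patch(spacing_mm=4.0))  # 100 electrodes at 4 mm spacing
print(electrodes_per_patch(spacing_mm=2.0))  # 400 electrodes at 2 mm: 4x the channels
```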

One of the things we're doing at Berkeley: we have a neural engineering and prosthetics program we just launched between Berkeley and UCSF. And our engineering colleagues are actually working on an implantable wireless device, which is what you would eventually need. Right now, of course, the wires come out of the head, and that wouldn't work. If you're going to have something that works on the same kind of concept as a pacemaker for heart disease or a cochlear implant for hearing, it has to be fully implantable, safe, wireless and externally chargeable.

FLATOW: This is SCIENCE FRIDAY, from NPR. I'm Ira Flatow, talking with Dr. Robert Knight about recognizing words. Let me put it crudely: being able to recognize words with a device whose electrodes are implanted in your brain, and recognizing them well enough that you can play them back and know what the words are. Would that be accurate?

(SOUNDBITE OF LAUGHTER)

KNIGHT: That is perfect. You passed.

(SOUNDBITE OF LAUGHTER)

FLATOW: Thank you. So, you know, it sounds so science-fictiony.

KNIGHT: I know it does. But the brain makes behaviors. And what's happened in the last couple of decades is that we've had an explosion of methods to monitor not just the animal brain, but the human brain. That has taken this to the next level - thanks to these unfortunate patients with epilepsy, of course. But the patients are great. They understand this research is important, and while they're waiting to be treated, they let us monitor their brains.

FLATOW: Let me get a quick question in here. Let's go to the phones for Bill in South Bend, Indiana. Hi, Bill.

BILL: Amazing topic, Ira. Real quick, would this give us a means to possibly communicate with coma and stroke victims? And I'll take my question off the air. Thank you.

FLATOW: OK.

KNIGHT: Sure. Well, that's a great question. I think it definitely would give us a means to communicate with stroke patients - particularly patients with what's called Broca's aphasia, where the areas that extract the meaning of words and formulate what you want to say are intact, but the output areas in the front part of the brain are damaged. The understanding part of the brain is in the back part of the temporal lobe, and the output part is in the frontal lobe.

So we do believe that would be a prime clinical group that could benefit from this. Coma is another story. If someone is truly in a coma - not just the appearance of a coma - their cortex, by definition, doesn't work. So I don't think the device we're envisioning and working on would work for them. But for people who are in, for instance, a minimally conscious state - not a vegetative state or a coma - it could potentially be useful. In fact, we've had some thoughts about it being used in that particular group.

FLATOW: And as far as people being concerned that this could lead to mental wiretapping - you're not at that point, right? You have to get the electrodes implanted, and it's not a wireless thing yet, and...

KNIGHT: Well, it's not only that. Scientifically, we're only at the phase of understanding that I said to you, Ira, SCIENCE FRIDAY, and I can read out SCIENCE FRIDAY from your brain. Right now, we don't have a method to analyze what you're imagining. Of course, that's the next important step in making this a translational device - meaning something that is not just telling us neat stuff about how the brain works, but can be taken into a potential device.

That particular step is already being broached in motor control. Many colleagues around the world have made progress so that we can record imagined motor signals and turn them into a device that a patient could actually use to drive a wheelchair or open their email.

FLATOW: All right. We've talked about that, and we'll follow your work, if you don't mind. Thank you very much, Dr. Knight.

KNIGHT: Thank you.

FLATOW: Robert Knight is a neurologist, neuroscientist and professor of psychology at the University of California, Berkeley, and a co-author of the study published by the Public Library of Science. Transcript provided by NPR, Copyright NPR.
