The brain implants that could change humanity

Last Updated 30 August 2020, 22:03 IST

Jack Gallant never set out to create a mind-reading machine. His focus was more prosaic. A computational neuroscientist at the University of California, Berkeley, Dr. Gallant worked for years to improve our understanding of how brains encode information — what regions become active, for example, when a person sees a plane or an apple or a dog — and how that activity represents the object being viewed.

By the late 2000s, scientists could determine what kind of thing a person might be looking at from the way the brain lit up — a human face, say, or a cat. But Dr. Gallant and his colleagues went further. They figured out how to use machine learning to decipher not just the class of thing, but which exact image a subject was viewing. (Which photo of a cat, out of three options, for instance.)
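For a concrete sense of what that kind of decoding involves, here is a minimal sketch in Python of identifying which of a few images was viewed from noisy activity patterns. It uses synthetic "voxel" data and an off-the-shelf linear classifier; the voxel count, trial count and classifier are illustrative assumptions, not the lab's actual methods.

```python
# A minimal sketch of image identification from brain activity, not the
# Gallant lab's actual pipeline. The synthetic "voxel" data stand in for
# real fMRI recordings; all sizes and the classifier are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_voxels = 200             # simulated visual-cortex voxels
n_trials_per_image = 40    # simulated viewings of each photo
image_labels = [0, 1, 2]   # e.g. three different photos of a cat

# Each image evokes a distinct (but noisy) activation pattern.
prototypes = rng.normal(size=(len(image_labels), n_voxels))
X = np.vstack([
    prototypes[label] + rng.normal(scale=1.0, size=(n_trials_per_image, n_voxels))
    for label in image_labels
])
y = np.repeat(image_labels, n_trials_per_image)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a linear classifier on training trials, then ask which exact image
# the held-out activity patterns correspond to.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out identification accuracy:", clf.score(X_test, y_test))
```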

One day, Dr. Gallant and his postdocs got to talking. In the same way that you can turn a speaker into a microphone by hooking it up backward, they wondered if they could reverse engineer the algorithm they’d developed so they could visualize, solely from brain activity, what a person was seeing.

The first phase of the project was to train the AI. For hours, Dr. Gallant and his colleagues showed volunteers in fMRI machines movie clips. By matching patterns of brain activation with the moving images that prompted them, the AI built a model of how the volunteers’ visual cortex, which parses information from the eyes, worked. Then came the next phase: translation. As they showed the volunteers movie clips, they asked the model what, given everything it now knew about their brains, it thought they might be looking at.
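Those two phases map onto a simple computational recipe: fit an "encoding" model that predicts brain responses from what is on the screen, then decode by asking which candidate clip best explains a new pattern of activity. The sketch below illustrates that logic with invented features and responses; it is an assumption-laden stand-in, not the study's actual motion-energy model or data.

```python
# A rough sketch of the two-phase logic: fit an encoding model that predicts
# voxel responses from stimulus features, then decode by asking which
# candidate clip best explains new brain activity. All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_features, n_voxels = 50, 300

# Phase 1: training. Simulated clip features and the voxel responses they evoke.
true_weights = rng.normal(size=(n_features, n_voxels))
train_features = rng.normal(size=(500, n_features))
train_responses = train_features @ true_weights + rng.normal(scale=2.0, size=(500, n_voxels))
encoder = Ridge(alpha=10.0).fit(train_features, train_responses)

# Phase 2: translation. Given activity evoked by an unknown clip, score every
# candidate clip by how well its predicted response matches what was observed.
candidate_features = rng.normal(size=(20, n_features))   # 20 possible clips
true_clip = 7
observed = candidate_features[true_clip] @ true_weights + rng.normal(scale=2.0, size=n_voxels)

predicted = encoder.predict(candidate_features)          # shape: (20, n_voxels)
scores = [np.corrcoef(observed, p)[0, 1] for p in predicted]
print("decoded clip:", int(np.argmax(scores)), "| true clip:", true_clip)
```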

The experiment focused just on a subsection of the visual cortex. It didn’t capture what was happening elsewhere in the brain — how a person might feel about what she was seeing, for example, or what she might be fantasizing about as she watched. The endeavor was, in Dr. Gallant’s words, a primitive proof of concept.

And yet the results, published in 2011, are remarkable.

The reconstructed images move with a dreamlike fluidity. In their imperfection, they evoke expressionist art. (And a few reconstructed images seem downright wrong.) But where they succeed, they represent an astonishing achievement: a machine translating patterns of brain activity into a moving image understandable by other people — a machine that can read the brain.

Dr. Gallant was thrilled. Imagine the possibilities once better brain-reading technology became available. Imagine the people suffering from locked-in syndrome or Lou Gehrig’s disease, or those incapacitated by strokes, who could benefit from a machine that helped them interact with the world.

For decades, we’ve communicated with computers mostly by using our fingers and our eyes, by interfacing via keyboards and screens. These tools and the bony digits we prod them with provide a natural limit to the speed of communication between human brain and machine. We can convey information only as quickly (and accurately) as we can type or click.

What technologies will power the brain-computer interface of the future is still unclear. And if it’s unclear how we’ll “read” the brain, it’s even less clear how we’ll “write” to it.

This is the other holy grail of brain-machine research: technology that can transmit information to the brain directly. We’re probably nowhere near the moment when you can silently ask, “Alexa, what’s the capital of Peru?” and have “Lima” materialize in your mind.

Rafael Yuste, a neurobiologist at Columbia University, counts two great advances in computing that have transformed society: the transition from room-size mainframe computers to personal computers that fit on a desk (and then in your lap), and the advent of mobile computing with smartphones in the 2000s. Noninvasive brain-reading tech would be a third great leap, he says.

Not many people will volunteer to be the first to undergo a novel kind of brain surgery, even if it holds the promise of restoring mobility to those who’ve been paralyzed. So when Robert Kirsch, the chairman of biomedical engineering at Case Western Reserve University, put out such a call nearly 10 years ago, and one person both met the criteria and was willing, he knew he had a pioneer on his hands.

The man’s name was Bill Kochevar. He’d been paralyzed from the neck down in a biking accident years earlier. His motto, as he later explained it, was “somebody has to do the research.”

At that point, scientists had already invented gizmos that helped paralyzed patients leverage what mobility remained — lips, an eyelid — to control computers or move robotic arms. But Dr. Kirsch was after something different. He wanted to help Mr. Kochevar move his own limbs.

The first step was implanting two arrays of sensors over the part of the brain that would normally control Mr. Kochevar’s right arm. Electrodes that could receive signals from those arrays via a computer were implanted into his arm muscles. The implants, and the computer connected to them, would function as a kind of electronic spinal cord, bypassing his injury.
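In software terms, the bypass is a loop: read cortical activity, decode the intended movement, and translate it into stimulation commands for the arm muscles. The following is a deliberately simplified, hypothetical sketch of that loop; the decoder, channel count and muscle mapping are invented for illustration, not taken from the Case Western system.

```python
# A highly simplified sketch of the "electronic spinal cord" idea: decode an
# intended movement from cortical signals, then translate it into stimulation
# commands. Every detail here is a hypothetical stand-in for the real system.
import numpy as np

rng = np.random.default_rng(2)
n_channels = 96                                    # electrodes per array (illustrative)
decoder = rng.normal(size=(n_channels, 2)) * 0.1   # maps firing rates -> (reach, lift) intent

def decode_intent(firing_rates: np.ndarray) -> np.ndarray:
    """Turn a vector of per-channel firing rates into a 2-D movement intent."""
    return firing_rates @ decoder

def stimulation_commands(intent: np.ndarray) -> dict:
    """Map the decoded intent onto stimulation levels for a few muscle groups."""
    reach, lift = intent
    return {
        "biceps_stim": float(np.clip(lift, 0, 1)),
        "triceps_stim": float(np.clip(-lift, 0, 1)),
        "deltoid_stim": float(np.clip(reach, 0, 1)),
    }

# One pass through the loop: cortical activity in, muscle stimulation out.
firing_rates = rng.poisson(lam=5, size=n_channels).astype(float)
print(stimulation_commands(decode_intent(firing_rates)))
```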

Once his arm muscles had been strengthened — achieved with a regimen of mild electrical stimulation while he slept — Mr. Kochevar, who at that point had been paralyzed for over a decade, was able to feed himself and drink water. He could even scratch his nose.

About two dozen people around the world who have lost the use of limbs from accidents or neurological disease have had sensors implanted on their brains. Many, Mr. Kochevar included, participated in a United States government-funded program called BrainGate. The sensor arrays used in this research, smaller than a button, allow patients to move robotic arms or cursors on a screen just by thinking. But as far as Dr. Kirsch knows, Mr. Kochevar, who died in 2017 for reasons unrelated to the research, was the first paralyzed person to regain use of his limbs by way of this technology.

This fall, Dr. Kirsch and his colleagues will begin version 2.0 of the experiment. This time, they’ll implant six smaller arrays — more sensors will improve the quality of the signal. And instead of implanting electrodes directly in the volunteers’ muscles, they’ll insert them upstream, circling the nerves that move the muscles. In theory, Dr. Kirsch says, that will enable movement of the entire arm and hand.

The next major goal is to restore sensation so that people can know if they’re holding a rock, say, or an orange — or if their hand has wandered too close to a flame. “Sensation has been the longest ignored part of paralysis,” Dr. Kirsch told me.

A few years ago, scientists at the University of Pittsburgh began groundbreaking experiments on that front with a man named Nathan Copeland who was paralyzed from the upper chest down. They routed sensory information from a robotic arm into the part of his cortex that dealt with his right hand’s sense of touch.

Every brain is a living, undulating organ that changes over time. That’s why, before each of Mr. Copeland’s sessions, the AI has to recalibrate — to construct a new brain decoder. “The signals in your brain shift,” Mr. Copeland told me. “They’re not exactly the same every day.”
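The toy example below shows why that recalibration matters: a decoder fit on yesterday's (simulated) signals degrades once the mapping drifts, while one refit on a short calibration block recovers. The drift size and data are invented assumptions, not measurements from Mr. Copeland's sessions.

```python
# A toy illustration of decoder drift and recalibration: the mapping from
# neural activity to intent shifts between sessions, so yesterday's decoder
# does worse today unless it is refit on a short calibration block.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_channels = 96

def session_data(day_weights, n=200):
    X = rng.normal(size=(n, n_channels))
    y = X @ day_weights + rng.normal(scale=0.5, size=(n, 2))
    return X, y

base = rng.normal(size=(n_channels, 2)) * 0.2
day1_w = base
day2_w = base + rng.normal(scale=0.05, size=base.shape)   # the signals have shifted

X1, y1 = session_data(day1_w)
X2, y2 = session_data(day2_w)
X2_cal, y2_cal = session_data(day2_w, n=50)               # today's short calibration block

stale = LinearRegression().fit(X1, y1)
recalibrated = LinearRegression().fit(X2_cal, y2_cal)

print("yesterday's decoder on today's data:", round(stale.score(X2, y2), 3))
print("recalibrated decoder on today's data:", round(recalibrated.score(X2, y2), 3))
```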

And the results weren’t perfect. Mr. Copeland described them to me as “weird,” “electrical tingly” but also “amazing.” The sensory feedback was immensely important, though, in knowing that he’d actually grasped what he thought he’d grasped. And more generally, it demonstrated that a person could “feel” a robotic hand as his or her own, and that information coming from electronic sensors could be fed into the human brain.

Preliminary as these experiments are, they suggest that the pieces of a brain-machine interface that can both “read” and “write” already exist. People can not only move robotic arms just by thinking; machines can also, however imperfectly, convey information to the brain about what that arm encounters.

Edward Chang, a neurosurgeon at the University of California, San Francisco, who works on brain-based speech recognition, said that maintaining the ability to communicate can mean the difference between life and death. “For some people, if they have a means to continue to communicate, that may be the reason they decide to stay alive,” he told me. “That motivates us a lot in our work.”

In a recent study, Dr. Chang and his colleagues predicted with up to 97% accuracy — the best rate yet achieved, they say — what words a volunteer had said (from about 250 words used in a predetermined set of 50 sentences) by using implanted sensors that monitored activity in the part of the brain that moves the muscles involved in speaking.
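One way to picture that kind of closed-set decoding is as a classification problem over the 50 sentences. The sketch below uses synthetic activity patterns and a plain linear classifier, a stand-in for the study's far more sophisticated neural networks; every number in it is an assumption.

```python
# A minimal sketch of decoding over a closed set of sentences. Each "sentence"
# evokes a characteristic pattern of activity in the simulated speech motor
# cortex, and a linear classifier identifies which of the 50 was spoken.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_sentences, n_features, reps = 50, 256, 20

prototypes = rng.normal(size=(n_sentences, n_features))
X = np.vstack([p + rng.normal(scale=1.0, size=(reps, n_features)) for p in prototypes])
y = np.repeat(np.arange(n_sentences), reps)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("closed-set sentence identification accuracy:", round(clf.score(X_test, y_test), 3))
```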

Dr. Chang used sensor arrays similar to those Dr. Kirsch used, but a noninvasive method may not be too far away. This progress isn’t solely driven by advances in brain-sensing technology — by the physical meeting point of flesh and machine. The AI matters as much, if not more.

Trying to understand the brain from outside the skull is like trying to make sense of a conversation taking place two rooms away. The signal is often messy and hard to decipher. So the same types of algorithms that now allow speech-recognition software to do a decent job of understanding spoken language — including individual idiosyncrasies of pronunciation and regional accents — may be what enables brain-reading technology.
