From Stephen Hawking to The Matrix: Making Science Fiction Come True (2)

The Battle Between Invasive and Non-invasive Technologies

Around 2005, BCI research once again became a focus of biomedical engineering. In the summer of that year, an international conference on BCI was held in a valley in Albany, New York, with more than 100 attendees. The conference gathered the world’s earliest BCI researchers, who brought innovative ideas on inducing brain electrical activity, processing brain electrical signals, and more.

The chairman of the conference was Professor Jonathan Wolpaw of the Wadsworth Center in New York State. He believed that non-invasive scalp EEG was the future of BCI technology: patients with motor impairments, like Stephen Hawking, could type or steer a wheelchair by wearing an EEG cap, which records and recognises simple electrical activities of the brain with the help of pattern-recognition programs. Dr. Wolpaw is a compassionate scholar. He noticed that a different, invasive type of BCI technology had also made great progress. This type of BCI embeds silicon electrode arrays into the brain through surgery to access the brain's motor control areas. It can capture the electrical discharges of nerve cells, which, after being analysed by computers, can be used to control a cursor or move a mechanical arm. Although this technology allows for more precise control, the damage done by the surgery is unavoidable. Professor John Donoghue of Brown University, whom Professor Wolpaw had also invited to the conference, was a major advocate of invasive BCI. The debate between the two approaches was intense that day.

Non-Invasive BCI

Imagine you are wearing an EEG cap as you read this article. The EEG cap is covered with a regular array of metal electrodes (usually made of silver and silver chloride), each injected with conductive gel (typically a mixture of sodium chloride and adhesive substances). The gel forms an electrical bridge between your scalp and the metal electrodes, which are in turn connected to a bio-amplifier. Now, a weak change of electric potential is detected at the electrodes on your forehead and the back of your head. This potential change reflects your level of concentration, but the words you are reading and your specific thoughts cannot be detected.

If we compare your brain to a huge stadium, the audience inside are nerve cells, and they are all "chatting" privately. If you want to hear what each of them is saying, you need to put a microphone near them, which means entering the stadium. For our brains, that means invasive surgery. Of course, to avoid an incision, we can place the electrodes on the scalp surface instead, but this is like putting a microphone outside the stadium: we cannot listen in on individual conversations. Yet if there is a commotion inside, for instance when everyone is cheering, it can be heard from outside. Likewise, scalp electrodes cannot distinguish the activity of individual nerve cells, but they can detect a major electrical event in the brain when many nerve cells act in unison, as when half the stadium, or the whole stadium, cheers.

This comparison shows that EEG is a low-resolution neural sensing technology. According to electromagnetic field theory, EEG has a spatial resolution of only about one centimetre, which is to say, neural activity on a scale smaller than one centimetre is undetectable by EEG electrodes. If someone claims they can use non-invasive EEG to interpret mental activity in detail, such as the words or numbers in your mind or your memories, you shouldn’t take them seriously.

So why were there so many researchers specializing in non-invasive BCI at the Albany conference? Because it requires no incision, it is applicable in everyday situations. If we can design sophisticated visual or auditory stimuli that let BCI users load simple thoughts onto their own brain waves through attention or imagination, then we can analyse the users' brain-wave patterns to extract those thoughts. This is an indirect way to interpret thoughts: it uses EEG as the carrier of information, somewhat like the modulation and demodulation in a communication system. Jonathan Wolpaw's research group made it possible for people to type without touching a keyboard by studying the P300, an attention-related event-related potential. The P300 appears when the brain detects a low-probability external stimulus or a new event within a series of repetitive events. The potential is usually strongest at the centre of the head; its voltage is positive and its peak occurs about 300 milliseconds after the event, hence the name P300 (P for positive). If letters of the alphabet are shown to you in random order, a P300 will be detected at the crown of your head when the letter you have in mind appears, but not when other, unrelated letters appear. So if we flash the 26 English letters in random order, each popping up briefly on a computer screen, and measure your P300 through EEG at the same time, we can work out the letter you want to enter. Wolpaw’s research team applied this method and achieved great success. The obvious disadvantage of the method is that the 26 letters have to be flashed to the user over and over, to the point that the user’s eyes may fatigue. In addition, P300 brain-wave patterns can differ greatly between individuals, which is also challenging for algorithm development.
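To make the idea concrete, here is a minimal sketch of the averaging-and-scoring step, not Wolpaw's actual speller code. Everything in it is assumed for illustration: an `epochs` array of EEG recorded at the crown of the head, one row per letter flash and time-locked to flash onset, a `flashed_letters` list recording which letter each flash showed, and a 250 Hz sampling rate.

```python
import numpy as np

def detect_p300_letter(epochs, flashed_letters, fs=250):
    """
    Pick the most likely intended letter from one run of a P300 speller.

    epochs          : NumPy array (n_flashes, n_samples), EEG at the crown of the
                      head, each row time-locked to one letter flash (0 s = onset)
    flashed_letters : list of length n_flashes, the letter shown on each flash
    fs              : sampling rate in Hz (assumed 250 Hz here)
    """
    # The P300 is a positive deflection peaking roughly 300 ms after the stimulus;
    # score each letter by its mean amplitude in a 250-450 ms window.
    lo, hi = int(0.25 * fs), int(0.45 * fs)
    scores = {}
    for letter in sorted(set(flashed_letters)):
        idx = [i for i, shown in enumerate(flashed_letters) if shown == letter]
        # Averaging the repeated flashes of the same letter suppresses background
        # EEG, which is not time-locked to the flashes and cancels out.
        erp = epochs[idx].mean(axis=0)
        scores[letter] = erp[lo:hi].mean()
    # The attended letter should evoke the largest P300.
    return max(scores, key=scores.get)
```

In a real speller, a classifier trained on the user's own data (often a linear discriminant over several channels) replaces this simple window-average score, but averaging over repeated flashes is what lifts the microvolt-level P300 out of the background EEG.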

You may find this type of BCI unintuitive, since it needs external visual stimulation to work. Can’t EEG simply read your thoughts as they naturally occur in your brain? Because of the low resolution of EEG mentioned above, more detailed ideas cannot be deciphered; the one exception is “motor imagination”. When we imagine moving our left hand, right hand or feet, EEG electrodes placed over the band-shaped area extending from the top of the head towards the ears (the sensorimotor band) can detect the difference between these three states. The physiological basis is that the functional divisions of the sensorimotor band correspond to different parts of the body: your left hand is controlled by the right part of the sensorimotor band and your right hand by the left part, a contralateral cross, while the lower limbs and feet are controlled by the middle section. Since the areas responsible for the left and right feet both lie at the middle of the scalp, there is no way to tell the two feet apart. If you place EEG electrodes over these three areas (the left, middle and right of the sensorimotor band), then ideally, by detecting changes in the intensity of brain electrical activity at these three locations, you can deduce whether a person is imagining moving the left hand, the right hand or the feet. Of course, when you actually move your hands and feet, the changes in electrical activity in these three areas are even clearer and more intense; a simple functional magnetic resonance imaging (fMRI) scan can demonstrate the correspondence between these areas and the body parts they control.

EEG on the surface of the scalp is weak, measured in mere microvolts, and it is buried in a large amount of electrical and physiological noise. To accurately detect imagined movements of the left hand, right hand and feet, weak-signal processing techniques are required. The dominant energy changes of the EEG over the sensorimotor band lie between roughly 10 Hz and 20 Hz, the so-called alpha and beta bands of EEG. The collected EEG signals must be filtered to extract the energy changes in the alpha and beta bands, which carry the key information, before being sent to a trained classification algorithm that completes an accurate translation of the EEG signals.
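As a rough illustration of that pipeline, and not of any particular lab's system, the sketch below band-pass filters three sensorimotor channels to the 10-20 Hz range, takes log band power as features, and hands them to a linear discriminant classifier. The channel layout, the 250 Hz sampling rate and the synthetic training data are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed sampling rate in Hz

def bandpower_features(trials, fs=FS, band=(10.0, 20.0)):
    """
    trials : array (n_trials, n_channels, n_samples) from electrodes over the
             left, middle and right sensorimotor band (e.g. C3, Cz, C4)
    Returns the log band power per trial and channel, the feature described above.
    """
    b, a = butter(4, band, btype='band', fs=fs)   # 4th-order Butterworth band-pass
    filtered = filtfilt(b, a, trials, axis=-1)    # zero-phase filtering
    power = np.mean(filtered ** 2, axis=-1)       # mean alpha/beta power
    return np.log(power)                          # log compresses the dynamic range

# Example with synthetic stand-in data (real recordings would come from an EEG cap):
rng = np.random.default_rng(0)
X_train = rng.standard_normal((60, 3, 2 * FS))    # 60 trials, 3 channels, 2 s each
y_train = np.repeat([0, 1, 2], 20)                # 0 = left hand, 1 = right hand, 2 = feet
clf = LinearDiscriminantAnalysis().fit(bandpower_features(X_train), y_train)

X_new = rng.standard_normal((1, 3, 2 * FS))       # one new, unlabelled trial
print(clf.predict(bandpower_features(X_new)))     # -> one of [0], [1] or [2]
```

Band power is a natural feature here because imagining a movement changes the strength of the alpha/beta rhythms over the corresponding patch of the sensorimotor band, which is exactly the intensity change the text describes.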

The Application of Non-Invasive BCI

If we have good EEG caps, amplifiers and high-accuracy algorithms, we can use brain waves to control screen cursors, prosthetics, wheelchairs and even robots. Since EEG can only distinguish among these three mental states, it can only drive up to three kinds of movement: turn left, turn right or go forward. With special training and sophisticated programming, a small number of people can control two-dimensional or even three-dimensional movement. For example, Professor Bin He of the University of Minnesota demonstrated an EEG-controlled drone in his lab. But the most memorable demonstration happened at the opening ceremony of the 2014 World Cup in Brazil: history will remember the moment when a Brazilian youth with lower-limb disabilities used a BCI-controlled robotic leg to take the opening kick, driven by exactly the motor-imagination BCI technology we have just discussed.
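For a sense of how a classifier's output becomes one of those three movements, here is a deliberately simplified, hypothetical control step; `classifier`, `features` and `device.send()` are placeholders for the example, not the API of any real wheelchair or drone.

```python
import numpy as np

# Hypothetical mapping from the three decodable imagery classes to movement
# commands, following the left-hand / right-hand / feet scheme described above.
COMMANDS = {0: "turn_left",    # imagined left-hand movement
            1: "turn_right",   # imagined right-hand movement
            2: "go_forward"}   # imagined foot movement

def control_step(classifier, features, device):
    """Translate one window of EEG features into a single movement command."""
    predicted = int(classifier.predict(np.asarray(features).reshape(1, -1))[0])
    device.send(COMMANDS[predicted])   # device.send() is a placeholder call
    return COMMANDS[predicted]
```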

These demonstrations seem straight out of science fiction. So why aren’t these technologies applied clinically to help severely disabled patients regain the ability to move? On the one hand, the bottleneck is the brain electrodes. As mentioned earlier, capturing high-quality, stable EEG signals requires conductive gel between the electrodes and the scalp. The gel is fine for short-term use, but after about two hours it dries out, the movement of ions within it is blocked, and the quality of the EEG signal drops sharply until the BCI system stops working. On the other hand, the challenge is the many sources of interference, such as electromagnetic noise in the surroundings and electrophysiological signals on the user's body surface. The user’s psychological state can also undermine the stability of the system. Many BCI recognition algorithms cannot work stably once they leave the lab and move into people’s homes. As new EEG sensors (such as dry electrodes) and recognition algorithms emerge, EEG caps will hopefully help patients who have lost motor function enjoy the freedom of movement again.
