5 Crazy Things Brain Interfaces Might Make Possible
Our devices may soon literally know us better than we know ourselves.
For as long as there’s been technology — from the first wheel to the iPhone 7 Plus — humans have been controlling it physically: hands, feet, heads, breathing, eye movement. But thanks to advances in reading and interpreting neural activity, humanity is finally ready to consider interfaces that connect directly to the brain. Case in point: Facebook’s mysterious Building 8 and Elon Musk’s Neuralink.
These systems could come to know a user’s wants better than users themselves. Here are five big-picture applications of a brain interface.
5. Control Your Computer with Your Mind
The translation of thought commands into the movements of a mouse cursor or the input of text without typing is going to become very big business. Facebook recently unveiled a new focus on the brain through its Building 8 initiative, not long after Musk announced his intention to meld the minds of humans and machines with the introduction of consumer neural control technology.
The first applications will be simple: mental text replies and hands-free social media use while driving might be two. People using a brain-machine interface might be able to swipe away annoying pop-ups literally by mental reflex, for instance, but the applications don’t stop at consumer products. DARPA is also developing brain-machine interfaces to allow not only better control of military tech but also quicker, quieter, and more complete communication between soldiers in the field. This is where we start pushing at the edges of brain-machine interfaces and brain-brain interfaces: if one person’s thoughts are electronically ferried to an earpiece that synthesizes and plays the message so only the recipient can hear it, how many steps still lie between humanity and true technological telepathy?
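To make that concrete, here’s a toy sketch of the simplest kind of thought-to-cursor decoding: a linear map from neural firing rates to cursor velocity. The channel count, the weights, and the simulated signals are all invented placeholders, not how Building 8 or Neuralink actually works.

```python
import numpy as np

# Hypothetical sketch: decode a 2-D cursor velocity from per-channel neural
# firing rates with a plain linear decoder. All numbers here are illustrative
# assumptions, not any real device's parameters.

N_CHANNELS = 64  # assumed number of recording electrodes

rng = np.random.default_rng(0)
# Decoder weights; in a real system these would be fit during a calibration
# session where the user imagines moving the cursor in known directions.
W = rng.normal(scale=0.1, size=(2, N_CHANNELS))

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Map a vector of per-channel firing rates to an (x, y) cursor velocity."""
    return W @ firing_rates

# One simulated 50 ms bin of activity becomes one cursor update.
rates = rng.poisson(lam=5.0, size=N_CHANNELS).astype(float)
vx, vy = decode_velocity(rates)
print(f"cursor velocity: ({vx:+.2f}, {vy:+.2f})")
```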
4. True Next-Gen Gaming
Controlling a game with your mind may seem like a narrow application of neural computing, but the interactivity of games makes them potentially much more responsive to a player’s mental states. A horror game might time its jump-scares based not on a generic Hollywood game of cat-and-mouse, but on a neurally measured signal of jump-scare readiness. With a brain-machine interface, enemies could literally come at you when you least expect it. The difficulty could automatically adjust to the player’s preference without the player even being cognizant of that preference: the brains of casual players might broadcast annoyance at repeated deaths, prompting the game to ease up, while the brains of hardcore players might broadcast enjoyment, spurring even harder enemies in the future.
The effects of cognitive surveillance on gaming could be much deeper than that. One Italian study published this month explores the concept of “affective level design,” or game design that adapts to the player’s mental states. Its authors propose a system that can adapt not just the difficulty of a particular game but the style of play, based on feedback from neural electrodes. In concert with the advanced virtual reality gaming experiences that might exist one or two hardware generations after the current one, a new wave of truly revolutionary gaming experiences could emerge. These technologies may be so powerful that they challenge our notions of just what a game is, and they could turn the addictive digital worlds of today into truly life-swallowing virtual lives.
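As a rough illustration of the idea, a difficulty controller driven by a decoded “frustration” score might look something like the toy sketch below. The score, the thresholds, and the update rule are all invented for illustration, not the study’s actual method.

```python
from collections import deque

class AffectiveDifficulty:
    """Toy controller: adapt difficulty from a streamed frustration score in [0, 1].

    The score itself would come from a neural-decoding pipeline; here it is
    an assumed input, and every threshold is illustrative.
    """

    def __init__(self, window: int = 30):
        self.history = deque(maxlen=window)  # recent frustration readings
        self.difficulty = 0.5                # normalized difficulty in [0, 1]

    def step(self, frustration: float) -> float:
        self.history.append(frustration)
        avg = sum(self.history) / len(self.history)
        if avg > 0.7:    # player reads as annoyed: ease up
            self.difficulty = max(0.0, self.difficulty - 0.05)
        elif avg < 0.3:  # player reads as comfortable: push harder
            self.difficulty = min(1.0, self.difficulty + 0.05)
        return self.difficulty

controller = AffectiveDifficulty()
for reading in [0.2, 0.2, 0.1, 0.8, 0.9, 0.9]:
    level = controller.step(reading)
print(f"current difficulty: {level:.2f}")
```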
3. Truly Emotionally Aware Technology
One big problem for robotics, and for technology design in general, is figuring out what people are feeling and adapting to that reality. There are whole areas of A.I. development dedicated to figuring out what people in photos are feeling, but what if the machines could just cheat? What if every picture uploaded to Facebook carried an embedded piece of metadata that specified each person’s measured emotional state, as reported by their own voluntarily implanted, skull-borne neural implant? What if, upon having your first neural implant inserted, your previously naive home robot suddenly knew to stay out of your way when you were angry, or to offer to take over a certain task when you’re stressed? What if your technology knew you in ways you would never, or could never, communicate on your own?
Identification of emotion through brain surveillance is coming along at a very nice clip, given how few appropriately wired brains there currently are in the world to study. One recent paper demonstrates real-time classification of “self-induced” emotions, in which subjects deliberately try to feel a certain way; these feelings generally carry just 15 percent of the intensity of the real thing. Your future household robot might read your emotions straight from your brain. This is the threshold of machines becoming, perhaps, more empathetic than human beings themselves.
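For a flavor of how such a classifier could work, here’s a deliberately simple sketch: a nearest-centroid rule over EEG band-power features. The features, labels, and centroids are all assumptions made up for this example, not the paper’s actual method.

```python
import numpy as np

# Hypothetical sketch: classify emotional state from two EEG band-power
# features (say, alpha and beta power in arbitrary units) by picking the
# nearest class centroid. Real systems train on labeled recordings; these
# centroids are invented for illustration.

CENTROIDS = {
    "calm":     np.array([0.8, 0.2]),
    "stressed": np.array([0.3, 0.9]),
    "excited":  np.array([0.5, 0.7]),
}

def classify_emotion(features: np.ndarray) -> str:
    """Return the label whose centroid is closest to the feature vector."""
    return min(CENTROIDS, key=lambda label: np.linalg.norm(features - CENTROIDS[label]))

sample = np.array([0.35, 0.85])  # one window of band-power features
print(classify_emotion(sample))  # prints "stressed"
```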
2. Don’t Control a Robot Arm. Have a Robot Arm
Though this might seem like one of the most advanced possible applications for a brain-machine interface, it’s turning out to be one of the most intuitive for engineers. That’s because limbs, robotic and natural alike, are controlled far from the brain, meaning that impulses have to be ferried out of the brain and along nerve fibers. Reading an intended motion in the brain itself is exceedingly difficult, washed out as the signal is by a cacophony of electrical information from the rest of the brain — but to leave the brain and reach a muscle, that impulse has to be filtered out of the larger storm of brain activity and shunted down the appropriate nerve. Reading the impulse at the nerve therefore offers a much easier way to discern what the brain is trying to do, since the brain has helpfully stripped away all the other information on its own.
This approach, looking for motor impulses in motor neurons rather than the motor cortex, has allowed a fundamental step forward for bionics. It used to be that to control a replacement arm, users had to start from scratch — their brain had to literally learn to use a new part of the motor cortex, or an old part in a new way, and learn to depend upon the robotic movement that arose from its novel signals. On the other (bionic) hand, by making use of the nerves that were already tailored to controlling the replaced limb, modern bionics can provide the feeling of a real replacement, one that moves when the user naturally tries to move their natural limb. The two experiences reportedly do not compare.
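In spirit, the signal processing on the nerve side can be surprisingly simple. Here’s a hedged sketch of one classic recipe: rectify and smooth the recorded activity, then map its envelope to a grip command. The sampling rate, window, and gain are illustrative assumptions, not any real prosthesis’s parameters.

```python
import numpy as np

# Hypothetical sketch of the peripheral-nerve approach described above:
# the brain has already routed the motor command down the nerve, so the
# prosthesis only needs to extract that signal's envelope and map it to
# grip force. All constants here are invented for illustration.

FS = 1000     # assumed samples per second
WINDOW = 100  # 100 ms smoothing window

def envelope(nerve_signal: np.ndarray) -> np.ndarray:
    """Full-wave rectify, then moving-average, to get an activation envelope."""
    rectified = np.abs(nerve_signal)
    kernel = np.ones(WINDOW) / WINDOW
    return np.convolve(rectified, kernel, mode="same")

def grip_command(nerve_signal: np.ndarray, gain: float = 2.0) -> float:
    """Map the latest envelope value to a normalized grip force in [0, 1]."""
    return float(np.clip(gain * envelope(nerve_signal)[-1], 0.0, 1.0))

rng = np.random.default_rng(1)
burst = rng.normal(scale=0.4, size=FS)  # one second of simulated nerve activity
print(f"grip force: {grip_command(burst):.2f}")
```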
1. Control Your Own (or Someone Else’s) Brain
This one only applies to a sub-section of brain-machine interface technology, but with the announcement of his new venture, Neuralink, Elon Musk has single-handedly made it worthy of discussion. The idea is that a neural electrode need not only pick up and interpret activity in the brain, but that it could actually create that activity, as well. There are already relatively gimmicky products in the works to exploit this ability, but these are blunt instruments. Right now, and for the foreseeable future, the limitations of electrode technology mean that more advanced applications will require the stimulating electrodes to be placed under the skull — but with technology like neural lace on the horizon, that might not be such a high bar for long.
This could be applied in a wide variety of ways, from preventing seizures by snuffing out their storms of electrical activity to performing electro-convulsive therapy, to potentially enhancing the human intellect through deep brain stimulation — though also maybe not on that last one. This is also the ability that could bring truly sensing bionic limbs to the masses, letting artificial sensors create real sensations in the brain.
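The anti-seizure application, at its core, boils down to detect-then-stimulate. Here’s a toy sketch of the detection half: watch a sliding window of recorded brain activity and trigger a counter-stimulation pulse when its power crosses a threshold. The threshold, window size, and trigger rule are invented for illustration, not a clinical device’s logic.

```python
import numpy as np

# Hypothetical sketch of a closed-loop stimulator's trigger: windowed signal
# power as a crude proxy for an emerging electrical "storm." All constants
# are illustrative assumptions.

THRESHOLD = 4.0  # assumed power threshold, arbitrary units
WINDOW = 256     # samples per analysis window

def window_power(signal: np.ndarray) -> float:
    """Mean squared amplitude of the most recent window."""
    return float(np.mean(signal[-WINDOW:] ** 2))

def should_stimulate(signal: np.ndarray) -> bool:
    """Fire a counter-pulse when windowed power exceeds the onset threshold."""
    return window_power(signal) > THRESHOLD

rng = np.random.default_rng(2)
quiet = rng.normal(scale=1.0, size=1024)  # baseline activity
storm = rng.normal(scale=3.0, size=1024)  # high-power event
print(should_stimulate(quiet))  # False
print(should_stimulate(storm))  # True
```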
There are also major issues with tech like this, starting with the prospect of malfunction or abuse: a stimulating electrode can cause a seizure much more easily than it can stop one, for example. Scientists don’t yet have the nuanced understanding of the brain they’d need to, say, drive a person around like a car, but the fact that such a thing can even be considered speaks to the incredible potential of the technologies that are right around the corner.