Innovation Monitor: The Brain-Computer Interface Issue
Welcome to this week’s Innovation Monitor. Brain-computer interfaces, or BCIs, are less sci-fi these days and more the stuff of academia and R&D. Current commercial products fall short, offering almost gimmicky versions of what’s possible. What we envision, after all, is something like Neuralink: a device that can read thoughts and even stimulate parts of the brain to produce some kind of internal effect. But that, too, will be limited to R&D for the next… X years. In short, the world of commercially viable BCIs is probably at the stage VR was at back in 1990, when a young Jaron Lanier of VPL was showing the world his “EyePhones.”
We’ll dive deep into three main categories in this newsletter:
1) Emerging R&D: Research that’s happening now and what it means for the future of BCIs
2) Two of the biggest consumer-facing companies: CTRL-Labs and Neuralink
3) Various applications for BCIs and some smaller companies doing really nifty things.
So to begin, we’re asking ourselves: why, in an age when a wind and solar firm is America’s biggest energy company, when AI can write hilarious poetry, and when AR is ubiquitous, can’t we read a few brain signals? Well, the brain is just really, really hard. Neuroscientists are still chipping away at our most impressive organ after roughly 200 years of study and experimentation (and that’s just modern neuroscience).
Back in 2013, researchers at the Okinawa Institute of Science and Technology Graduate University in Japan and Forschungszentrum Jülich in Germany managed to simulate one second of human brain activity with nearly 83,000 processors and around 1M gigabytes (a petabyte) of system memory… and the process took 40 minutes. In other words, our brain — operating at a measly 20 watts — is absurdly efficient, absurdly complex, and very far from being “solved.”
Still, despite our limited understanding, researchers and technologists have managed to do some very impressive things as of late, mainly in the form of experimental assistive devices for people with disabilities.
Want to catch up on the brain and BCI tech before diving in? Nothing beats Neuralink and the Brain’s Magical Future by Tim Urban.
If you’ve already read it or want to skip the 200+ minute read, the Royal Society (UK’s national science academy) released a great three-minute explainer.
As always, we wish you and your community safety, calm and solidarity as we support each other through this unprecedented time. Thank you for reading!
Erica Matsumoto
BCI R&D
The photo above is of the Applied Physics Laboratory at Johns Hopkins University, the nation’s largest university-affiliated research center, spanning 450 acres, 700 labs, and a $1.52B budget. This is where the Pentagon turns when it needs an engineering challenge solved: the Lab is the originator of the Navy’s surface-to-air missile, the Tomahawk land-attack missile, and a satellite-based navigation system that preceded GPS.
It also brought together experts in microelectronics, software, neural systems, and neural processing for a DARPA-funded project known as Revolutionizing Prosthetics in 2006, which aimed to restore “near-natural hand and arm control to people living with the loss of an upper limb.” But the program had a more ambitious side: “What if technology could connect the brain to a prosthetic device or other assistive technology?”
Researchers have been sticking electrodes in animal brains since the 1950s and ’60s, but BCIs really started to show potential as assistive devices for humans in the past two decades. One of the Laboratory’s participants, Nathan Copeland, was able to use a robotic arm connected to his brain to fist-bump Barack Obama in 2016: “The arm detected the pressure of the president’s fist, digitized it the way an iPhone digitizes an image or a voice, and sent the data to the part of Copeland’s motor cortex that understood ‘pressure on knuckles’ as such.” (For more on the Laboratory, see this Bulletin long-read.)
Besides prosthetics, researchers are exploring ways BCIs can restore speech. In a Nature study in April 2019, researchers attached stamp-sized arrays of electrodes directly onto the brains of five patients. The patients then read hundreds of lines of text from classic children’s stories while the arrays captured brain signals. See the speech decoding in action below:
As you may have noticed, the examples above require a brain implant, which is what Neuralink is trying to automate. Brain surgery is a frightening prospect in itself, and there are also myriad things that can go wrong after the electrodes are actually in.
For a Frankensteinian tale, set aside some time for Wired’s excellent piece on the neuroscientist who experimented with a BCI on himself. Basically, “implants introduce risk of hemorrhaging, infection, or brain damage,” as research organization RAND notes. We also have no idea what will happen in the long term.
These risks are why researchers have been looking for non-invasive ways to read the brain while still retaining some of the accuracy of implants. For an example of what the state of the art looks like with this approach, watch CMU’s Bin He control a robotic arm using only external electrodes:
Kernel is a neurotech startup developing the first “commercially-scalable time-domain near-infrared spectroscopy (TD-fNIRS) system in history.” In plain terms, Kernel’s non-invasive brain interface shoots short pulses of light at your head and records when that light bounces back, ultimately capturing blood flow in your brain. CTRL-Labs, which we’ll go over in the second section, was bought by Facebook for a reported $500M to $1B and collects brain data indirectly, by reading nerve signals at the wrist.
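Kernel hasn’t published its processing pipeline, but fNIRS systems generally turn those light measurements into hemoglobin estimates via the modified Beer-Lambert law. Here’s a minimal sketch of that step; the extinction coefficients, path length, and scattering factor below are rough illustrative values, not calibration data from any real device.

```python
import numpy as np

# Rows: two wavelengths (~760 nm and ~850 nm); columns: [HbO2, HbR].
# Units 1/(mM*cm); these are illustrative literature-style numbers.
E = np.array([[1.4866, 3.8437],
              [2.5264, 1.7986]])

def hemoglobin_changes(delta_od, path_len_cm=3.0, dpf=6.0):
    """Convert optical-density changes at two wavelengths into
    concentration changes (mM) of oxy- (HbO2) and deoxy-hemoglobin (HbR)."""
    # Photons scatter through tissue, so they travel farther than the
    # source-detector distance; the DPF corrects for that.
    effective_path = path_len_cm * dpf
    # Solve delta_od = (E * effective_path) @ delta_c for delta_c.
    return np.linalg.solve(E * effective_path, np.asarray(delta_od, dtype=float))

# Example: absorption rises slightly at both wavelengths during a task.
d_hbo2, d_hbr = hemoglobin_changes([0.01, 0.02])
```

With these toy numbers the solver reports HbO2 rising and HbR falling, the classic signature of increased local blood flow during neural activity.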
Recently, a team of researchers presented results on a minimally invasive method of capturing motor signals, something between an implant and an EEG. The procedure involved mounting an electrode on a tiny tube called a stent and threading the stent through a jugular vein and into a blood vessel near the primary motor cortex.
The basic BCI helped two participants with Lou Gehrig’s disease (ALS) send texts, shop online, and perform other basic computer functions in Windows 10. Synchron hopes to commercialize the tech one day.
Neuralink & CTRL-Labs
In August this year, Neuralink hosted its second press gathering since the company’s founding in 2016. The product-presentation-cum-recruiting-session featured the latest iteration of the hardware, some distant speculation, and a brief look into the roles of the various departments within Neuralink.
There were some peculiar moments where the company’s priorities seemed to shoot out in a number of directions. Radar-vision? Symbiosis with AGI? Eliminating fear altogether? Curing blindness? It captured the ambivalence consumer-focused BCI companies have been wrestling with for years.
But aside from the speculation, Musk and co. presented an impressive new device: a 1,024-electrode array — the actual “link” that would be surgically implanted into your skull to measure brain signals and stimulate neurons (aka “write”). Besides holding up the actual device, Musk showcased several pigs, including one with a live Neuralink and one that had its brain chip removed.
Musk also mentioned that the FDA had granted the chip a “breakthrough device” designation. What that means, as per this excellent Scientific American piece: “the company has submitted the paperwork to start the process of gathering the data necessary for FDA approval. There are numerous challenges to overcome before the device could be ready for human use, however. It will have to be shown to be safe and not cause any damage to brain tissue. And its sensitive electronics must be able to withstand the corrosive environment of the human body.”
On the non-invasive side, we have CTRL-Labs. The NY-based company was founded by Thomas Reardon in 2015 and developed a working wristband prototype capable of translating musculoneural signals into interpretable commands. In other words, it lets you do cool Jedi shit like type on an invisible keyboard and control 3D simulations with just your hand.
CTRL-Labs was a relatively unknown company until Facebook came along with BCI ambitions. In July 2019, Facebook announced it was working with researchers to build interfaces that could decode language from brain signals; soon after, in September 2019, the social media company acquired CTRL-Labs for somewhere between $500M and $1B. Check out CEO Reardon’s keynote at NYCML’18 below. (And for more cool CTRL-Labs demos, see this VentureBeat piece.)
BCI Applications
In June this year, Forbes had a great rundown of commercial BCI startups. A few highlights (from the companies that still have their sites up):
- “MindX believes the next frontier in computing is a direct link from the brain to the digital world. They’re creating this link by ‘combining neurotechnology, augmented reality and artificial intelligence to create a “look-and-think” interface for next-generation spatial computing applications.’”
- “Users wear NextMind on the back of their heads. It ‘creates a symbiotic connection with the digital world’ by combining neural networks and neural signals.”
- “Neurosity’s goal is to help developers get focused faster and stay focused longer. Notion (Neurosity’s thought-powered computer) has eight sensors as part of an EEG headset.”
- “Paradromics developed brain-computer interface technology to help those disconnected from the world by mental illness, paralysis, or other types of brain disorders.”
- “Brain-computer interfaces aren’t just for people. Zoolingua, owned by Con Slobodchikoff, wants people to understand dogs. Their device will allow both dog and human to communicate in both directions.”
Some of these are ridiculous, others less so. But there’s already one place where even crude external devices are desirable: the workplace. Companies are already employing what is essentially workplace surveillance to keep tabs on employee productivity, while others are using computer vision and other AI-based techniques to efficiently judge candidates.
Inevitably, this has drawn some fierce criticism. Yet despite remote employee monitoring being ethically questionable at best, the software is in high demand. So it’s not a stretch to imagine BCIs monitoring employee attention or, if you’re feeling adventurous, helping you prepare presentations with just your thoughts.
One of the more fascinating studies in this space involved passthoughts technology: a group of researchers used a “home-rigged, single-electrode EEG placed inside the ear canal” to capture “mental gestures” and use “passthought-style authentication.” In other words, your brain waves could potentially be used as a password.
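The idea behind passthought authentication can be sketched in a few lines: enroll a user by averaging a few recordings of their “mental gesture” into a template, then accept a login attempt only if the live recording matches the template closely enough. Everything below is a toy illustration; the correlation metric, the 0.8 threshold, and the sine-wave stand-in for an EEG epoch are invented here, and the actual study used far richer features and classifiers.

```python
import numpy as np

def enroll(epochs):
    """Average several enrollment epochs into a stored template."""
    return np.mean(np.asarray(epochs, dtype=float), axis=0)

def authenticate(template, epoch, threshold=0.8):
    """Accept the attempt if the live epoch correlates with the template."""
    r = np.corrcoef(template, np.asarray(epoch, dtype=float))[0, 1]
    return bool(r >= threshold)

# Stand-in data: a sine wave as the "mental gesture", plus sensor noise.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 256))
template = enroll([signal + 0.1 * rng.standard_normal(256) for _ in range(5)])
accepted = authenticate(template, signal + 0.1 * rng.standard_normal(256))
```

A genuine (noisy) repetition of the gesture correlates strongly with the template and is accepted, while an unrelated signal falls well below the threshold, which is the whole trick: your brain waves act as the password.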
Just as interesting is the ability to modulate brain activity. For example, researchers at Columbia University demonstrated how neurofeedback using a BCI could “affect alertness and to improve subjects’ performance in a cognitively demanding task.”
This may still be far off. Theodore Zanto, a director at the UCSF neuroscience program, says BCIs still can’t differentiate what a user is paying attention to: “I haven’t seen any data indicating you can dissociate if someone is paying attention to the teacher or their phone or just their own internal thoughts and daydreaming.” Furthermore, experts in 2010 estimated that “15–30% of individuals are inherently not able to produce brain signals robust enough to operate a BCI.”
None of these setbacks has stopped investment, however. Last year, DARPA revealed the Next-generation Nonsurgical Neurotechnology program (or N³), a $104M program that will focus on BCI development. N³ is also the military’s first attempt to develop BCIs with “a more belligerent purpose”: controlling drones and other weapons of destruction with mere thoughts.
This Week in Business History
November 2nd, 1955: Kermit the Frog, the first of Jim Henson’s Muppets, is copyright registered.
This was a long newsletter and a long week, so let’s end with Kermit’s first appearance: