This is Your Brain. This is Your Brain as a Weapon.
"The next war might be fought inside our minds."
Erreh Svaia
Taken from: Foreign Policy
By: Tim Requarth
On an
otherwise routine July day, inside a laboratory at Duke University, two rhesus
monkeys sat in separate rooms, each watching a computer screen that featured an
image of a virtual arm in two-dimensional space. The monkeys' task was to guide
the arm from the center of the screen to a target, and when they did so
successfully, the researchers rewarded them with sips of juice.
But there
was a twist. The monkeys were not provided with joysticks or any other devices
that could manipulate the arm. Rather, they were relying on electrodes
implanted in portions of their brains that influence movement. The electrodes
were able to capture and transmit neural activity through a wired connection to
the computers.
Making
things even more interesting, the primates shared control over the digital
limb. In one experiment, for example, one monkey could direct only horizontal
actions, while the other guided just vertical motions. Yet the monkeys began to
learn by association that a particular way of thinking resulted in the movement
of the limb. After grasping this pattern of cause and effect, they kept up the
behavior--joint thinking, essentially--that led the arm to the target and
earned them juice.
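For a concrete picture of how such shared control can work, here is a minimal sketch in Python. It is an illustrative stand-in for the Duke setup, not the actual pipeline: a toy decoder supplies each simulated animal's intended velocity, one animal influences only horizontal motion and the other only vertical motion, and a trial succeeds when the jointly steered cursor reaches the target.

```python
# Minimal sketch of shared "brainet" control: two decoded neural signals,
# each restricted to one axis, jointly steer a 2-D cursor toward a target.
# The decoder and cursor dynamics are illustrative assumptions.
import math
import random

def decode_intent(preferred, noise=0.3):
    """Stand-in for a neural decoder: the intended velocity along one
    axis, corrupted by trial-to-trial noise."""
    return preferred + random.gauss(0.0, noise)

def run_trial(target=(8.0, 6.0), steps=200, gain=0.1, tolerance=0.5):
    x, y = 0.0, 0.0  # cursor starts at the center of the screen
    for _ in range(steps):
        # One monkey influences only horizontal motion, the other only vertical.
        vx = decode_intent(math.copysign(1.0, target[0] - x))
        vy = decode_intent(math.copysign(1.0, target[1] - y))
        x += gain * vx
        y += gain * vy
        if math.hypot(target[0] - x, target[1] - y) < tolerance:
            return True  # target reached: dispense juice
    return False

successes = sum(run_trial() for _ in range(100))
print(f"{successes}/100 joint trials reached the target")
```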
Neuroscientist Miguel Nicolelis, who led the research, which was published earlier this year, has a name for this remarkable collaboration: a "brainet." Ultimately, Nicolelis
hopes that brain-to-brain cooperation could be used to hasten rehabilitation in
people who have neurological damage--more precisely, that a healthy person's
brain could work interactively with that of a stroke patient, who would then
relearn more quickly how to speak or move a paralyzed body part.
His work is
the latest in a long string of recent advances in neurotechnologies: the
interfaces applied to neurons, the algorithms used to decode or stimulate those
neurons, and brain maps that produce a better overall understanding of the
organ's complex circuits governing cognition, emotion, and action. From a
medical perspective, a great deal stands to be gained from all this, including
more dexterous prosthetic limbs that can convey sensation to their wearers, new
insights into diseases like Parkinson's, and even treatments for depression and
a variety of other psychiatric disorders. That's why, around the world, major
research efforts are underway to advance the field.
But there
is a potentially dark side to these innovations. Neurotechnologies are
“dual-use” tools, which means that in addition to being employed in medical
problem-solving, they could also be applied (or misapplied) for military
purposes.
The same
brain-scanning machines meant to diagnose Alzheimer’s disease or autism could
potentially read someone’s private thoughts. Computer systems attached to brain
tissue that allow paralyzed patients to control robotic appendages with thought
alone could also be used by a state to direct bionic soldiers or pilot
aircraft. And devices designed to aid a deteriorating mind could alternatively
be used to implant new memories, or to extinguish existing ones, in allies and
enemies alike.
Consider
Nicolelis’s brainet idea. Taken to its logical extreme, says bioethicist
Jonathan Moreno, a professor at the University of Pennsylvania, merging brain
signals from two or more people could create the ultimate superwarrior. “What
if you could get the intellectual expertise of, say, Henry Kissinger, who knows
all about the history of diplomacy and politics, and then you get all the
knowledge of somebody that knows about military strategy, and then you get all
the knowledge of a DARPA engineer, and so on,” he says, referring to the U.S.
Defense Advanced Research Projects Agency. “You could put them all together.”
Such a brainet could bring something approaching omniscience to high-stakes military decisions, with profound political and human ramifications.
To be
clear, such ideas are still firmly in the realm of science fiction. But it’s
only a matter of time, some experts say, before they could become realities.
Neurotechnologies are progressing swiftly, making eventual breakout capabilities and commercialization all but inevitable, and governments are already
getting in on the action. DARPA, which executes groundbreaking scientific
research and development for the U.S. Defense Department, has invested heavily
in brain technologies. In 2014, for example, the agency started developing
implants that detect and suppress urges. The stated aim is to treat veterans
suffering from conditions such as addiction and depression. It’s conceivable,
however, that this kind of technology could also be used as a weapon—or that
proliferation could allow it to land in the wrong hands. “It’s not a question
of if nonstate actors will use some form of neuroscientific techniques or
technologies,” says James Giordano, a neuroethicist at Georgetown University
Medical Center, “but when, and which ones they’ll use.”
People have
long been fascinated, and terrified, by the idea of mind control. It may be too
early to fear the worst—that brains will soon be vulnerable to government
hacking, for instance—but the dual-use potential of neurotechnologies looms.
Some ethicists worry that, absent a legal framework to govern these tools,
advances in the lab could enter the real world dangerously unencumbered.
For better
or for worse, Giordano says, “the brain is the next battlespace.”
Driven by the desire to better understand the brain, arguably the most unknowable of human organs, researchers have produced a burst of neurotechnology innovation over the past 10 years. In 2005, a team of scientists announced that it had successfully
read a human’s mind using functional magnetic resonance imaging (fMRI), a
technique that measures blood flow triggered by brain activity. A research
subject, lying still in a full-body scanner, observed a small screen that
projected simple visual stimuli—a random sequence of lines oriented in
different directions, some vertical, some horizontal, and some diagonal. Each
line’s orientation provoked a slightly different flurry of brain activity.
Ultimately, just by looking at that activity, the researchers could determine
what kind of line the subject was viewing.
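The underlying approach is essentially pattern classification. The sketch below, which uses synthetic voxel data and an off-the-shelf linear classifier rather than the study's actual methods or measurements, shows how a decoder can learn to infer the viewed orientation from noisy activity patterns:

```python
# Sketch of fMRI "decoding": train a linear classifier to infer which
# line orientation a subject was viewing from voxel activity patterns.
# The data here are synthetic; real studies use measured BOLD responses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_voxels, trials_per_class = 200, 60
orientations = [0, 45, 90, 135]  # degrees

# Give each orientation a faint, fixed "signature" across voxels,
# buried in trial-to-trial noise.
signatures = rng.normal(0, 1, (len(orientations), n_voxels))
X = np.vstack([sig + rng.normal(0, 4, (trials_per_class, n_voxels))
               for sig in signatures])
y = np.repeat(orientations, trials_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"decoding accuracy: {clf.score(X_test, y_test):.0%} "
      f"(chance would be {1 / len(orientations):.0%})")
```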
It took
only six years for this brain-decoding technology to be spectacularly
extended—with a touch of Silicon Valley flavor—in a series of experiments at
the University of California, Berkeley. In a 2011 study, subjects were asked to
watch Hollywood movie trailers inside an fMRI tube; researchers used data drawn
from their fluctuating brain responses to build decoding algorithms unique to each
subject. Then, they recorded neural activity as the subjects watched various
new film scenes—for instance, a clip in which Steve Martin walks across a room.
With each subject’s algorithm, the researchers were later able to reconstruct
this very scene based on brain activity alone. The eerie results are not photorealistic but impressionistic: a blurry Steve Martin floats across a surreal, shifting background.
Based on
these outcomes, Thomas Naselaris, a neuroscientist at the Medical University of
South Carolina and a coauthor of the 2011 study, says, “The potential to do
something like mind reading is going to be available sooner rather than later.”
More to the point, “It’s going to be possible within our lifetimes.”
Expediting
this is the rapidly advancing technology behind brain-machine interfaces
(BMIs)—neural implants and computers that read brain activity and translate it
into real actions, or that do the reverse, stimulating neurons to create
perceptions or physical movements. The first sophisticated interface made it
out of the operating room in 2006, when neuroscientist John Donoghue’s team at
Brown University implanted a square chip—measuring less than one-fifth of an
inch across and holding 100 electrodes—into the brain of then-26-year-old
Matthew Nagle, a former high school football star who had been stabbed in the
neck and paralyzed below the shoulders. The electrodes were positioned over
Nagle’s motor cortex, which, among other things, controls arm motions. In a
matter of days, Nagle, with his device wired to a computer, could move a cursor
and even open email just by thinking about it.
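At its core, an interface like Nagle's maps patterns of neural firing to intended movement. The following sketch illustrates that idea with a simple least-squares decoder fit to simulated firing rates from a 100-electrode array; the tuning model and calibration procedure are assumptions for illustration, not the actual clinical pipeline:

```python
# Sketch of the core BMI idea: fit a linear map from multi-electrode
# firing rates to intended cursor velocity, then drive the cursor with
# decoded output alone. All numbers here are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_electrodes = 100  # the array implanted in Nagle held 100 electrodes

# Simulate calibration data: each electrode is "tuned" to movement direction.
tuning = rng.normal(0, 1, (n_electrodes, 2))        # per-electrode x/y weights
true_velocity = rng.normal(0, 1, (500, 2))          # intended movements
rates = true_velocity @ tuning.T + rng.normal(0, 2, (500, n_electrodes))

# Calibrate: least-squares fit from firing rates back to velocity.
decoder, *_ = np.linalg.lstsq(rates, true_velocity, rcond=None)

# Online use: fresh neural activity in, cursor velocity out.
new_rates = np.array([1.0, 0.0]) @ tuning.T + rng.normal(0, 2, n_electrodes)
vx, vy = new_rates @ decoder
print(f"decoded cursor velocity: ({vx:+.2f}, {vy:+.2f})")
```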
Eight years
later, BMIs had grown profoundly more complex, as demonstrated at the 2014
World Cup in Brazil. Juliano Pinto, a 29-year-old with complete paralysis of
the lower trunk, donned a mind-controlled robotic exoskeleton—developed by
Duke’s Nicolelis—to deliver the kickoff at the tournament’s opening ceremony in
São Paulo. A cap on Pinto’s head picked up signals from his brain, indicating
his intention to kick. His computer, strapped to his back, received these
signals and then spurred the robotic suit to execute the action.
Neurotechnologies
go further still, dealing with the complexity of memory. Studies have shown
that it might be possible for one person to insert thoughts into another’s
mind, like a real-life version of the blockbuster film Inception. In a 2013
experiment led by Nobel laureate Susumu Tonegawa at the Massachusetts Institute
of Technology, researchers implanted what they called a “false memory” in a
mouse. While observing the rodent’s brain activity, the researchers placed the
animal in a container, and watched as the mouse became acquainted with its
surroundings. The team was able to pick out the precise network of cells among
millions that were stimulated in the mouse’s hippocampus while it formed a
memory of the space. The next day, the researchers put the animal in a new
container it had never seen before, and delivered an electric shock while
simultaneously activating the neurons the mouse had used to remember the first
box. The association was formed: When they put the mouse back in the first
container, it froze in fear, even though it had never experienced a shock
there. Just two years after Tonegawa’s discovery, a team at the Scripps
Research Institute administered to mice a compound that could remove a specific
memory while leaving others intact. This kind of erasing technology could be
used to treat post-traumatic stress, eliminating a painful thought and thus
improving someone’s quality of life.
It’s likely
this research momentum will continue, because the mind-science revolution is
being bankrolled lavishly. In 2013, the United States launched the BRAIN
Initiative (Brain Research through Advancing Innovative Neurotechnologies),
with hundreds of millions already earmarked for studies within the first three
years; future funding has not yet been determined. (The National Institutes of
Health (NIH), one of the five federal agencies involved in the project, has
requested $4.5 billion, spread over a 12-year period, for its part alone.) Meanwhile, the European Union has devoted an estimated $1.34 billion to its
10-year Human Brain Project, which began in 2013. Both programs are designed to
build innovative tools that will map the brain’s structure and eavesdrop on the
electrical activity of its billions of neurons. In 2014, Japan launched a
similar initiative, known as Brain/MINDS (Mapping by Integrated
Neurotechnologies for Disease Studies). And even Paul Allen, Microsoft’s
co-founder, is throwing hundreds of millions of dollars into his own Allen
Institute for Brain Science, a large-scale effort to create brain atlases and
unravel how vision works.
To be sure,
as incredible as recent inventions are, most of today’s neurotechnologies are
inchoate. They do not function for very long inside the brain, can only read or
stimulate a limited number of neurons, or require a wired connection. "Mind-reading" machines, for example, rely on expensive equipment available only in lab or hospital settings to produce even their
crude results. Yet the commitment from researchers and funders alike to
neuroscience’s future means
devices will likely become only more sophisticated, ubiquitous, and accessible
with every passing year.
Each new
technology will bring creative possibilities for its application. Ethicists
warn, however, that among these uses is weaponization.
It does not
appear that, to date, any brain tools have been employed as weapons, which is
not to say their battlefield value isn’t currently being considered: Earlier
this year, for example, a quadriplegic woman flew an F-35 fighter-jet simulator
using only her thoughts and a brain implant whose development was funded by
DARPA. It seems the possibility of weaponization might not lie in some distant
future—and there is ample precedent for the rapid transition of technology from
basic science to disruptive, global menace. After all, just 13 years elapsed
between the discovery of the neutron and the atomic blasts in the skies over
Hiroshima and Nagasaki.
Mind
manipulation by governments would be safely in the domain of conspiracy
theorists and fictional thrillers if world powers didn’t have such a checkered
past with neuroscience. In one bizarre set of experiments conducted between
1981 and 1990, Soviet scientists built equipment designed to disturb the
functioning of neurons in the body and brain by exposing people to various
levels of high-frequency electromagnetic radiation. (The results of this
research are still unknown.) Over many decades, the Soviet Union spent more than
$1 billion on such mind-control schemes.
Perhaps the
most notorious examples of U.S. abuses of neuroscience occurred from the 1950s
into the 1960s, when Washington pursued a wide-ranging research program to find
ways of monitoring and influencing human thoughts. CIA investigations,
code-named MKUltra, promoted “research and
development of chemical, biological, and radiological materials capable of
employment in clandestine operations to control human behavior,” according to a
1963 CIA inspector general’s report. Some 80 institutions, including 44
colleges and universities, were involved, but they were often funded under the
veil of other scientific goals, leaving participants unaware they were carrying
out Langley’s bidding. The program’s most infamous aspects involved dosing
individuals—some unwittingly—with LSD. One Kentucky man was administered the
drug for 174 consecutive days. Equally harrowing, however, were the MKUltra
projects that focused on mechanisms of extrasensory perception and electronic manipulation
of subjects’ brains, as well as attempts to gather, interpret, and influence
the thoughts of others through hypnosis or psychotherapy.
Today,
there is no evidence that the United States is similarly abusing
neurotechnology for national security purposes. The armed forces, though,
remain deeply committed to advancing the field. In 2011, according to figures
tabulated by Margaret Kosal, a professor at the Georgia Institute of
Technology, the Army set aside $55 million, the Navy $34 million, and the Air
Force $24 million to pursue neuroscience research. (The U.S. military, it
should be noted, is a major funder of various scientific fields, including
engineering and computer science.) In 2014, the Intelligence Advanced Research
Projects Activity, or IARPA, a research organization that develops cutting-edge
technology for U.S. intelligence agencies, pledged $12 million to design
performance-enhancing techniques, including electrical stimulation of the brain
for “optimizing human adaptive reasoning”—that is, for making the analysts
smarter.
The real energy, however, is emanating from DARPA, an agency that inspires international intrigue and envy. It funds about 250 projects at any given time, recruiting and leading
teams of experts from academia and industry to work on ambitious, highly
defined assignments. DARPA’s knack for funding visionary projects that remake
the world—the Internet, GPS, and the stealth fighter, just to name a few—is
unparalleled. In 2011, DARPA, which has a modest (by defense standards) annual
budget of $3 billion, slated $240 million for neuroscience research alone. It
has also already committed some $225 million to the first few years of the
BRAIN Initiative, only $50 million less than the project’s top funder, the NIH,
during that same period.
With
DARPA’s game-changing model and international cachet, perhaps it was only a
matter of time before other world powers began emulating it. This January,
India announced that it would reshape its Defence Research & Development
Organisation along the lines of DARPA. Last year, Russia’s military announced
its $100 million support of the newly minted Foundation for Advanced Research.
In 2013, Japan made public the creation of an agency with “DARPA of the United
States in mind,” in the words of Science and Technology Minister Ichita
Yamamoto. (It has been dubbed “JARPA” by some observers.) The European Defence Agency was established in 2004, answering the call for a “European DARPA.” And
there are even efforts to export the DARPA model to corporations, such as
Google.
What role
neuroscience will play at these research centers has yet to be determined.
However, given recent progress in brain technologies, DARPA’s interest in it,
and the new hubs’ desire to follow the Pentagon’s lead, it’s likely the field
will get at least some—if not substantially more—attention. Robert McCreight, a
former U.S. State Department official who specialized in arms control, among
other security issues, for over two decades, says this “competitive
environment” could feed into a sort of neurological space race, a contest to
control and commoditize neurons. The subsequent risk is that research will be
channeled toward weaponization—toward making the brain a tool for fighting wars
more effectively.
It isn’t hard to imagine what this might look like. Today, a head cap equipped with electrodes gathers someone’s electroencephalographic (EEG) signals from the scalp for a single intended purpose, like kicking a ball; tomorrow, EEG-capturing electrodes could surreptitiously collect weaponry access codes.
Likewise, a BMI could become a data siphon—used, say, to hack into an enemy
spy’s thoughts. Arguably more frightening, if terrorists, hackers, or other
criminals were to acquire such neurotechnologies, they could use the tools to
engineer single-minded assassins or steal personal information, such as
passwords or credit card numbers.
Troublingly,
little seems to be preventing these scenarios from materializing. Very few
international agreements or even national laws effectively protect personal
privacy, and none pertain directly to brain technologies. When it comes to dual
use and weaponization, far fewer barriers exist, exposing the human brain as a
vast, lawless territory.
Chevrier argues that because neuroweapons would affect the brain, a biological system, the Biological Weapons Convention (BWC), which prohibits the use of harmful or deadly biological organisms or their toxins, could be modified to include them. She isn’t alone: Many
ethicists are pushing for the closer involvement of neuroscientists during the
convention’s regular reviews, when member states decide upon changes to the
treaty. What the process lacks currently, Chevrier says, is a scientific board.
(At a meeting pertaining to the treaty this August, one of the key proposals on
the table was the creation of such an entity, which would include
neuroscientists; the outcome was not known as of press time.) Technical input
could spur state parties into action. “Politicians don’t have an understanding
of how dangerous the threat could be,” Chevrier argues.
Even with a
board, however, the glacial pace of U.N. bureaucracy would likely prove a
problem. BWC review conferences, where states report on new technologies that
could be adapted into biological weapons, happen only every five years—all but
ensuring that changes to the treaty are considered well after the latest
scientific advances. “The general tendency is always that science and
technology take ardent strides, and ethics and politics creep up behind,” says
Giordano, the neuroethicist at Georgetown’s Medical Center. “They tend to be
more reactive, not proactive.” (Ethicists already have a name for this lag: the
Collingridge dilemma, named for David Collingridge, who in his 1980 book, The
Social Control of Technology, argued that it is difficult to predict the
potential impact of a new technology and thus impossible to enact policy to
stay ahead of it.)
But Moreno,
the University of Pennsylvania bioethicist, says this isn’t an excuse for
inaction. Ethics experts have a duty to ensure that scientific developments and
the potential threats they pose are explained fully to policymakers. Moreno
argues that the NIH should establish a permanent neuroethics research program.
The United Kingdom’s Royal Society took a step in that direction five years
ago, when it convened a steering group of neuroscientists and ethicists. Since
then, the group has published four reports on neuroscience advances, including
one on conflict and national security implications. That document calls for
neuroscience to be a focal topic at BWC review meetings and urges bodies, such
as the World Medical Association, to conduct studies on the potential
weaponization of any technologies that affect the nervous system, including
those, such as BMIs, not explicitly covered by international law.
Neuroethics,
however, is a relatively new field. In fact, its name wasn’t coined until 2002. Since then, it has grown
substantially—spawning the Program in Neuroethics at Stanford
University, the Oxford Centre for Neuroethics, and the European Neuroscience
and Society Network, among other programs—and has attracted funding from the
MacArthur Foundation and the Dana Foundation. Nevertheless, these institutions’
influence is still nascent. “They defined the workspace,” says Giordano. “Now
it’s a question of going to work.”
Also
troubling is scientists’ lack of knowledge about the dual-use nature of
neurotechnologies—namely, the disconnect between research and ethics. Malcolm
Dando, a professor of international security at the University of Bradford in
England, recalls organizing several seminars for science departments across the
United Kingdom in 2005, the year before a BWC review conference, to educate
experts on the potential misuses of biological agents and neurological tools.
He was shocked to find that “they didn’t know very much”; one scientist, for
example, denied that a possibly weaponizable microbe he kept in the fridge had
any dual-use potential. Dando remembers it as “a dialogue of the deaf.” Since
then, not much has changed: Lack of awareness, Dando explains, “certainly
remains the case” among neuroscientists.
It is
encouraging that neuroscience’s moral quandaries are being acknowledged in some
key places, Dando points out. Barack Obama charged the Presidential Commission for the Study of Bioethical Issues with preparing a report on possible ethical and legal issues related to the advanced technologies of the BRAIN Initiative, and
the EU’s Human Brain Project established an Ethics and Society Programme to
guide the endeavor’s governance.
But these
efforts may skirt the particular issue of neuroweapons. For instance, the
two-volume, 200-page report on the ethical implications of the BRAIN
Initiative, released in full this March, does not include the terms “dual use” or
“weaponization.” Dando says this gap—even in neuroscience literature, where one
might expect the topic to thrive—is the rule, not the exception.
When Duke's
Nicolelis created his first brain-machine interface in 1999—a rat, from thought
alone, pressed a lever to receive water—he never imagined the device would be
used as a rehabilitative tool for paralyzed people. But now, his patients can
kick a soccer ball across a World Cup playing field in a brain-controlled
exoskeleton. And the applications of his research are growing. Nicolelis is
working to put a noninvasive version of the brainet—EEG caps worn by users—in
clinics where physical therapists might be able to utilize their own brain
waves to help injured people walk. “The physical therapist lends their brain 90
percent of the time, and the patient 10 percent of
the time, and by doing that the
patient likely will learn faster,” he says.
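Nicolelis’s description suggests a control signal shared between therapist and patient in a 90/10 proportion that shifts as therapy progresses. Exactly how that sharing would be implemented is not specified; the sketch below assumes a simple weighted blend of the two decoded signals, with the patient’s share rising over time:

```python
# Sketch of a therapist-patient "brainet": two EEG-derived control
# signals are blended, with the therapist carrying most of the load
# early on and the patient taking over as rehabilitation progresses.
# The weighted-blend scheme is an assumption, not a published protocol.
def blended_command(therapist_signal, patient_signal, patient_share):
    """Mix two decoded control signals; the patient's share rises with training."""
    return ((1.0 - patient_share) * therapist_signal
            + patient_share * patient_signal)

# Early on the therapist contributes 90 percent; later sessions hand over control.
for week, share in enumerate([0.10, 0.25, 0.50, 0.75, 1.00], start=1):
    command = blended_command(therapist_signal=1.0, patient_signal=0.2,
                              patient_share=share)
    print(f"week {week}: patient share {share:.0%}, blended command {command:+.2f}")
```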
But
Nicolelis admits he worries that as his innovations gain traction, they could
be put to other nefarious uses. After a mid-2000s project that used BMIs to help veterans regain mobility, he now refuses to accept DARPA money. Nicolelis
senses that, in the United States at least, he is in the minority. “I think
some neuroscientists, at meetings, are foolish enough to brag about how much
they got from DARPA to do research, without even thinking about what DARPA
might want out of that,” he says.
The thought
of BMIs, his life’s work, becoming weaponized pains him. “I’ve been trying for
the last 20 years,” he says, “to do something that might have intellectual
benefit for understanding the brain and eventually have clinical benefit.”
The fact is, however, that neuroweapons will almost certainly develop alongside the clinical applications of brain technologies. What kind of weapons these will be, when they will emerge, and in whose hands they will land remain to be seen; people today certainly do not need to fear that their minds are on the brink of being compromised. But even if a scenario in which emerging technologies turn the human brain into a tool—more sensitive than a bomb-sniffing dog, as controllable as a drone, or more vulnerable than an open safe—still seems a dystopian fantasy, it’s worth asking: Is enough being done to rein in the next generation of lethal weapons before it’s too late?