Brain-computer interfaces could allow soldiers to control weapons with their thoughts

Imagine that a soldier has a tiny computer device injected into their bloodstream that can be guided with a magnet to specific regions of their brain. With training, the soldier could then control weapon systems thousands of miles away using their thoughts alone. Embedding a similar type of computer in a soldier’s brain could suppress their fear and anxiety, allowing them to carry out combat missions more efficiently. Going one step further, a device equipped with an artificial intelligence system could directly control a soldier’s behavior by predicting what options they would choose in their current situation.
While these examples may sound like science fiction, the science to develop neurotechnologies like these is already in development. Brain-computer interfaces, or BCIs, are technologies that decode and transmit brain signals to an external device to carry out a desired action. Basically, a user would only need to think about what they want to do, and a computer would do it for them.
BCIs are currently being tested in people with severe neuromuscular disorders to help them recover everyday functions like communication and mobility. For example, patients can turn on a light switch by visualizing the action and having a BCI decode their brain signals and transmit the command to the switch. Likewise, by focusing on specific letters, words or phrases on a computer screen, patients can direct a BCI to move a cursor and select them.
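To make that decode-and-transmit loop concrete, here is a minimal sketch in Python with entirely synthetic signals. The sampling rate, the mu-band heuristic and the switch_light stub are illustrative assumptions, not a description of any clinical system.

```python
# A minimal sketch of a motor-imagery decode loop, with synthetic signals.
# The sampling rate, band, threshold and device stub are all assumptions.
import numpy as np

SAMPLE_RATE_HZ = 250          # assumed EEG sampling rate
WINDOW_SECONDS = 1.0          # decode one-second windows of signal
MU_BAND = (8.0, 12.0)         # motor imagery shows up in the mu band

def band_power(window: np.ndarray, band: tuple[float, float]) -> float:
    """Average spectral power of the window inside the given frequency band."""
    freqs = np.fft.rfftfreq(window.size, d=1.0 / SAMPLE_RATE_HZ)
    power = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(power[mask].mean())

def decode_intent(window: np.ndarray, threshold: float) -> bool:
    """Crude decoder: imagining a movement suppresses mu-band power
    (event-related desynchronization), so low mu power reads as intent."""
    return band_power(window, MU_BAND) < threshold

def switch_light(on: bool) -> None:
    """Stub for the external device a real BCI would transmit the command to."""
    print("light ->", "ON" if on else "OFF")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = int(SAMPLE_RATE_HZ * WINDOW_SECONDS)
    t = np.arange(n) / SAMPLE_RATE_HZ
    rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(n)          # strong mu rhythm
    imagery = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(n)  # suppressed mu
    threshold = (band_power(rest, MU_BAND) + band_power(imagery, MU_BAND)) / 2
    for label, window in (("rest", rest), ("imagined action", imagery)):
        print(f"{label}: ", end="")
        switch_light(decode_intent(window, threshold))
```

A real system would use many electrode channels and a trained classifier rather than a single threshold, but the shape of the loop (sense, decode, transmit) is the same.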
However, ethical considerations have not kept pace with the science. While ethicists have pressed for more ethical inquiry into neural modification in general, many practical questions around brain-computer interfaces have not been fully considered. For example, do the benefits of BCI outweigh the substantial risks of brain hacking, information theft and behavior control? Should BCI be used to curb or enhance specific emotions? What effect would BCIs have on the moral agency, personal identity and mental health of their users?
These questions are of great interest to us, a philosopher and neurosurgeon who study the ethics and science of current and future BCI applications. Considering the ethics of using this technology before it is implemented could prevent its potential harm. We argue that responsible use of BCI requires safeguarding people’s ability to function in a range of ways that are considered central to being human.
Expanding BCI beyond the clinic
Researchers are exploring nonmedical brain-computer interface applications in many fields, including gaming, virtual reality, artistic performance, warfare and air traffic control.
For example, Neuralink, a company co-founded by Elon Musk, is developing a brain implant for healthy people to potentially communicate wirelessly with anyone with a similar implant and computer setup.
In 2018, the U.S. military’s Defense Advanced Research Projects Agency launched a program to develop “a safe, portable neural interface system capable of reading from and writing to multiple points in the brain at once.” Its aim is to produce nonsurgical BCI for able-bodied service members for national security applications by 2050. For example, a soldier in a special forces unit could use BCI to send and receive thoughts with a fellow soldier and unit commander, a form of direct three-way communication that would enable real-time updates and more rapid response to threats.
To our knowledge, these projects have not opened a public discussion about the ethics of these technologies. While the U.S. military acknowledges that “negative public and social perceptions will need to be overcome” to successfully implement BCI, practical ethical guidelines are needed to better evaluate proposed neurotechnologies before deploying them.
Utilitarianism
One approach to tackling the ethical questions BCI raises is utilitarian. Utilitarianism is an ethical theory that strives to maximize the happiness or well-being of everyone affected by an action or policy.
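The utilitarian decision rule itself is simple arithmetic: choose whichever option maximizes summed well-being. The sketch below illustrates that rule; the policies and utility numbers are made up purely to show the calculation, not taken from any study.

```python
# Toy illustration of the utilitarian decision rule: pick whichever policy
# maximizes total well-being across everyone affected. All numbers are invented.
AFFECTED_UTILITIES = {
    "deploy_bci_enhancement": {"soldiers": -2, "commanders": 5, "civilians": 3},
    "status_quo":             {"soldiers":  1, "commanders": 1, "civilians": 1},
}

def utilitarian_choice(options: dict[str, dict[str, int]]) -> str:
    """Return the option with the greatest summed utility over all parties."""
    return max(options, key=lambda name: sum(options[name].values()))

print(utilitarian_choice(AFFECTED_UTILITIES))  # -> "deploy_bci_enhancement" (6 vs 3)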
Enhancing soldiers might create the greatest good by improving a nation’s warfighting abilities, protecting military assets by keeping soldiers remote, and maintaining military readiness. Utilitarian defenders of neuroenhancement argue that emergent technologies like BCI are morally equivalent to other widely accepted forms of brain enhancement. For example, stimulants like caffeine can improve the brain’s processing speed and may improve memory.
However, some worry that utilitarian approaches to BCI have moral blind spots. In contrast to medical applications designed to help patients, military applications are designed to help a nation win wars. In the process, BCI may ride roughshod over individual rights, such as the right to be mentally and emotionally healthy.
For example, soldiers operating drone weaponry in remote warfare today report higher levels of emotional distress, post-traumatic stress disorder and broken marriages compared to soldiers on the ground. Of course, soldiers routinely elect to sacrifice for the greater good. But if neuroenhancing becomes a job requirement, it could raise unique concerns about coercion.
Neurorights
Another approach to the ethics of BCI, neurorights, prioritizes certain ethical values even if doing so does not maximize overall well-being.
Proponents of neurorights champion individuals’ rights to cognitive liberty, mental privacy, mental integrity and psychological continuity. A right to cognitive liberty might bar unreasonable interference with a person’s mental state. A right to mental privacy might require ensuring a protected mental space, while a right to mental integrity would prohibit specific harms to a person’s mental states. Lastly, a right to psychological continuity might protect a person’s ability to maintain a coherent sense of themselves over time.
BCIs could interfere with neurorights in a variety of ways. For example, if a BCI tampers with how the world seems to a user, they might not be able to distinguish their own thoughts or emotions from altered versions of themselves. This may violate neurorights like mental privacy or mental integrity.
Yet soldiers already forfeit similar rights. For example, the U.S. military is allowed to restrict soldiers’ free speech and free exercise of religion in ways that are not typically applied to the general public. Would infringing neurorights be any different?
Human capabilities
A human capability approach insists that safeguarding certain human capabilities is crucial to protecting human dignity. While neurorights home in on an individual’s capacity to think, a capability view considers a broader range of what people can do and be, such as the ability to be emotionally and physically healthy, move freely from place to place, relate with others and nature, exercise the senses and imagination, feel and express emotions, play and recreate, and regulate the immediate environment.
We find a capability approach compelling because it gives a more robust picture of humanness and respect for human dignity. Drawing on this view, we have argued that proposed BCI applications must reasonably protect all of a user’s central capabilities at a minimal threshold. BCI designed to enhance capabilities beyond average human capacities would need to be deployed in ways that realize the user’s goals, not just other people’s.
For example, a bidirectional BCI that not only extracts and processes brain signals but delivers somatosensory feedback, such as sensations of pressure or temperature, back to the user would pose unreasonable risks if it disrupts a user’s ability to trust their own senses. Likewise, any technology, including BCIs, that controls a user’s movements would infringe on their dignity if it does not allow the user some ability to override it.
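One way to picture that override requirement is as a gate between machine-generated movement commands and the body, through which the user's veto always wins. The sketch below is purely illustrative; the command names and veto mechanism are hypothetical, not drawn from any real system.

```python
# Illustrative sketch of a "user override" gate for any system that can drive
# a user's movements. Command names and the veto mechanism are hypothetical;
# the point is only that the user's veto always wins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MovementCommand:
    action: str   # e.g. "step_forward" -- a made-up action label
    source: str   # "bci" for machine-generated, "user" for self-initiated

class OverrideGate:
    """Sits between command sources and the actuators: machine-generated
    commands pass only while the user has not vetoed; user commands always pass."""

    def __init__(self) -> None:
        self.user_veto = False

    def veto(self) -> None:
        self.user_veto = True

    def release(self) -> None:
        self.user_veto = False

    def filter(self, cmd: MovementCommand) -> Optional[MovementCommand]:
        if cmd.source == "user":
            return cmd          # the user's own intent is never blocked
        if self.user_veto:
            return None         # machine commands are dropped while vetoed
        return cmd

if __name__ == "__main__":
    gate = OverrideGate()
    print(gate.filter(MovementCommand("step_forward", "bci")))  # passes through
    gate.veto()
    print(gate.filter(MovementCommand("step_forward", "bci")))  # None: blocked
    print(gate.filter(MovementCommand("stop", "user")))         # always passes
```

The design point is that the gate, not the machine, has the last word: a dignity-preserving system cannot allow the veto path to be routed around.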
A limitation of a capability view is that it can be difficult to define what counts as a threshold capability. The view does not describe which new capabilities are worth pursuing. Yet, neuroenhancement could alter what is considered a standard threshold, and could eventually introduce entirely new human capabilities. Addressing this requires supplementing a capability approach with a fuller ethical analysis designed to answer these questions.
The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.
Comments
While Musk drew inspiration from Iain Banks' Culture series and its “neural lace,” DARPA is likely ahead of his private venture in developing this tech.
The military applications lead down many dystopian avenues, quite different from the utopian ones of Banks, who imagined this tech enhancing human abilities in a future dominated by AI.
Utilitarianism is the nerd ethics. It's the ethical system you utilise once the technology or institution is a fait accompli, or assumed to be. It can be programmed. Bigger contexts, like whether the institutions – militaries – or the tech itself are ethical, can be ignored. The assumption here is that we, as a species, are incapable of ever overcoming war and achieving real peace. And we become what we are assumed to be; tech will simply magnify that. Anything that brings us closer to being machines ourselves, devoid of thought or feeling – a clear trend in capitalism too – takes us further from the removal of violence and exploitation, and from peace.
Pfft. Clint Eastwood did this in ‘Firefox’ ages ago.
I think this tech is unnecessary. Repeated use will in time give you the same kind of feeling, where the device you are using becomes an extension of your mind. For example, I have a motorcycle I have owned and ridden for 38 years and 500,000 km. By now it is an extension of me. I don’t even consciously operate it. I think it, and the bike responds. It talks to me through my hands, feet and seat, and I communicate with it. It is part of me when I ride. Experienced pilots have the same relationship with aircraft they have a lot of hours in. I was getting pretty close with the old CH-46. The aircraft becomes an extension of their mind. Wrenching on that old bike, performing all the routine maintenance, I can almost do it blindfolded. I have a touch memory of every action required from servicing it so many times over the decades.
What could possibly go wrong?