Readings — From the December 2009 issue
From “Neurosecurity: Security and Privacy for Neural Devices,” by Tamara Denning, Yoky Matsuoka, and Tadayoshi Kohno, which appeared in July in Neurosurgical FOCUS. The authors are computer scientists at the University of Washington.
Future hackers will have no qualms about targeting neural devices. We have already seen vandals place flashing animations on epilepsy support websites, causing patients to have seizures. With neural devices, vandals could take advantage of neural plasticity to make longer-term alterations.
Future prosthetic-limb systems will allow physicians to connect wirelessly to adjust settings. Such systems must guard against hackers who would hijack those signals to take control of the robotic limb, or to prevent the patient from using it, particularly while he is running, driving, or climbing stairs. These systems must also prevent hackers from eavesdropping on wireless signals: such confidentiality attacks could be used to learn what keys a person’s prosthetic limb is typing on a keyboard, or to discover a person’s intended movements—before those movements take place.
Some neural-engineering devices are designed to stimulate regions of the brain itself. Current-generation Deep Brain Stimulators (DBSs) have had success in treating Parkinson’s disease, chronic pain, and other medical conditions. Hackers could cause cell death or the formation of meaningless neural pathways by bombarding the brain with random signals. Future DBSs must protect the feelings and emotions of patients from external observation. A hacker should not be able to alter the settings of the device to interfere with the normal formation of memories: to create disproportionately intense memories, to cause unimportant things to become long-term memories, or to leave unexpected gaps in a patient’s memory.
It is easy to ask computer users to make security decisions by responding to pop-up windows, but it would be difficult to ask the users of neural devices to make rapid meta-decisions about their own brains. The consequences of a breach in neurosecurity—when human health and free will are at stake—are drastically different from those of a breach in computer security, when the victim is a computer on a desk.