Anyone who’s done any infosec or network protocol work will laugh at the idea that trial and error (evolution) can make a safe high-bandwidth connection.
There are hundreds of things that people have laughed at the idea of trial and error (evolution) doing, which evolution in fact did. Thinking that evolution is dumb is generally not a good heuristic.
Also, I’m not sure what counts as safe in this context. Is language safe? Is sight?
Thinking that evolution is smart on the timescales we care about is probably a worse heuristic, though. Evolution can’t look ahead, which is fine when it’s possible to construct useful intermediate adaptations, but poses a serious problem when there are no useful intermediates. In the case of infosec, it’s as all-or-nothing as it gets. A single mistake exposes the whole system to attack by adversaries. In this case, the attack could destroy the mind of the person using their neural connection.
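To make the all-or-nothing nature concrete, here's a toy sketch (every name and number in it is made up for illustration, not a model of any real interface): a receiver that copies incoming payloads into a simulated mind, minus exactly one bounds check.

```python
# Toy model: a "mind" as a flat byte array, with beliefs stored right after
# a designated 64-byte input buffer. All names and sizes are invented.
INPUT_START, INPUT_SIZE = 0, 64
BELIEFS_START = INPUT_START + INPUT_SIZE

mind = bytearray(256)

def receive(mind: bytearray, payload: bytes, offset: int) -> None:
    """Copy an incoming payload into the input region.

    BUG: nothing checks that offset + len(payload) stays inside the
    input buffer, so a sender can write anywhere in the mind. The fix
    is a single comparison; omitting it loses everything at once.
    """
    start = INPUT_START + offset
    mind[start : start + len(payload)] = payload

# The adversary doesn't bother with the input buffer at all:
receive(mind, b"YOU TRUST ME NOW", offset=BELIEFS_START)
assert mind[BELIEFS_START : BELIEFS_START + 16] == b"YOU TRUST ME NOW"
```

Every other check in the system can be perfect; that one missing comparison hands the whole thing over.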
Consider it from this perspective: a single deleterious mutation to the part of the genome encoding the security system opens the person up to having their mind poisoned in serious and sudden ways. Consider literal toxins, including the wide variety of organophosphates and other chemicals that bind acetylcholinesterase and cause seizures (this is how many pesticides work). But also consider memetic attacks that cause the person to act against their own interests. Yes, language permits these attacks too, but far less efficiently than directly updating someone's beliefs/memories/heuristics/thoughts, which becomes entirely possible once you open a direct, physical connection to someone's brain from outside their skull. (Eyes are bad enough, from this perspective!)
A secure system would not only have to be secure for the individual it evolved in, but also robust to the variety of mutations it will encounter in that individual's descendants. And the intermediate stage, in which some individuals have secure neural communication while others can have their minds ravaged by adversaries (or unwitting friends), would prevent any widespread adoption of the genes involved.
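As a loose illustration of that robustness-to-mutation requirement (a toy simulation with invented parameters, not biology): treat the guard's one critical constant as an 8-bit "gene" and see what single-bit mutations do to it.

```python
INPUT_SIZE = 64  # the correct "genotype" for the guard's limit

def checker(limit: int):
    # The guard that should confine writes to the input region.
    return lambda offset, length: 0 <= offset and offset + length <= limit

def is_secure(check) -> bool:
    # Secure iff no write escaping the 64-byte input region gets through.
    return not any(check(o, l)
                   for o in range(128) for l in range(128)
                   if o + l > INPUT_SIZE)

mutants = [INPUT_SIZE ^ (1 << bit) for bit in range(8)]  # single bit flips
survivors = [m for m in mutants if is_secure(checker(m))]
print(f"{len(survivors)} of {len(mutants)} single-bit mutants stay secure")
# Prints "1 of 8" -- and the lone survivor is limit=0, which rejects
# everything: each mutation either opens a hole or bricks the channel.
```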
Over millions upon millions of years, it's possible that evolution could devise an ingenious system that gets around all of this, but my guess is that direct neural communication would only noticeably help language-bearing humans, who have existed for only ~100K years. Simpler organisms can just exchange chemicals or other simple signals. I don't think 100K years is nearly enough time to evolve a mutation-robust security system for a process that can directly update the contents of someone's mind.
The first issue isn’t humans abusing the system. It’s opening your brain/etc. up to attack by parasites, to say nothing of disease.
And that would probably be an issue long before the system was developed enough to have many upsides from its functionality, if any at all, let alone downsides.
Fair enough—I underestimate the power of evolution at my epistemic peril. My point remains: more direct communication (unfiltered by many levels of decoding and processing) could easily be more harmful than helpful.
Aside from Snow Crash / basilisk scenarios (which remain undemonstrated), language and vision are pretty safe, as they're filtered through many neural systems that find and pay special attention to surprising things. This is slow, but it makes them much harder to trick than a more direct interface would be.
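A rough sketch of that layering (a toy pipeline with invented stages, nothing like real neuroscience): each stage gets a chance to reject or quarantine input before anything touches stored beliefs, whereas a direct interface is one unmediated write.

```python
def decode(raw: str) -> str:
    # Stage 1: lossy decoding -- raw signal detail is discarded up front.
    return raw.strip().lower()

def novelty_check(msg: str, history: set) -> tuple:
    # Stage 2: surprising input is quarantined for scrutiny, not believed.
    return ("quarantined" if msg not in history else "accepted", msg)

def guarded_update(beliefs: set, verdict: tuple) -> set:
    # Stage 3: only input that cleared every prior stage updates state.
    status, msg = verdict
    if status == "accepted":
        beliefs.add(msg)
    return beliefs

history, beliefs = {"the sky is blue"}, set()
for raw in ["  The sky is BLUE ", "wire me your savings"]:
    beliefs = guarded_update(beliefs, novelty_check(decode(raw), history))
print(beliefs)  # {'the sky is blue'} -- the novel payload never lands.
# A direct interface skips all three stages: beliefs.add(attacker_payload).
```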
Some drugs are an example of a more direct channel that's available today. If such a channel could actually be guided by a human specifically to alter your mind in ways the communicator desired, it would quickly be abused and then removed (selected against).