This article discusses FAI, mentioning Bostrom, EY, etc. It's interesting to see how the problem is approached as it goes more mainstream, and in this particular case a novel approach to FAI is articulated: whole brain emulation (or biologically inspired neural nets) … on acid!
The idea is that the WBE will be too at-one-with-the-universe to want to harm anyone.
It's easy to laugh at this. But I think there's also a real worry that someone might actually try to build an AI with hopelessly inadequate guarantees of safety.
Having said that, perhaps the idea is not quite as crazy as it sounds. If WBE comes first, then some form of pseudo-drug-based behavioural conditioning is better than nothing for AI control, although I would have thought that modifying oxytocin to increase empathy would be the obvious strategy: digital MDMA, not LSD.
Tangentially, some people seem to believe that taking LSD causes prosocial values (or, at least, what they believe to be prosocial values), but there is a real danger of confusing the direction of causality here—hippies do acid, and hippies hold certain values, but the causal direction is surely: hippy values → become hippy → take acid, not take acid → become hippy. Of course, perhaps acid makes the hippy values stronger, but that could be because the experience is interpreted within the structure of your pre-existing values. I have heard of some (atypical?) neoreactionaries planning an acid trip, for spiritual reasons, and their values certainly appear different from hippy values. Of course, both the neoreactionaries and the hippies believe that they hold prosocial values; they just differ on what prosocial values are. Perhaps their terminal values are not so different, but they have very different models of the world?
To briefly go back to the original point, I think the author is conflating two things—just because ‘can we program an AI to hallucinate?’ is an interesting question (at least to some people) does not mean that it is actually a sensible proposition for FAI control. Conversely, just because this idea can trigger the absurdity heuristic does not mean that ‘AI behavioural modification with drugs’ is an entirely useless idea.