I’m sort of pleased to see that I guessed roughly what this episode was about despite having arrived at LessWrong well after it unhappened.+ But if the RationalWiki description is accurate, I’m now really confused about something new.
I was under the impression that LessWrong was fairly big on the Litany of Gendlin. But an AI that could do the things Roko proposed (something to which I assign vanishingly small probability, fortunately) could also retrospectively figure out who was being willfully ignorant or failing to reach rational conclusions for which they had sufficient priors.
It’s disconcerting, after watching so much criticism of the rest of humanity for finding ways to rationalize around the “inevitability” of death, to see transhumanists finding ways to hide their minds from their own “inevitable” conclusions.
+Since most people who would care about this subject at all have probably read Three Worlds Collide, I think this episode should be referred to as The Confessor Vanishes, but my humor may be idiosyncratic even for this crowd.
The primary issue with the Roko matter wasn’t so much that an AI might actually do it as that the relevant memes could cause some degree of stress in neurotic individuals. At the time it occurred there were at least two people in the general SI/LW cluster who were apparently deeply disturbed by the thought. I expect that the sort who would be vulnerable would be the same sort who, if they were religious, would lose sleep over the possibility of going to hell.
The original reasons given:
Meanwhile I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)
...and further:
For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.
(emphasis mine)