I can accelerate the working-out of FAI theory by applying my own efforts and by recruiting others. Messing with macro tech developmental forces to slow other people down doesn’t seem to me to be something readily subject to my own decision.
I don’t trust that human intelligence enhancement can beat AI of either sort into play—it seems to me to be running far behind at the moment. So I’m not willing to slow down and wait for it.
It’s worth noting that even if you consider, say, gentle persuasion, in a right-tail problem—where progress is driven by the few fastest researchers—eliminating 90% of the researchers doesn’t get you 10 times as much time, just one standard deviation’s worth of time or whatever.
The sort of theory that goes into hacking up an unFriendly AI and the sort of theory that goes into Friendly AI are pretty distinct as subjects.
Regarding the CIA thing, I have ethics.