Nobody is going to take time off from their utopia to spend resources torturing me. Now, killing me on the way to world domination is more plausible, but if someone solves all the technical problems required to actually use AI for world domination, the chances are still well in favor of them being some generally nice, cosmopolitan person in a lab somewhere.
No, unfortunately, it’s far more likely that I will be killed by pure mistake, rather than malice.
You seem to go out of your way to make your thought experiments be about foreigners targeting you. Have you considered that maybe your concerns about AI here are an expression of an underlying anxiety about bad foreigners?
I suppose that my view of the Chinese and Russian governments is a little bit worse than my view of the American one (imo those three are the most likely to win an AGI arms race, so I don’t consider others as much), but I think that’s justified.
The American government isn’t ethnically cleansing anyone right now in the way that the Chinese one is doing to the Uyghur Muslims. And to me, it doesn’t seem vanishingly unlikely that such a government would want to maximise suffering for a particular group it doesn’t happen to be very fond of.
If the Nazis had access to this type of technology, how sure would you be that we wouldn’t have certain groups being infinitely tortured as we speak?
That’s not to mention the perhaps more likely near-miss scenarios in which we fuck up AI alignment in a way that leaves us unable either to die or to live a meaningful life. Like, say, if we all ended up getting locked underground by Omega because of some black swan possibility that nobody thought of. It just seems to me that human extinction and dystopia are the most likely outcomes.