Consider the example I often use to refute this sort of argument: video game characters. We have no idea exactly where future AIs will draw the line. It is entirely possible that future AIs will think “humans kill a lot of video game characters, so it’s okay for us to kill lots of humans”.
Of course, this sounds ludicrous, because nobody thinks killing video game characters is wrong, whereas some people do think killing animals is wrong. But if you’re postulating future intelligences whose moral system will be broad enough to save us only if our own system is broad enough, we really don’t know how broad ours has to be for this to work. Nor do we know that the required breadth will match anything, like veganism, that humans actually believe in.
In fact, we can generalize this. It’s the same as one of the problems with Pascal’s Wager: the wager applies to all gods, even hypothetical gods with no religion behind them, and you don’t know which one to follow. Likewise, the “AI wager” applies to all ideologies, not just to veganism, including ludicrous ones like the one I just made up.
I’m not talking about an acausal deal, or anything where the AI judges our moral system and treats us accordingly. I mean that the AI is aligned to the moral system of its powerful masters, who I think will see too little problem with tormenting us for much the same reason most people see too little problem with tormenting animals: no respect for sentients not powerful enough to enter the social contract.
In the limit of extreme computational power and simulation capacity, I would also start worrying about the proverbial “video game characters” of the future. Which is why veganism needs to be generalized: it’s not just about animals, not just about powerless humans, and not even just about beings that exist right now. Post-singularity, you’ll be able to tailor-make new beings for only God knows what purpose.