Generalized veganism is necessary for a post-AGI future where humanity continues to exist in an acceptable form.
I see where you are coming from with this, and you might even be right. We don’t know which alignment targets are and are not possible or easy. But let me put out an alternative hypothesis:
We figure out how to align ASI, and then we decide to align ASI to the values and flourishing of all members of the species Homo sapiens. The ASI are thus acting something like the extended phenotype of our entire species, looking out for us as a society of individuals. That might well cause them to also value other animals instrumentally (pets, domestic animals, ecosystems in parks), but not to apply separate moral weight to all animals individually as a terminal goal.
Such an ASI would be aware that Homo sapiens are omnivores, not innately vegan, and would not expect us to act with as much beneficence towards all other humans as it does, let alone to act that way towards all other animals. This scenario would not automatically produce either literal veganism or your generalized version.
This is of course not the only possible scenario: it’s just one specific choice, fairly close to the median of what people on LessWrong seem to be assuming.
Such an ASI would be aware that Homo sapiens are omnivores, not innately vegan, and would not expect us to act with as much beneficence towards all other humans as it does, let alone to act that way towards all other animals.
Sure. My point isn’t that the AI will read this post and “judge” us based on how “good” we are. It’s that, at the end of the day, it’s us creating the AI, and if we solve technical alignment we infuse it with our values, or more accurately, the values of whoever has the power to influence the process.
So it’s not that an AI couldn’t decide “I’m just going to do what’s best for humans, and all human-level entities, regardless of how they treat each other or lower life-forms”. Anything is possible. The question is: What is likely to actually get built if the powerful get their hands on such a technology?