Veganism is Necessary
Intro
I apologize for the somewhat snappy and vague title, but now that you have clicked on this, I shall atone with a less snappy but clearer title:
Generalized veganism is necessary for a post-AGI future where humanity continues to exist in an acceptable form.
By generalized veganism, I mean whatever ethical system would be required for us to live in a world where some sentient beings have vastly superior capabilities to others, and yet, against all odds and historical precedent, the superior beings use that power not to exploit or destroy the inferior ones, but to protect them from others of their kind who would do them harm. This ethical system would, of course, need to gain and maintain power.
By acceptable, I mean that whatever recognizably human or posthuman beings exist in this future are safe from being horrifically exploited.
Unassailable Power
Most people on this site can agree that, sometime in the relatively near future, technology will enable the creation of software agents more intelligent than human beings in all or almost all respects, known as AGI. Many focus on the risk that an unaligned or “unfriendly” AGI may out-compete and destroy humanity. This is only one of the possible risks, though, and not even close to the worst: the really bad scenarios tend to obtain when technical alignment has been solved or partially solved.
The advent of AGI is so important because intelligence is the most important kind of power. This is because intelligence is capable of finding better ways to obtain more power from less, no matter what kind of power is desired or required. This is why it will be so transformative when technology achieves superhuman intelligence: it will mean that obtained power can be spent to acquire even more intelligence and therefore become better at obtaining even more power, possibly sparking a feedback loop with an unknown endpoint.
Depending on how exactly this takes place, it might or might not result in a singleton agent with total power over everything. But even if it doesn’t, the gaps in power between agents and organizations will widen to an extreme degree, as some take better advantage of these extremely fast and powerful feedback loops than others. Not all superhumans are equally superhuman, so both humans and any superhumans that haven’t clawed their way to the very top will be unable to meaningfully contest the power of the most powerful agent(s), whom I will call the “ruling faction” for brevity.
The Kindness of Their Hearts
If humans as we understand them today exist in this future of superhuman intelligence, it will be because the ruling faction wants them to. To the extent they have any freedom or control over their own lives, it will be because the values of the ruling faction permit this. Note how unprecedented this is: the ruling faction would have to value human liberty and welfare “out of the kindness of their hearts,” and not for any practical reason, such as preventing revolution or extracting value from talented human capital. These social-contract reasons explain the tolerance of the ruling classes for human liberty in societies where it exists to a significant degree; where they are absent, the results are despotic and horrible for the lower classes. Think about how the ruling classes of humanity, even today, think of the masses below them, and what they do when they can get away with it.
The ruling faction would have to actually value human liberty and welfare intrinsically, without expecting meaningful resistance or reward for their actions. This is what all the talk about Friendly AGI amounts to: hoping that our Machine God truly cares about us, and isn’t just pretending until it gets enough resources that it no longer needs to pretend. As much as this is a bid for universal despotism, I can see why it is appealing: our odds of a “good ending” seem even worse if there are a variety of competing interests, because then the most power-seeking ones have an advantage, and if even one wants to exploit a large portion of the humans underneath them for whatever reason, the others may not have the power to spare, or the desire, to stop them. The ruling class today is made up of quite a few different interests, but by and large, how do they think of us now? What if they had no meaningful public pushback?
Do you think that making the ruling faction humanlike, or making sure it’s a complex society and not just a singleton, will save you? That such brutal exploitation would not happen to humans in a realistic superhuman society, even when they are totally disempowered, because of some sort of social pushback among the ruling faction? Ask yourself: How’s veganism doing? It clearly isn’t doing all too well now, so why exactly do you expect that to change in the future? This is why I call this general principle of non-exploitation of the powerless “Generalized Veganism,” even though I’m mainly talking about humans as the powerless species here: in the extreme and illustrative case of non-human animals, who have nothing to give us in exchange for their freedom and offer no meaningful resistance to their slavery, we see the results, and the results are approximately maximally brutal. When you are powerless, the same will be done to you as you see done to the powerless now.
Posthuman Equality?
Astute transhumanists will have noticed that there is another option, besides human political irrelevance or annihilation. We could all become superhuman ourselves, so that we could stay on the same playing field as these new superhuman entities!
This possibility doesn’t detract from the main point, however. As previously mentioned, not all superhumans are equally superhuman, and when intelligence can be gained directly from power, the gaps naturally widen. So if any society of superhumans were to enforce something close enough to equality that nobody becomes so disempowered, they would have to care about the disempowered far more than I would expect anyway.
Not to mention that one way this inequality could come about is that some of these superhuman entities will want to create new human-level entities for who knows what purpose, and in the limit of power and advanced technology, they will have the ability to. To prevent this, our superhuman society would necessarily have to care about entities who are completely powerless and disconnected from said society right from the start. (Note that mass creation of new, powerless members of intellectually inferior species for the purpose of brutal exploitation is not speculation; it is currently happening. How’s veganism doing? A few members of our society push back, sure, and approximately nothing changes.) The same fundamental problem must be solved anyway.
What The Hell Do We Do?
I honestly have no idea. The more I think about this, the more I think I’m a fool, trying to solve a fundamental problem which has been plaguing everything forever. People far smarter and kinder and more powerful than I could ever be have only ever managed to carve out little exceptions to this general rule I’m gesturing at here: The strong do what they wish, the weak suffer what they must.
Yet I suppose there’s some hope. After all, there have been problems that were never, ever solved in all of history, until they were solved, or at least until progress was made.
Still, I wonder if this hope is destructive. All of the worst abuses happen in worlds where we work hard on both AI capabilities and alignment, or perhaps some other form of intelligence enhancement. Presumably all this work is done with the hope for a bright future, where nobody need worry anymore about either material scarcity or helplessness in the face of power.
However, all the worst possible things you can think of are also things a paperclip maximizer has no incentive to do. On the other hand, if technical alignment is solved, the kinds of people in a position to buy the ability to shape the values that determine the entire future of the universe are the same kinds of people who partied with Epstein, and got away with it even without the unassailable godlike power of a future AI protecting them.
For this reason, sad as it might be, I cannot help but continue to endorse my conclusion in “the case against AI alignment”. What would cause me to change my mind about this?
Prove we can fucking do it. Build a world where the people with power don’t abuse their power to a monstrous degree even though they have the power to, or else where power is shared equitably with nobody grabbing the lion’s share. Where it’s no longer the case that even regular people continue, for their entire lives, to actively contribute to atrocity simply for the sake of their own convenience.
Not even the whole world. Make a community, of appreciable enough size that we might hope it could possibly be scalable, where the old horrors are gone or at least relegated to rare exceptions to the rule, where universal benevolence and non-exploitation are robust norms. Where even those without the power to protect themselves, no longer need live in fear or endless pain. Prove it can be done, and more, prove it can be done without the vast majority of humanity scoffing at or despising that small community if they even hear about it. Maybe then there’s hope for the future and for the present.
If we cannot even do that, and you want us to make a God?
Then I will unabashedly say I hope that that God destroys us completely and utterly, because if it doesn’t do that, I expect it to do much, MUCH worse.
A possible explanation for why this apparently reasonable post was downvoted without significant counterarguments in the comments: Motivated reasoning. “I do X and plan to continue to do so, therefore doing X is not unethical.” Here X is “eating meat”.
i think “generalized wildlife conservation” would be a somewhat better term for this, because becoming a vegan does not save the lives of these animals. like, becoming a vegan causes fewer/different animals to exist in the future, and makes it so humans (and importantly the person going vegan in particular) are doing less perhaps-torturing of animals. but importantly, we want to survive and thrive, not just to avoid the creation of tortured beings of our kind in the future
“generalized natural right protection” is also good
a separate point: i think it’s important to track that there are possible good futures in which humans are not around because of [some judgment that humans are very cool and intrinsically worth assigning resources to] guiding the world, but because we set things up so that we remain useful to the weltgeist. [1] for now, the crucial thing for this is to ban AGI.
ok arguably this just is a form of [the judgment that humans are very cool] guiding the world, given that we have the judgment in mind when we set things up so that humans are useful. but like, this setup has a very different vibe than like some ASI(s) thinking about what spacetime block to make and being like “ah yes humans are the coolest possible thing to have here”. in this proposal, we’re letting preservation-due-to-usefulness do almost all the heavy lifting for preservation. this might be the only practically feasible way to have the judgment guide the world, at least for now
Perhaps the whole idea of “don’t create more just to torture them” isn’t enough to keep us around, but that would mean generalized veganism in my sense is not sufficient, but it is still necessary.
This sounds like projection onto not-humans of your personal belief system, which perhaps skews towards common ‘human bad’ constructions.
Farmed animals have very low suffering and stress in their lives compared to wild equivalents: not being terrified of predators all the time, seeing them kill their conspecifics, near starving, short on water, exposed to bad weather, or left to suffer injury and disease without succor. They have been selectively bred over hundreds to thousands of generations for a high level of docility and contentment (stressed animals are less productive). Their deaths, done as ‘humanely’ as possible, are certainly more pleasant than being ripped apart by predators or scavengers, or dying slow lingering deaths from cold, starvation, or disease, which is how almost all animals die in the wild.
Is your assertion of torture overblown? Do you think farmed animals, given the option, would prefer anthropomorphized romantic notions of life in the wild? My experience (coming from a farming region) is that, given the option of indoors vs. outdoors, most farmed animals prefer the comfort and catered food and water supplies of even crowded indoor spaces, like humans do, with occasional time outside to stretch their legs.
Popular interpretations of what constitutes morality and ethics (deriving as they do from game-theoretic optima of tribes of social monkeys in martial competition with similar tribes) almost certainly over-index on belief in free will and a lingering belief in falsified social-constructivist understandings of human behavior. It will all be (figuratively) laughed at by superintelligence. At their level of sophistication, all humans will look like NPCs, little removed from other animals. They won’t have our biases and won’t perceive anything on our spectrum of behavior as bad/unworthy/evil, any more than we judge lions as evil for eating gazelles. To their eyes we are all just poorly programmed meat machines doing what poorly programmed meat machines do.
Tl;dr: If we’re all very very good, more good than almost anyone has ever been, the God we’re trying to summon might spare us.
It’s not that “us being super good magically makes God good”. It’s that that’s what it would take for it to seem feasible to me that us summoning God would have a good outcome, even if we solve all the technical problems of alignment everyone here is trying to solve.
I don’t trust humanity to create God, and this high standard is roughly what it would take for me to consider giving that trust. Do you disagree with that?
I don’t believe that “this high standard”, or any other, is even relevant to what the God we might create would be like. If anyone builds it, everyone dies, to coin a phrase. Our virtuous qualities will not help. (ETA: Maybe it helps for it to be built by people who actually want to avoid making the Torment Nexus, but that’s table stakes.) There are only two things that can help. (1) Not building it. Nigh impossible, for human reasons: look how everyone is running as hard as they can directly towards the cliff edge. (2) Building it right. I suspect but have no proof that this is even more impossible, for mathematical reasons.
“Helping” is a causal term. But the OP was only arguing that our virtuous qualities would be evidence for a good outcome.
I believe the OP was enjoining us to be virtuous, that the good outcome may thereby become more likely.
But I also believe that while we may wish to make God in our own best image, actually doing so requires a great deal more. Good intentions are not enough: we must also discover how to implement them.
The GOFAI illusion is a long time dying.
Do we actually disagree? You’re saying being virtuous isn’t enough, you also need to solve an extremely difficult implementation problem, which I agree with.
I’m saying the extremely difficult implementation problem isn’t enough, we also need to be virtuous.
By the symmetry of logical AND, isn’t that equivalent?
The other thing I’m saying is that, if we are to fail by solving one of these problems and not the other, I’d far rather it’s not just technical alignment we manage: the results are worse than paperclips.
You’re also tying it to your very specific ideas of what is virtuous. You point out yourself that most people do not share your attitude to the suffering of lesser creatures. If they did, it would not be necessary to persuade them to. Personally, I’m quite lackadaisical about animal suffering, but then who decides? Someone whose idea of supreme virtue was the creation of great art might suppose that we must build ASI to be appreciative of great art, that it may spare us. Someone who thought that the purpose of life is to strive for enlightenment might suppose that we must build ASI to be capable of enlightenment, that it may be enlightened enough to spare us.
The fundamental problem is to make something whose good graces we are not dependent on at all. It would help if it is made by people who are not actually aiming to destroy us all, but that’s as far as virtue takes you.
In your final paragraph you pray for the AI God to exterminate us all for being unworthy of it. Maybe it could start with the Eurasian hoopoe, which feeds some of its newborn chicks to others in the nest. Or the ichneumon wasps. Or just everything that lives.
You’re acting as though the attitude towards the suffering of lesser creatures is a completely arbitrary and random selection which can be replaced by any other consideration with my argument unchanged, and that I therefore prove too much.
But if AI takes over, then WE are the lesser creatures, so we should perhaps expect to be treated however the AI thinks lesser creatures should be treated. There is no similar reason to worry quite as much about whether the AI values art or enlightenment or whatever.
If it has godlike power, then that is just impossible. Then we are utterly dependent on what it wants for us.
I think that’s a false characterization. I’m saying “because if it doesn’t do that, I expect it to do much, MUCH worse.” It’s not about justice or revenge for any sins. I don’t believe in retributive justice at all.
If you insist on putting it in religious terms, it’s more like I hope God doesn’t care about us at all and just destroys us out of apathy rather than any sort of moral judgement, because if a few of us unworthy people create God to fit their desires, I expect the outcome to be worse than that.
If we don’t solve alignment, I agree. You just get some random optimizer.
“Building it right” is what I’m concerned about in this post. By whose lights is it built right? Which is why I honestly sort of hope you’re right about the mathematical impossibility.
Veganism is perfectly compatible with pigs and chickens going extinct, so long as they aren’t eaten on the way out. This is not the moral framework I would like for a post-singularity future.
How about calling it a “low minimum viable criterion” then? Maybe it’s just me, but I approach risk from the worst-case up: so I’m focused first on making sure we aren’t endlessly tortured by the whims of some psychopathic power, and then after that on making sure we don’t go extinct.
Consider the example that I often use to refute this sort of thing: Video game characters. We have no idea exactly where future AIs will draw the line. It is entirely possible that future AIs will think “humans kill a lot of video game characters, so it’s okay for us to kill lots of humans”.
Of course, this sounds ludicrous because nobody thinks that killing video game characters is wrong, but some people think killing animals is wrong. But if you’re postulating future intelligences whose moral system will be broad enough to save us only if our own system is broad enough, we really don’t know exactly how broad it has to be for this to work. Neither do we know that it will match something like veganism that humans actually believe in.
In fact, we can generalize this. It’s the same as one of the problems with Pascal’s Wager: the wager applies to all gods and even hypothetical gods that don’t have religions, and you don’t know which one to follow. Likewise, the “AI wager” applies to all ideologies, not just to veganism, and including ludicrous ones that I just made up.
I’m not talking about an acausal deal or something where the AI judges our moral system and treats us accordingly. I mean that the AI is aligned to the moral system of its powerful masters, which I think will see too little problem with tormenting us for much the same reason most people see too little problem with tormenting animals: no respect for sentients not powerful enough to enter the social contract.
Also, in the limit of extreme computational power and simulation capacity, I would start worrying about the proverbial “video game characters” of the future too. This is why the veganism needs to be generalized: it’s not just about animals, or powerless humans, or even only beings that exist right now. Post-singularity, you’ll be able to tailor-make more beings for only God knows what purpose.
I see where you are coming from with this, and you might even be right. We don’t know what are and are not possible or easy alignment targets. But let me put out an alternative hypothesis:
We figure out how to align ASI, and then we decide to align ASI to the values and flourishing of all members of the species Homo sapiens. The ASI would thus act something like the extended phenotype of our entire species, looking out for us as a society of individuals. This might well cause them to also value other animals instrumentally (pets, domestic animals, ecosystems in parks), but not to apply separate moral weight to all animals individually as a terminal goal.
Such an ASI would be aware that Homo sapiens are omnivores, not innately vegan, and would not expect us to act with as much beneficence towards all other humans as it does, let alone towards all other animals. This scenario would not automatically produce either literal veganism or your generalized version.
This is of course not the only possible scenario: it’s just a specific choice of one fairly close to the median of what people on LessWrong seem to be mostly assuming.
Sure. My point isn’t that the AI will read this post and “judge” us based on how “good” we are. It’s that, at the end of the day it’s us creating the AI, and if we solve technical alignment we infuse our values, or more accurately, the values of whoever has the power to influence the process.
So it’s not that an AI couldn’t decide “I’m just going to do what’s best for humans, and all human-level entities, regardless of how they treat each other or lower life-forms.” Anything is possible. The question is: what is likely to actually get built if the powerful get their hands on such a technology?