To some degree. And I agree on most emotions: they exist for a reason, and someone who discounts them without reflection is making a mistake. But Envy, on reflection, still strikes me as something suited to the goals of evolution in the environment of our ancestors, rather than something that “makes sense” for us in the modern world.
I think that, insofar as Envy drives people to steal, it decreases their likelihood of surviving and thriving (jail isn’t the optimal place for either, and if you’re stealing out of Envy rather than desperation, it probably wasn’t worth the risk). Cheating, another behavior driven by Envy, can lead to suffering violence at the hands of the spurned party (tho if you count “has more sex than otherwise” as a non-trivial term in “thrive”, then possibly this one is a wash).
To me, Envy seems to be the drive to defect against a cooperator in some cases, which can be, let’s call it “effective” (to distinguish from “good/nice”), when you can get away with it. But it’s calibrated for a situation where cooperators had tribal-scale coalitions backing them, and now they have societal-scale coalitions backing them, so defection is a much worse value proposition.
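As a toy expected-value sketch of that claim (a minimal model; the payoffs, detection probabilities, and punishment costs are all numbers I invented for illustration, not anything from the literature):

```python
# Toy expected-value model of defecting against a cooperator.
# All payoffs and probabilities are made-up illustrative numbers.

def defection_ev(gain, detection_prob, punishment):
    """Expected value of one defection: the immediate gain, minus
    the expected cost of the coalition retaliating."""
    return gain - detection_prob * punishment

# Ancestral, tribal-scale coalition: retaliation is bounded by what
# a small band can inflict, and you might get away with it.
tribal = defection_ev(gain=10, detection_prob=0.5, punishment=15)

# Modern, societal-scale coalition: police, courts, and records make
# detection likelier and punishment (jail) far more costly.
societal = defection_ev(gain=10, detection_prob=0.8, punishment=50)

print(f"tribal EV:   {tribal:+.1f}")   # +2.5  -> sometimes "effective"
print(f"societal EV: {societal:+.1f}") # -30.0 -> a much worse bet
```

The point isn’t the specific numbers, just that scaling up the coalition that retaliates flips the sign of the bet.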
It “makes sense” that Envy evolved the way it did; of course, if it didn’t make sense then, it wouldn’t have evolved that way. But that doesn’t mean it must continue to “make sense” now, and I’m not sure it does.
I think I agree with a vibe I see in the comments that an AI that causes this problem is perhaps threading a very small needle.
Yudkowsky wrote The Hidden Complexity of Wishes to explain that a genie that does what you say will almost certainly cause problems. If people have this kind of Superintelligence, it won’t take long before someone asks it to get them as many paperclips as possible and we all die. The kind of AI that does what humans want without killing everyone is one that does what we mean.
But how does this work? If you ask such a superintelligence to pull your kid from the rubble of a collapsed building, does it tell you no, because disturbing the rubble could cause it to collapse further and injure your kid more? That you have to wait for better equipment? If not, it probably causes paperclipping problems. If so, it knows when not to do the things you ask, because they won’t accomplish what you “really want”. This is necessarily paternalistic.
Would such an AI still listen when people ask it to isolate themselves or others like this? I’m having trouble imagining one that thinks being manipulated into a certain set of beliefs is what’s best for someone, but is still “aligned” in a way that doesn’t kill everyone.
Admittedly, I hew pretty close to Yudkowsky on doomerism, so that may be the crux: I don’t see much space between “we all die” and “robustly solved alignment ushers in techno-utopia” (given superintelligence). So arbitrarily targetable, hyper-manipulative AIs that cause neither “AI takeover” nor “massive swings in human power” just don’t seem like a real middle path to me.
(Like, if someone asks their AI to convince everyone else that they are the king of the world, does it do that? Does it succeed? Do any protections against “massive swings in human power” prevent this? Do the passive AI protections everyone has know to defend against this? Do they not apply to convincing people that “Jesus is your Lord”? Does human civilization end as soon as some drunk person says “Hey GPT, go convince everyone I’m king of the world” or something? If I make a bubble and you tell an AI “go tell everyone in that bubble the truth,” what happens? Does an AI war break out? Does it somehow not hurt anyone?)