Thank you for this link and also for your thorough response, "the gears to ascension"; I have some reading to do! That includes "Overcoming Bias," which I am interested in as the basis for the answer in "Value is Fragile."
One of the first points Eliezer makes in "Value is Fragile" is that we almost certainly create paperclip generators if we take our hands off the steering wheel. One of the things that is special about humans is that the human brain is, some claim, the most complex structure in the universe, i.e. the opposite of entropy. Is the pursuit of complexity itself a goal (an "alignment"?) that by definition protects against entropy?
I grant that this may be a naïve thought, but I wonder: if things that are not paperclip generators are so hard to come by, how did humans and all of the other complex structures that we know of in the universe arise at all?
I really like this article, thanks for writing it! I think you correctly point out that AI successionism is psychologically convenient for a lot of people, especially those working in AI or benefitting from it, and you do a great job illustrating how that happens.
That said, I’m not sure the memetic story settles the underlying philosophical questions: are humans actually the optimal long-term stewards of value on Earth (or beyond), and if not, is it at least plausible that advanced AI could become a better steward?
Absent fairly definitive answers to those questions, it seems premature to dismiss AI successionism outright, even if we should be cautious about the motivations and social dynamics that make it appealing.