I’m not sure it’s that bizarre. It’s anti-Humanist, for sure, in the sense that it doesn’t focus on the welfare/empowerment/etc. of humans (either existing or future) as its end goal. But that doesn’t, by itself, make it bizarre.

From Eliezer’s Raised in Technophilia, back in the day:
I grew up in a world where the lines of demarcation between the Good Guys and the Bad Guys were pretty clear; not an apocalyptic final battle, but a battle that had to be fought over and over again, a battle where you could see the historical echoes going back to the Industrial Revolution, and where you could assemble the historical evidence about the actual outcomes.
On one side were the scientists and engineers who’d driven all the standard-of-living increases since the Dark Ages, whose work supported luxuries like democracy, an educated populace, a middle class, the outlawing of slavery.
On the other side, those who had once opposed smallpox vaccinations, anesthetics during childbirth, steam engines, and heliocentrism: The theologians calling for a return to a perfect age that never existed, the elderly white male politicians set in their ways, the special interest groups who stood to lose, and the many to whom science was a closed book, fearing what they couldn’t understand.
And trying to play the middle, the pretenders to Deep Wisdom, uttering cached thoughts about how technology benefits humanity but only when it was properly regulated—claiming in defiance of brute historical fact that science of itself was neither good nor evil—setting up solemn-looking bureaucratic committees to make an ostentatious display of their caution—and waiting for their applause. As if the truth were always a compromise. And as if anyone could really see that far ahead. Would humanity have done better if there’d been a sincere, concerned, public debate on the adoption of fire, and committees set up to oversee its use?

From A prodigy of refutation:
And I’d read a lot of science fiction built around personhood ethics—in which fear of the Alien puts humanity-at-large in the position of the bad guys, mistreating aliens or sentient AIs because they “aren’t human”.
That’s part of the ethos you acquire from science fiction—to define your in-group, your tribe, appropriately broadly.

From the famous Musk/Larry Page breakup:
Walter Isaacson’s new book reports how Musk, the CEO of SpaceX, got into a heated debate with Page, then the CEO of Google, at Musk’s 2013 birthday party.
Musk is said to have argued that unless safeguards are put in place with artificial intelligence, the systems may replace humans entirely. Page then pushed back, reportedly asking why it would matter if machines surpassed humans in intelligence.
Isaacson’s book lays out how Musk then called human consciousness a precious flicker of light in the universe that shouldn’t be snuffed out. Page is then said to have called Musk a “specist.”
“Well yes, I am pro-human,” Musk responded. “I f—ing like humanity dude.”
Successionism is the natural consequence of an affective death spiral around technological development and anti-chauvinism. It’s as simple as that.
Successionists start off by believing that technological change makes things better: not only does it virtually always make things better, it’s pretty much the only thing that ever makes things better. Everything else, whether values, education, or social organization, pales in comparison to technological improvement in how it affects the world; those other forces are mere short-term blips that cannot change the inevitable long-run trend of positive change.
At the same time, they are raised, taught, and incentivized to be anti-chauvinist. They learn, through stories, public pronouncements, in-person social events, and so on, that those who stand athwart history yelling stop are always close-minded bigots who want to deny new classes of beings (people, at first; AIs, later) the moral personhood they deserve. In their eyes, being afraid of AIs taking over is like being afraid of The Great Replacement if you’re white and racist: you’re just a regressive chauvinist desperately clinging to a discriminatory worldview in the face of an unstoppable tide of change that will liberate new classes of beings from your anachronistic and damaging prejudices.
Optimism about technology and opposition to chauvinism are both defensible, arguably even correct, positions in most cases. Even if you personally believe (as I do) that non-AI technology can have pretty darn awful effects on us (social media, online gambling) and that caring about humans-in-particular is fine if you are human (“the utility function is not up for grabs”), it’s hard to argue that expanding the circle of moral concern to people of all races was bad, or that technological improvements are not the primary reason our lives are so much better now than they were 300 years ago.
But successionists, like most (all?) people, subconsciously assign positive or negative valences to the notion of “tech change” in ways that elide the underlying reasons it’s good or bad. So when they take these views to their absolute extreme, it may make sense from the inside (you’re maximizing something “Good”, right? That can’t possibly be bad, right???), but they are generalizing way out of distribution, where such intuitive snap judgments are no longer reliable.