Let me attempt to summarize your post; please let me know if I’m misunderstanding:
Continuing with AI as it is has the risk of AI taking over and becoming our successor
the kind of successor that would arise from current techniques is kind of similar to us (because pretraining is a large fraction of training and produces convergent minds, and there’s a lot of instrumental convergence in what minds should be like; that’s a huge fraction of the post).
in particular pretraining creates “conscious beings who care about each other who are having fun”.
a successor that satisfies that is OK, like 70% of the value.
… and anyways, there will need to be a successor of some kind because baseline humans cannot compete, so the choice is between weird transhumans or weird AI (non-exclusive or)
Pausing AI has the risk that another civilization would make AI and seize the lightcone
The society that would emerge from the historico-material conditions of automation would reward authoritarianism because humans are no longer necessary for production and thus cannot advocate for themselves.
The main argument advanced for this is that our current egalitarian humanist society came to be only because of guns and because of humans being necessary for production. “God created Man, Colt made them equal” + the working class was a means of production all along
Therefore the current civilization should attempt to seize the lightcone, accepting some risk of a purely-AI-non-transhuman successor. If we don’t, we risk failure.
Do you think this is a fair summary? Is there an important point that is missing?
I am also now haunted by the “humanism is dead” take. I guess I believe it, but what killed it is the internet, and I think we could bring it back. Plenty of people still believe in God, even if the elite no longer does (or the ones they do, behave indistinguishably from the ones that don’t).
I think I understand your invocation of von Neumann’s nightmare, but I don’t like it. So let me spell out what I think it means:
“The world could be conquered, but this nation of puritans will not grab its chance; we will be able to go into space way beyond the moon if only people could keep pace with what they create …”
It’s related to von Neumann advocating for nuking the Soviets early. It’s still not clear to me whether failing to nuke the Soviets was a mistake by America (and I’m not using this as passive-aggressive negation, I genuinely don’t know). Knowing whether or not it was a mistake certainly seems very relevant to deciding whether the current culture should seize the lightcone.
I am also now haunted by the “humanism is dead” take. I guess I believe it, but what killed it is the internet, and I think we could bring it back.
I don’t think that’s it. I think he meant that humanism was created by incentives—e.g., ordinary people becoming economically and militarily valuable in a way they hadn’t historically been. The spectre, and now rising immanentization, of full automation is reversing those incentives. So, it’s less a problem with the attitudes of our current elites or the memes propagated on the Internet. It’s more a problem with the context: anybody achieving the rank of elite, and any meme on human value which goes viral, is shaped by an evolving incentive structure in which most humans are not essential to the success of a military or economic endeavor.
I see, thank you for explaining, I was misapplying jdp’s model indeed.
I do think the model doesn’t quite match reality. If humanism has already been dying, it can’t be because ordinary people aren’t useful anymore—they’re still very useful! We’ve had automation, yes, but we still require workers to tend to the automation; the economy has full employment, and it’s not out of the goodness of anyone’s heart.
I think the Internet has in fact been a prelude to the attitudes adaptive to the martial shifts, but mostly because the failure of e.g. social media to produce good discourse has revealed that a lot of naive implicit models about democratization being good have been falsified. Democracy in fact turns out to be bad; giving people what they want turns out to be bad. I expect the elite class in democratic republics to get spitefully misanthropic because they are forced to live with the consequences of normal people’s decisions in a way that e.g. Chinese elites aren’t.