It seems like, yes, he is saying that wealth levels get locked in by early investment choices, that it is ‘hard to justify’ high levels of ‘inequality’, and that, even if you can make 10 million a year in real income in the post-abundance future, Larry Page’s heirs owning galaxies is not okay.
I say, actually, yes, that’s perfectly okay, provided there is a stable political economy and we’ve solved the other concerns, so you can enjoy that 10 million a year in peace.
I dunno about that. I think it is not okay for directionally the same reasons it wouldn’t be okay if we got an “infinitesimally aligned” paperclip maximizer who leaves the Solar System alone but paperclips the rest of the universe: astronomical waste.
Like, suppose 99% of the universe ends up split between twenty people, with them using it as they please, in ways that don’t generate much happiness for others. Arguably it’s not going to be that bad even in the “tech-oligarch capture” future (because Dario Amodei has made a pledge to donate 10% of his earnings or whatever[1]), but let’s use that to examine our intuitions.
One way to look at it is: this means the rest of civilization will only end up 1% as big as it could be. This argument may or may not feel motivating to you; I know “more people is better” is not a very visceral-feeling intuition.
Another way to look at it is: this means all the other people will only end up with 1% of the lifespan they could have had. Like, in the very long term, post-scarcity isn’t real: the universe’s resources are finite (as far as we currently know), and physical entities need to continuously consume them to keep living. If 99% of resources are captured by people who don’t care to share them, everyone else will succumb to the heat death much faster than in the counterfactual.
This is isomorphic to “the rich have their soldiers take all timber and leave the poor to freeze to death in the winter”. The only reason it doesn’t feel the same way is because it’s hard to wrap your head around large numbers: surely you’d be okay with only living for 100 billion years, instead of 10 trillion years, right? In the here and now, both numbers just round up to “effectively forever”. But no, once you actually get to the point of you and all your loved ones dying of negentropic starvation 100 billion years in, it would feel just as infuriatingly unfair.
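To spell out the proportionality behind those numbers (a back-of-the-envelope sketch; the symbols and the constant-consumption assumption are mine): suppose each person needs a roughly constant resource-consumption rate $r$ to keep living, and the endowment isn’t replenished. Then a full share of resources $E$ sustains you for $T = E/r$, and capturing only a fraction $f$ of it sustains you for

$$T(f) = \frac{fE}{r} = fT, \qquad f = 0.01,\ T = 10^{13}\ \text{years} \;\Rightarrow\; T(f) = 10^{11}\ \text{years}.$$

That is, on these assumptions a 1% share turns 10 trillion years into 100 billion.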
I understand the tactical moves of “we have to pretend it’s okay if the currently-powerful capture most of the value of the universe, so that they’re more amenable to listening to our arguments for AI safety and don’t get so scared of taxes they accelerate AI even further” and “we have to shut down the discussion of ‘but which monkey gets the banana?’ at every turn because it competes with the ‘the banana is poisoned’ messaging”. But if we’re keeping to Simulacrum Level 1, no, I do not in fact believe it’s okay.
I also don’t necessarily agree that those moves are pragmatically good. It’s mostly pointless to keep talking to AI-industry insiders; if we’re doing any rhetoric, it should focus on “outsiders”. And if it is in fact true that the default trajectory of worlds in which the AGI labs’ plans succeed leads to something like the above astronomical-waste scenarios, making those arguments to the general public is potentially a high-impact move. “Ban the AGI research because otherwise the rich will take all the stuff” is a much more memetically viral message than “ban the AGI because Terminator”.
(To be clear, I’m not arguing we should join various coalitions making false arguments to that end, e.g. the datacenter water thing. But if there are true arguments of that form, as I believe there are...)
[1] I do not trust that guy to keep such non-binding promises, by the way. His track record isn’t good, what with “Anthropic won’t advance the AI frontier”.