I think this is a good and important post that was influential in the discourse, and that people keep misunderstanding.
What people engaged with
What did people engage with? Mostly stuff about whether saving money is a good strategy for an individual to prepare for AGI (whether for selfish or impact reasons), human/human inequality, and how bad human/human inequality is on utilitarian grounds. Many of these points were individually good, but felt tangential to me.
What I was trying to write about
But none of that is what I was centrally writing about. Here is what I wrote about instead:
Power. The world’s institutions are a product of many things, including culture and inertia, but a big chunk is also selection for those institutions that are best at accumulating power, and then among those, for how they wield that power toward their own ends. If the game changes due to technology (especially technology as radical as AGI), the winning strategy changes. Currently the winning strategy is rather fortunate for most people, since it encourages things like prosperity and education, and creates pressures towards democracy. The default vision of AGI, on the other hand, explicitly sets out to render people powerless, and therefore useless to Power. This will make good treatment of the vast majority of humanity far more contingent:
Adam Smith could write that his dinner doesn’t depend on the benevolence of the butcher or the brewer or the baker. The classical liberal today can credibly claim that the arc of history really does bend towards freedom and plenty for all, not out of the benevolence of the state, but because of the incentives of capitalism and geopolitics. But after labour-replacing AI, this will no longer be true. If the arc of history keeps bending towards freedom and plenty, it will do so only out of the benevolence of the state (or the AI plutocrats) [EDIT: or, I obviously should’ve explicitly written out, if we have a machine-god singleton that enforces it, though I would fold this into “state”, just an AI one]. If so, we better lock in that benevolence while we have leverage—and have a good reason why we expect it to stand the test of time.
Ambition. Much change in the world, and much that is great about the human experience, comes from ambition. I go through the general routes to changing the world, from entrepreneurship to science to being An Intellectual™ to politics to religion to even military conquest, and point out that full AGI makes all of those harder. Ambition having outlier impacts is the biggest tool that human labor has for shifting the world in ways that are different from the grinding out of material incentives or the vested interests that already have capital (or, as I neglected to mention, offices). Also, can’t you just feel it?
Dynamism & progress. We, presumably, want cultural, social & moral progress to continue. How does that progress come about? Often, because someone comes from below and challenges whoever is currently on top. This requires the possibility of someone winning against those at the top. Or, to take another tack (and this is not even implicit in the post, since I hadn’t yet articulated it a year ago, though the vibe is there): historically, the longer a certain social state of affairs is kept in place, the more its participants goodhart for whatever the quirks of the incentive structure are. So far, this goodharting has been limited by the fact that if you goodhart hard enough, your civilization collapses at the political and economic as well as cultural level, and is invaded, and the new invaders bring some new incentive game with them. But if an AI-run economy and concentrated power prevent the part where civilization collapses and is invaded, it seems possible for the political & economic collapse to be forestalled indefinitely, and for the cultural collapse / goodharting / stagnation to get indefinitely bad. Or to take yet another tack: isn’t this dynamism thing the point of Western civilization? I admit that I don’t have a general theory of why I feel like shouting “BUT DYNAMISM! BUT PROGRESS!” at any locked-in vision of the future, but, as the spirit commands it, I will continue shouting it.
The cruxes
Some of the big questions:
How true are the selectionist accounts of why modern institutions tend towards niceness, and under which AGI scenarios are these accounts true or false?
What is it that makes a culture alive, dynamic, and progress-driving, and how does this relate to questions about material conditions and the distribution of power?
… and I have to admit, man, these are tough questions! If you want a solution, maybe get back to me next year. (I also think these cruxes cannot be rounded to just e.g. takeoff speeds, or other technical factors; there are also a lot of thorny questions about culture, economics, (geo)politics, human psychology, and moral philosophy that matter for these questions regardless of (aligned) AI outcomes.)
I agree with everyone: do not waste perhaps the most impactful time in history just accumulating personal capital!
What do I wish I had emphasized more? I really did not want people to read this and go accumulate capital at AGI labs or quant finance, as I wrote at the top of the takeaways section. I wish I had emphasized this point more, which Scott Alexander also made recently:
But don’t waste this amazing opportunity you’ve been given on a vapid attempt to “escape the permanent underclass”.
Other notes
Another underrated point is inter-state inequality (Anton Leicht has discussed this, e.g. here, but he is the only person I know of thinking seriously about it). Non-US/China survival strategies for AGI remain neglected! I go through potential ramifications of current trends towards the end of this post.
The Substack version was called “Capital, AGI, and human ambition”, which I think was a clearer title and might’ve prevented the focus on capital and its (personal) importance. “AGI entrenches capital and reduces dynamism in society” might’ve been a better title than either—though I do think “human ambition” belongs in the title.
Scott Alexander’s post on It’s Still Easier To Imagine The End Of The World Than The End Of Capitalism is valuable for pointing out that the space of possibilities is large. I have been meaning to write a response to this, and also some related work from Beren & Christiano, for a long time.