Also very rough response—

- I think the debate would probably benefit from better specification of what is meant by “misalignment” or “solving alignment”
  - I do not think the convincing versions of gradual disempowerment either rely on misalignment or result in power concentration among humans, for a relatively common meaning of “alignment” roughly at the level of “does what the developer wants and approves, resolving conflicts between their wants in a way which is not egregiously bad”. If “aligned” means something at the level of “implements coherent extrapolated volition of humanity” or “solves AI safety”, then yes.
- Economic
  - the counter-argument seems to be roughly in the class “everyone owns index funds” and “state taxes AIs”
  - counter-counter arguments are:
    - difficulty of indexing an economy undergoing a radical technological transition (as explained in an excellent post by Beren we reference)
    - problems with stability of property rights: people in the US or UK often perceive them as very stable, but they depend on the state enforcing them → the state becomes a more load-bearing component of the system
    - taxation: same → the state becomes a more load-bearing component of the system
    - in many cases some income can be nominally collected in the name of humans, but they may have very little say in the process or in how it is used (for some intuition, consider His Majesty’s Revenue & Customs. HMRC is a direct descendant of a chain of organizations collecting customs since the ~13th century; in the beginning, His Majesty had a lot of say in what these were and could also actually use the revenue; now, not really)
- Cultural. “If humans remain economically empowered (in the sense of having much more money than AI), I think they will likely remain culturally empowered.”
  - this takes a bit too much of an econ perspective on culture; cultural evolution is somewhat coupled with the economy, but it is an independent system with different feedback loops
  - in particular, it is important to understand that while in most econ thinking the preferences of consumers are exogenous, culture is largely what sets the preferences; to some extent culture is what the consumers are made of → having overwhelming cultural production power means setting consumer preferences
  - for some intuitions, consider current examples
    - right-wing US twitter discourse is often influenced by anonymous accounts run by citizens of India and Pakistan; the people running these accounts often have close to zero economic power, and their main source of income is the money they get for posts
    - yet they are able to influence what e.g. Elon Musk thinks, despite the >10^7 wealth difference
    - “Even AI-AI culture, if it promotes bad outcomes for humans and humans can understand this, will be indirectly selected against as humans (who have money) prefer interacting with AI systems that have good consequences for their well-being.” seems to prove too much. Again, consider Musk. He is the world’s wealthiest person, yet it is the case that his mind is often inhabited by ideas that are bad for him and his well-being, and have overall bad consequences.
- State
  - unclear to me: why would you expect “formal power” to keep translating to real power? (For some intuitions: the United Kingdom. Quite a lot of things in the country are done in the name of His Majesty The King)
  - we assume institutional AIs will be aligned to institutions and institutional interests, not their nominal human representatives or principals
  - I think the model of the world where superagents like states or large corporations have “dozens of people controlling these entities” is really not how the world works. Often the person nominally in charge is more a servant of the entity, aligned to it, than its “principal”.
  - “While politicians might ostensibly make the decisions, they may increasingly look to AI systems for advice on what legislation to pass, how to actually write the legislation, and what the law even is. While humans would nominally maintain sovereignty, much of the implementation of the law might come from AI systems.” / all seems good, if AI is well-aligned? Imo, it would be bad not to hand off control to aligned AIs that would be more competent and better motivated than us
    - I think you should be really clear about who the AIs are aligned to. Either e.g. US governmental AIs are aligned to the US government and state in general, in which case the dynamic leads to a state with no human principals with any real power, and humans will just rubber-stamp.
    - Or the governmental AIs are aligned to specific humans, such as the US president. This would imply very large changes of power relative to the current state, transitioning from a republic to a personal dictatorship. Both the US state and US citizens would fight this.
(may respond to some of the rough thoughts later, they explore interesting directions)
I don’t think that the example of kings losing their powers really supports your thesis here. That wasn’t a seamless, subtle process of power slipping away. There was a lot of bloodshed and threat of bloodshed involved.
King Charles I tried to exercise his powers as a real king and go against the Parliament, but the people rebelled and he lost his head. After that, his son managed to restore the monarchy, though he needed to agree to some more restrictions on his powers. After that, James II tried to go against the Parliament again, and got overthrown and replaced by another guy who agreed to relinquish the majority of royal powers. After that, the king still had some limited say, but when they tried to impose unpopular taxes in America, the colonies rebelled, and gained independence through a violent revolution. Then next door to England, Louis XVI tried to go against the will of his Assembly, and lost his head. After these, the British Parliament started to politely ask their kings to relinquish the remainder of their powers, and they wisely agreed, so their family could keep their nominal rulership, their nice castle, and most importantly, their head.
I think the analogous situation would be AIs violently taking over some countries, and after that, the other countries bloodlessly surrendering to their AIs. I think this is much closer to the traditional picture of AI takeover than to the picture you are painting in Gradual Disempowerment.
On the other hand, there is another interesting factor in kings losing power that might be more related to what you are talking about (though I don’t think this factor is as important as the threat of revolutions discussed in the previous comment).
My understanding is that part of the story for why kings lost their power is that the majority of people were commoners, so the best writers, artists and philosophers were commoners (or at least not the highest aristocrats), and the kings and the aristocrats read their work, and these writers often argued for more power to the people. The kings and aristocrats sometimes got sincerely convinced, and agreed to relinquish some powers even when it was not absolutely necessary for preempting revolutions.
I think this is somewhat analogous to the story of cultural AI dominance in Gradual Disempowerment: all the most engaging content creators are AIs, humans consume their content, the AIs argue for giving power to AIs, and the humans get convinced.
I agree this is a real danger, but I think there might be an important difference between the case of kings and the AI future.
The court of Louis XVI read Voltaire, but I think if there was someone equally witty to Voltaire who also flattered the aristocracy, they would have plausibly liked him more. But the pool of witty people was limited, and Voltaire was far wittier than any of the few pro-aristocrat humorists, so the royal court put up with Voltaire’s hostile opinions.
On the other hand, in a post-AGI future, I think it’s plausible that with a small fraction of the resources you can get close to saturating human engagement. Suppose pro-human groups fund 1% of the AIs generating content, and pro-AI groups fund 99%. (For the sake of argument, let’s grant the dubious assumption that the majority of the economy is controlled by AIs.) I think it’s still plausible that the two groups can generate approximately equally engaging content, and if humans find pro-human content more appealing, then that just wins out.
Also, I’m kind of an idealist, and I think part of the reason that Voltaire was successful is that he was just right about a lot of things: parliamentary government really does lead to better outcomes than absolute monarchy from the perspective of a more-or-less shared human morality. So I have some hope (though definitely not certainty) that AI content creators competing in a free marketplace of ideas will only convince humanity to voluntarily relinquish power if relinquishing power is actually the right choice.
Kings also lost their power because the name of the game had changed significantly.
In the actual Middle Ages, kings may have nominally had complete power, but in reality they were heavily constrained by the relations they had with wealthy landowners and nobles. The institution of the Royal Court persevered precisely because it served an absolutely critical social purpose, namely a mechanism for coordination between the lords of the realm. Everybody was subject to the crown and the crown’s rulings, so disputes could be resolved and hierarchies could be established (relatively) bloodlessly. Conversely, the king was nominally above the lords, but he served at their pleasure, in the sense that if he became sufficiently unpopular with them, he would be removed.[1]
As the move towards absolutism happened and kings started amassing de facto power approaching the de jure power they’d long pretended they’d had, suddenly the old justification for the king’s existence evaporated.
Chinese history contains dozens of examples of emperors losing the Mandate of Heaven in the eyes of wealthy lords or powerful generals, and getting executed for it
> I think the debate would probably benefit from better specification of what is meant by “misalignment” or “solving alignment” — I do not think the convincing versions of gradual disempowerment either rely on misalignment or result in power concentration among humans, for a relatively common meaning of “alignment” roughly at the level of “does what the developer wants and approves, resolving conflicts between their wants in a way which is not egregiously bad”. If “aligned” means something at the level of “implements coherent extrapolated volition of humanity” or “solves AI safety”, then yes.
Just checking: Would you say that the AIs in “you get what you measure” and “another (outer) alignment failure story” are substantially less aligned than “does what the developer wants and approves, resolving conflicts between their wants in a way which is not egregiously bad”?
Appreciate the many concrete examples you’re giving here.
Responding quickly.
> I do not think the convincing versions of gradual disempowerment either rely on misalignment or result in power concentration among humans, for a relatively common meaning of “alignment” roughly at the level of “does what the developer wants and approves, resolving conflicts between their wants in a way which is not egregiously bad”. If “aligned” means something at the level of “implements coherent extrapolated volition of humanity” or “solves AI safety”, then yes.
Yep, that makes sense. And I disagree, so this is useful clarification.
I think that if AI “does what the developer wants and approves, resolving conflicts between their wants in a way which is not egregiously bad” then I am much less worried about GD than about power-seeking AI. (Though I have some uncertainty here if the AI is resolving these conflicts pretty badly but hiding the fact that it’s doing this for some reason. But if it’s resolving these conflicts as well as a fairly competent human would, I feel much less worried about GD than about power-seeking AI.)
It’s more than just index funds. It’s ppl getting AIs to invest on their behalf, just like VCs invest on ppl’s behalf today. It seems like we need fairly egregious misalignment for this to fail, no?
> problems with stability of property rights: people in the US or UK often perceive them as very stable, but they depend on the state enforcing them → the state becomes a more load-bearing component of the system
Why is it more load-bearing than today? Today it’s completely load-bearing, right? If income switches from wages to capital income, why does it become more load-bearing? (I agree it becomes more load-bearing when taxation is needed for ppl’s income—but many ppl will own capital so won’t need this)
> having overwhelming cultural production power means setting consumer preferences
Thanks, interesting point. Though humans will own/control the AIs producing culture, so they will still control this determinant of human preferences.
> right-wing US twitter discourse is often influenced by anonymous accounts run by citizens of India and Pakistan; the people running these accounts often have close to zero economic power, and their main source of income is the money they get for posts
Interesting. And you’re thinking that the analogy is that AIs will have no money but could have a big cultural influence? Makes sense. (Though again, those AIs will be owned/controlled by humans, somewhat breaking the analogy.)
> Again, consider Musk
But the ideas that are bad for Musk and his thinking have generally decreased his power + influence, no? Overall he’s an exceptionally productive and competent person. If some cultural meme caused him to be constantly addicted to his phone, that wouldn’t be selected for culturally.
> we assume institutional AIs will be aligned to institutions and institutional interests, not their nominal human representatives or principals
So what causes the govt AIs to be aligned to the state over the humans holding office, to the extent that they disempower those humans? Why don’t those humans see it coming and adjust the AI’s goals? Or, if the AI is aligned to the state, why doesn’t it pursue the formal goals of the state, like protecting its ppl?
> I think that if AI “does what the developer wants and approves, resolving conflicts between their wants in a way which is not egregiously bad” then I am much less worried about GD than about power-seeking AI.
If the AI is that well-aligned, then presumably power-seeking AI is also not much of a problem, and you shouldn’t be that concerned about either?
Maybe you mean “if I assume that I don’t need to be worried about GD outside of the cases where AI “does what the developer wants and approves, resolving conflicts between their wants in a way which is not egregiously bad”, then I am overall much less worried about GD than about power-seeking AI”?
Thanks—yep, that’s what I meant!

I hope it’s not presumptuous to respond on Jan’s behalf, but since he’s on vacation:
> It’s more than just index funds. It’s ppl getting AIs to invest on their behalf, just like VCs invest on ppl’s behalf today. It seems like we need fairly egregious misalignment for this to fail, no?
Today, in the U.S. and Canada, most people have no legal way to invest in OpenAI, Anthropic, or xAI, even if they have AI advisors. Is this due to misalignment, or just a mostly unintended outcome from consumer protection laws, and regulation disincentivizing IPOs?
> If income switches from wages to capital income, why does it become more load bearing?
Because the downside of a one-time theft is bounded if you can still make wages. If I lose my savings but can still work, I don’t starve. If I’m a pensioner and I lose my pension, maybe I do starve.
> humans will own/control the AIs producing culture, so they will still control this determinant of human preferences.
Why do humans already farm clickbait? It seems like you think many humans wouldn’t direct their AIs to make them money / influence by whatever means necessary. And it won’t necessarily be individual humans running these AIs, it’ll be humans who own shares of companies such as “Clickbait Spam-maxxing Twitter AI bot corp”, competing to produce the clickbaitiest content.
> Today, in the U.S. and Canada, most people have no legal way to invest in OpenAI, Anthropic, or xAI, even if they have AI advisors. Is this due to misalignment, or just a mostly unintended outcome from consumer protection laws, and regulation disincentivizing IPOs?
Sorry if this is missing your point — but why would AIs of the future have a comparative advantage relative to humans, here? I would think that humans would have a much easier time becoming accredited investors and being able to invest in AI companies. (Assuming, as Tom does, that the humans are getting AI assistance and therefore are at no competence disadvantage.)
I was responding to “ppl getting AIs to invest on their behalf, just like VCs invest on ppl’s behalf today. It seems like we need fairly egregious misalignment for this to fail, no?”
I’m saying that one way that “humans live off index funds” fails, even today, is that it’s illegal for almost every human to participate in many of the biggest wealth creation events. You’re right that most AIs would probably also be barred from participating in most wealth creation events, but the ones that do (maybe by being hosted by, or part of, the new hot corporations) can scale / reproduce really quickly to double down on whatever advantage they have from being in the inner circle.
> You’re right that most AIs would probably also be barred from participating in most wealth creation events, but the ones that do (maybe by being hosted by, or part of, the new hot corporations) can scale / reproduce really quickly to double down on whatever advantage they have from being in the inner circle.
I still don’t understand why the AIs that have access would be able to scale their influence more quickly than the AI-assisted humans who have the same access.
(Note that Tom never talked about index funds, just about humans investing their money with the help of AIs, which should allow them to stay competitive with AIs. You brought up one way in which some humans are restricted from investing their money, but IMO that constraint applies at least as strongly to AIs as to humans, so I just don’t get how it gives AIs a relative competitive advantage.)
Overall, I think this consideration favours economic power concentration among the humans who are legally allowed to invest in the most promising opportunities and have AI advisors to help them.
And, conversely, this would decrease the economic influence of other humans and AIs.