I hope it’s not presumptuous to respond on Jan’s behalf, but since he’s on vacation:
> It’s more than just index funds. It’s ppl getting AIs to invest on their behalf, just like VCs invest on ppl’s behalf today. It seems like we need fairly egregious misalignment for this to fail, no?
Today, in the U.S. and Canada, most people have no legal way to invest in OpenAI, Anthropic, or xAI, even if they have AI advisors. Is this due to misalignment, or just a mostly unintended outcome of consumer protection laws and regulations disincentivizing IPOs?
> If income switches from wages to capital income, why does it become more load bearing?
Because the downside of a one-time theft is bounded if you can still earn wages. If I lose my savings but can still work, I don’t starve. If I’m a pensioner and I lose my pension, maybe I do starve.
> humans will own/control the AIs producing culture, so they will still control this determinant of human preferences.
Why do humans already farm clickbait? It seems like you think many humans wouldn’t direct their AIs to make them money / influence by whatever means necessary. And it won’t necessarily be individual humans running these AIs; it’ll be humans who own shares of companies such as “Clickbait Spam-maxxing Twitter AI bot corp”, competing to produce the clickbaitiest content.
Sorry if this is missing your point — but why would AIs of the future have a comparative advantage relative to humans, here? I would think that humans would have a much easier time becoming accredited investors and being able to invest in AI companies. (Assuming, as Tom does, that the humans are getting AI assistance and therefore are at no competence disadvantage.)
I was responding to “ppl getting AIs to invest on their behalf, just like VCs invest on ppl’s behalf today. It seems like we need fairly egregious misalignment for this to fail, no?”
I’m saying that one way that “humans live off index funds” fails, even today, is that it’s illegal for almost every human to participate in many of the biggest wealth creation events. You’re right that most AIs would probably also be barred from participating in most wealth creation events, but the ones that do participate (maybe by being hosted by, or part of, the new hot corporations) can scale / reproduce really quickly to double down on whatever advantage they have from being in the inner circle.
I still don’t understand why the AIs that have access would be able to scale their influence more quickly than the AI-assisted humans who have the same access.
(Note that Tom never talked about index funds, just about humans investing their money with the help of AIs, which should allow them to stay competitive with AIs. You brought up one way in which some humans are restricted from investing their money, but IMO that constraint applies at least as strongly to AIs as to humans, so I just don’t get how it gives AIs a relative competitive advantage.)
Overall, I think this consideration favours economic power concentration among the humans who are legally allowed to invest in the most promising opportunities and have AI advisors to help them.
And, conversely, this would decrease the economic influence of other humans and AIs.