My website is https://www.cs.toronto.edu/~duvenaud/
David Duvenaud
Language suggestion: I would replace “random sampling” everywhere in this article with “simple Monte Carlo”, which means “estimating an expectation by taking the mean of samples from the distribution”. There are lots of sampling-based methods that are more sophisticated than, and lower-variance than, simple Monte Carlo, such as importance sampling. In fact, you might be able to use the methods you outlined to build better proposal distributions for importance sampling. This would still be a sampling-based method, but better than simple Monte Carlo.
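To make the distinction concrete, here's a toy sketch (my own, not from the article): estimating the rare-event probability P(X > 4) for X ~ N(0, 1). Simple Monte Carlo is extremely noisy here, while importance sampling with a proposal centred near the rare region does much better. The N(4, 1) proposal is just an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def f(x):
    # Rare-event indicator: 1 if x > 4, else 0.
    return (x > 4.0).astype(float)

# Simple Monte Carlo: average f over samples drawn directly from p = N(0, 1).
x_p = rng.normal(0.0, 1.0, n)
simple_mc = f(x_p).mean()

# Importance sampling: sample from the proposal q = N(4, 1) and
# reweight each sample by p(x) / q(x).
x_q = rng.normal(4.0, 1.0, n)
log_w = -0.5 * x_q**2 + 0.5 * (x_q - 4.0)**2  # log p(x) - log q(x); normalizers cancel
importance = (f(x_q) * np.exp(log_w)).mean()

print(simple_mc, importance)  # true value is about 3.2e-5; the IS estimate is far less noisy
```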
Upcoming Workshop on Post-AGI Economics, Culture, and Governance
Hmmm, maybe we got mixed up somewhere along the way, because I was also trying to argue that humans won’t keep more money than AI in the Malthusian limit!
Summary of our Workshop on Post-AGI Outcomes
I think it matters bc AIs won’t be able to save any money. They’ll spend all their wages renting compute to run themselves on. So it blocks problems that stem from AIs having more disposable income and therefore weighing heavily on economic demand signals.
This doesn’t make sense to me, and sounds like it proves too much—something like “Corporations can never grow because they’ll spend all their revenue on expenses, which will be equal to revenue due to competition”. Sometimes AIs (or corporations) will earn more than their running costs, and invest those in growth, and end up with durable advantages due to things such as returns to scale or network effects.
I was responding to “ppl getting AIs to invest on their behalf, just like VCs invest on ppl’s behalf today. It seems like we need fairly egregious misalignment for this to fail, no?”
I’m saying that one way that “humans live off index funds” fails, even today, is that it’s illegal for almost every human to participate in many of the biggest wealth creation events. You’re right that most AIs would probably also be barred from participating in most wealth creation events, but the ones that do (maybe by being hosted by, or part of, the new hot corporations) can scale / reproduce really quickly to double down on whatever advantage they have from being in the inner circle.
I hope it’s not presumptuous to respond on Jan’s behalf, but since he’s on vacation:
> It’s more than just index funds. It’s ppl getting AIs to invest on their behalf, just like VCs invest on ppl’s behalf today. It seems like we need fairly egregious misalignment for this to fail, no?

Today, in the U.S. and Canada, most people have no legal way to invest in OpenAI, Anthropic, or xAI, even if they have AI advisors. Is this due to misalignment, or just a mostly unintended outcome from consumer protection laws, and regulation disincentivizing IPOs?
> If income switches from wages to capital income, why does it become more load bearing?
Because the downside of a one-time theft is bounded if you can still make wages. If I lose my savings but can still work, I don’t starve. If I’m a pensioner and I lose my pension, maybe I do starve.
> humans will own/control the AIs producing culture, so they will still control this determinant of human preferences.
Then why do humans already farm clickbait? It seems like you think many humans wouldn’t direct their AIs to make them money / influence by whatever means necessary. And it won’t necessarily be individual humans running these AIs; it’ll be humans who own shares of companies such as “Clickbait Spam-maxxing Twitter AI bot corp”, competing to produce the clickbaitiest content.
Oh, makes sense. Kind of like Yudkowsky’s arguments about how you don’t know how a chess master will beat you, just that they will. We also can’t predict exactly how a civilization will disempower its least productive and sophisticated members. But a fool and his money are soon parted, except under controlled circumstances.
Thanks for the detailed feedback, argumentation, and criticism!
There’s still a real puzzle about why Xi/Trump/CEOs can’t coordinate here after they realise what’s happening.
Maybe it’s unclear even to superintelligent AIs where this will lead, but it in fact leads to disempowerment. Or maybe the AIs aren’t aligned enough to tell us it’s bad for us.
I agree that having truthful, aligned AGI advisors might be sufficient to avoid coordination failures. But then again, why do current political leaders regularly appoint or listen to bad advisors? Steve Byrnes had a great list of examples of this pattern, which he calls “conservation of wisdom”.
> why not deploy aligned AI that makes as much money as possible and then uses it for your interests? maybe the successionism means ppl choose not to? (Seems weak!)
For the non-rich, one way or another, they’ll quickly end up back in Malthusian competition with beings that are more productive and have much more reproductive flexibility than they do.
For the oligarchs / states, as long as human reproduction remained slow, they could easily use a small amount of their fortunes to keep humanity alive. But there are so many possible forms of successionism that I expect at least one of them to be more appealing to a given oligarch / government than letting humans-as-they-are continue to consume substantial physical resources. E.g.:
- Allow total reproductive freedom, which ends up Goodharting whatever UBI / welfare system is in existence with “spam humans”, e.g. just-viable frozen embryos with uploaded / AI brains legally attached.
- Some sort of “greatest hits of humanity” sim that replays the human qualia involved in humanity’s greatest achievements, best days, etc. Or, support some new race of AGIs that are fine-tuned to simulate the very best of humanity (according to the state).
- Force everyone to upload to save money, and also to police / abolish extreme suffering. Then selection effects turn the remaining humans into full-time activists / investors / whatever the government or oligarchs choose to reward. (This also might be what a good end looks like, if done well enough.)
> I buy you could get radical cultural changes. [...] But stuff as big as in this story feels unlikely. Often culture changes radically bc the older generation dies off, but that won’t happen here.
Good point, but imo old people’s influence mostly wanes well before they die, as they become unemployed, out-of-touch, and isolated from the levers of cultural production and power. Which is what we’re saying will happen to almost all humans, too.
Another way that culture changes radically is through mass immigration, which will also happen, in effect, as people spend more time interacting with effectively more-numerous AIs.
> If people remained economically indispensable, even fairly serious misalignment could have non catastrophic outcomes.
Good point. Relatedly, even the most terribly misaligned governments mostly haven’t starved or killed a large fraction of their citizens. In this sense, we already survive misaligned superintelligence on a regular basis. But only when, as you say, people remain economically indispensable.
> Someone I was explaining it to described it as “indefinite pessimism”.
I think this is a fair criticism, in the sense that it’s not clear what could make us happy about the long-term future even in principle. But to me, this is just what being long-term agentic looks like! I don’t understand why so many otherwise-agentic people I know seem content to YOLO it post-AGI, or seem to be reassured that “the AGI will figure it out for us”.
> Even if AIs do earn wages, those wages may be driven down to subsistence levels via Malthusian dynamics (you can quickly make more compute) so that human income from capital assets dominates AI income.
Why does it matter whether AIs’ wages are subsistence-level? This seems to prove too much, e.g. “monkeys won’t be threatened by human domination of the economy, since the humans will just reproduce until they’re at subsistence level.”
> Even if AIs earn significant non-subsistence wages, humans can easily tax that income at >50% and give it to humans.
Maybe—but taxing machine income seems to me to be similarly difficult to taxing corporate income. As a machine, you have many more options to form a legal super-organization and blur the lines between consumption, trade, employment, and capex.
How did it go? Any recurring questions, or big conceptual updates?
Sounds like we’re in the same boat!
Hard agree. It’s ironic that it took hundreds of years to get people to accept the unintuitive positive-sum-ness of liberalism, libertarianism, and trade. But now we might have to convince everyone that those seemingly-robust effects are likely to go away, and that governments and markets are going to be unintuitively harsh.
There are several important “happy accidents” that allowed almost everyone to thrive under liberalism but are likely to go away:
- Not usually enough variation in ability to allow sheer domination (though this is not surprising, due to selection—everyone who was completely dominated is mostly not around anymore).
- Predictable death from old age as a leveler preventing power lock-in.
- Sexual reproduction (and deleterious effects of inbreeding) giving gains to intermixing beyond family units, and reducing the all-or-nothing stakes of competition.
- Not usually enough variation in reproductive rates to pin us to Malthusian equilibria.
I’m afraid you might be right, though maybe something like “transhumanist North Korea” is the best we can hope for while remaining meaningfully human. Care to outline, or link to, other options you have in mind?
Right, like many people I agree that in most cases derandomization is a good idea if you can afford a little extra code complexity. But the structure of your argument makes it sound like derandomization is the important part, when actually it's taking advantage of structure that matters, sampling or not.
Concretely, I would change “Understanding structure helps outperform sampling” to simply “Understanding structure helps”.
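As a toy illustration of that point (my own sketch, not from your article): stratified sampling is still a sampling-based method, but it exploits the integrand's smooth structure, and that exploitation is where nearly all of the accuracy gain comes from.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.exp            # integrand on [0, 1]; the true integral is e - 1
n = 1_000

# Simple Monte Carlo: n i.i.d. uniform samples.
simple_mc = f(rng.uniform(0.0, 1.0, n)).mean()

# Stratified sampling: one uniform sample per sub-interval [i/n, (i+1)/n).
# Still randomized, but it uses the structure (f varies smoothly across [0, 1]).
strata = (np.arange(n) + rng.uniform(0.0, 1.0, n)) / n
stratified = f(strata).mean()

print(abs(simple_mc - (np.e - 1)), abs(stratified - (np.e - 1)))
# The stratified estimate's error is typically orders of magnitude smaller.
```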