Last week we wrapped the second post-AGI workshop; I’m copying across some reflections I put up on twitter:
The post-AGI question is very interdisciplinary: whether an outcome is truly stable depends not just on economics and the shape of future technology but also on things like the nature of human ideological progress and the physics of interplanetary civilizations
Some concrete takeaways:
proper global UBI is *enormously* expensive (h/t @yelizarovanna)
instead of ‘lower costs’, we should talk about relative prices (h/t @akorinek)
lots of human values are actually pretty convergent—they’re shared by many animals (h/t @BerenMillidge)
Among the many tensions between perspectives, one of the more productive ones was between the ‘alignment is easy so let’s try to solve the rest’ crowd and the ‘alignment is hard and maybe this will make people realise they should fully halt AGI’ crowd. Strange bedfellows!
It’s hard to avoid partisan politics, but part of what’s weird about AGI is that it can upend basic political assumptions. Maybe AGI will outperform the invisible hand of the market! Maybe governments will grow so powerful that revolution is literally impossible!
Funnily enough, it seems like the main reason people got less doomy was seeing that other people were working hard on the problem, and the main reason people got more doomy was thinking about the problem themselves. Maybe selection effects? Maybe not?
Compared to last time, even if nobody had good answers to how the world could be nice for humans post-AGI, it felt like we were at least beginning to converge on certain useful perspectives and angles of attack, which seems like a good sign
Overall, it was a great time! The topic is niche enough that it self-selects a lot for people who actually care, and that is proving to be a very thoughtful and surprisingly diverse crowd. Hopefully soon we’ll be sharing recordings of the talks!
Bonus: Two other reactions from attendees
https://danmackinlay.name/post/neurips2025#post-agi-workshop
https://x.com/carl_feynman/status/1997370228842344565
Thanks to all who came, and especially to @DavidDuvenaud, @jankulveit, @StephenLCasper, and Maria Kostylew for organising!
>proper global UBI is *enormously* expensive (h/t @yelizarovanna)
This seems wrong. There will be huge amounts of wealth post-ASI. Even a relatively small UBI (e.g. 1% of AI companies) will be enough to support way better QOL for everyone on earth. Moreover, everything will become way cheaper because of efficiency gains downstream of AI. Even just at AGI, I think it’s plausible that physical labour is something like 10x cheaper and cognitive labour is something like 1000x cheaper.
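A minimal back-of-envelope sketch of this claim, where every number is an illustrative assumption on my part (current world GDP, hypothetical growth multiples, population), and where I'm reading "1% of AI companies" as a 1% claim on AI-driven world output:

```python
# Back-of-envelope: what a small claim on a much larger post-AGI economy
# could pay out per person. Every figure here is an illustrative assumption.

WORLD_GDP_TODAY = 100e12          # ~$100 trillion per year, rough current figure
GROWTH_MULTIPLES = [1, 10, 100]   # hypothetical sizes of a post-AGI economy
CLAIM_SHARE = 0.01                # the "1%" above, read as a 1% claim on output
POPULATION = 8.1e9                # ~8.1 billion people

for mult in GROWTH_MULTIPLES:
    economy = WORLD_GDP_TODAY * mult
    per_person = economy * CLAIM_SHARE / POPULATION
    print(f"{mult:>4}x today's economy: a {CLAIM_SHARE:.0%} claim pays "
          f"~${per_person:,.0f} per person per year")
```

At today's scale a 1% slice works out to something on the order of $100 per person per year; it takes one to two orders of magnitude of growth before the same slice starts to resemble a livable income, which is roughly the tension the reply below turns on.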
Sorry! I realise now that this point was a bit unclear. My sense of the expanded claim is something like:
People sometimes talk about AI UBI/UBC as if it were basically a scaled-up version of the UBI people normally talk about, but it’s actually pretty substantially different
Global UBI right now would be incredibly expensive
In between now and a functioning global UBI we’d need some mix of massive taxes and massive economic growth (which could indeed just be the latter!)
But either way, the world in which that happened would not be economics as usual
(And maybe it is also a huge mess trying to get this set up beforehand so that it’s robust to the transition, or afterwards when the people who need it don’t have much leverage)
For my part I found this surprising because I hadn’t reflected on the sheer orders of magnitude involved, and the fact that any version of this basically involves passing through some fragile craziness. Even if it’s small as a proportion of future GDP, it would in absolute terms be tremendously large.
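To make the orders of magnitude concrete, here is a minimal sketch of the kind of arithmetic involved; the benefit levels, world GDP figure, and government-revenue share are my own rough assumptions, not numbers from the workshop:

```python
# Rough cost of a global UBI at today's scale, versus the size of the world
# economy and of all government revenue. Every figure is a coarse assumption.

POPULATION = 8.1e9          # ~8.1 billion people
WORLD_GDP = 100e12          # ~$100 trillion per year
GOV_REVENUE_SHARE = 0.30    # assume roughly 30% of world GDP passes through governments

for annual_benefit in (1_000, 5_000, 15_000):   # $/person/year, illustrative levels
    total_cost = annual_benefit * POPULATION
    print(
        f"${annual_benefit:>6,}/yr each -> ~${total_cost / 1e12:5.1f}T/yr "
        f"({total_cost / WORLD_GDP:.0%} of world GDP, "
        f"{total_cost / (WORLD_GDP * GOV_REVENUE_SHARE):.0%} of all government revenue)"
    )
```

Even at benefit levels far below rich-country poverty lines, the totals are comparable to all government revenue on Earth, which is the fragile stretch in between: you only get there via enormous new taxes, enormous growth, or both.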
I separately think there was something important to Korinek’s claim (which I can’t fully regenerate) that the relevant thing isn’t really whether stuff is ‘cheaper’, but rather the prices of all of these goods relative to everything else going on.
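A toy illustration of (what I take to be) the relative-prices point, using made-up cost multipliers that loosely echo the 10x/1000x figures above rather than anything Korinek actually said:

```python
# Toy illustration of absolute vs relative prices after uneven productivity gains.
# The cost multipliers are made up, loosely echoing the 10x / 1000x figures above.

cost_falls = {
    "cognitive labour": 1000,  # assumed to get 1000x cheaper
    "physical labour": 10,     # assumed to get 10x cheaper
    "land": 1,                 # assumed not to get cheaper at all
}

numeraire = "cognitive labour"  # measure every price in units of cognitive labour

for good, fall in cost_falls.items():
    factor = cost_falls[numeraire] / fall
    print(f"{good:>16}: price in units of {numeraire} changes by {factor:,.0f}x")
```

Everything is cheaper in absolute terms, yet whatever doesn't ride the steepest productivity curve gets drastically dearer relative to the thing most people currently sell (their labour), which is why 'lower costs' on its own is the wrong frame.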
I was also there, and my take is there was actually fairly little specific, technical discussion about the economics and politics of what happens post-AGI. This is mostly because it isn’t anyone’s job to think about these questions, and only somewhat because they’re inherently hard questions. Not really sure what I would change.
>it seems like the main reason people got less doomy was seeing that other people were working hard on the problem [...]
This would be v surprising to me!
It seems like, to the extent that we’re less doomy about survival/flourishing, this isn’t bc we’ve seen a surprising amount of effort, and think effort is v correlated with success. It’s more like: our observations increase our confidence that the problem was easy all along, or that we have been living in a ‘lucky’ world all along.
I might ask you about this when I see you next—I didn’t attend the workshop so maybe I’m just wrong here.
You mean post-AGI and pre-ASI?
I agree that will be a tricky stretch even if we solve alignment.
Post-ASI, the only question is whether it’s aligned, or intent-aligned to a good person (or people). It takes care of the rest.
One solution is to push fast from AGI to ASI.
With an aligned ASI, other concerns are largely (understandable) failures of the imagination. The possibilities are nearly limitless. You can find something to love.
This is under a benevolent sovereign. The intuitively appealing balances of power seem really tough to stabilize long term or even short term during takeoff.
I’m not at all surprised by the assertion that humans share values with animals. When you consider that selective pressures act on all systems (which is to say that every living system has to engage with the core constraints of visibility, cost, memory, and strain), it’s not much of a leap to conclude that there would be shared attractor basins where values converge over evolutionary timescales.
Do you have a link to any of the global UBI math?