I appreciate the analysis of talent-vs-funding constraints. I think the bar-for-useful-contribution is so high that we loop back around to “we need to spend more money (and effort) on finding (and making) more talent”, and the programs for doing that may themselves be more funding-constrained than talent-constrained.
Like, the 20th century had some really good mathematicians and physicists, and the US government spared little expense in finding them, getting them what they needed, and so forth. Top basketball teams will “check up on anyone over 7 feet that’s breathing”.
Consider how huge Von Neumann’s expense account must’ve been, between all the consulting and flight tickets and car accidents. Now consider that we don’t seem to have Von Neumanns anymore. There are caveats to at least that second point, but the overall problem still hasn’t been “fixed”.
Things an entity with absurdly greater funding (e.g. the US Department of Defense) could probably do with that funding and its correspondingly greater coordination power:
- Indefinitely-long-timespan basic minimum income for everyone who works on AI alignment (or is seriously trying to).
- Coordinating, possibly by force, every AI alignment researcher and aspiring alignment researcher on Earth to move to one place that doesn’t have high rents like the Bay. Possibly up to and including creating that place and making rent free for those who are accepted in.
- Enforcing a global shutdown of large ML training runs.
- An entire school system (or at least an entire network of universities, with university-level funding) focused on Sequences-style rationality in general and AI alignment in particular.
- Genetic engineering, focused training from a young age, or other extreme “talent development” setups.
- All of these at once.
I think the big logistical barrier here is something like “LTFF is not the US government”, or more precisely “nothing cool like this can be done ‘on the margin’ or with anything less than the full funding”. However, I think some of these could be scaled down into mere megaprojects.