a remotely realistic-seeming story for how things will be OK, without something that looks like coordination to not build ASI for quite a while
My mainline scenario is something like:
LLM scaling and tinkering peters out in the next few years without ever reaching the capability for autonomous R&D. LLMs end up good enough to displace some entry-level jobs, but the hype bubble bursts and we enter a new AI winter lasting at least a couple of decades.
The “intelligence” thingie turns out to be genuinely hard and not amenable to a bag of simple tricks plus a mountain of compute, for reasons gestured at in Realism about rationality. Never mind ASI; we’re likely very far from being able to instantiate an AGI worthy of the name, and that won’t happen while we remain essentially clueless about this stuff.
I also expect each subsequent metaphorical AI “IQ point” to be harder to achieve than the last, not easier, so no foom or swift takeover. Of course, even granting all of that, it still doesn’t guarantee that “things will be OK”, but I remain quite uncertain either way.