So… I totally think there are people who sort of nod along with Paul, using it as an excuse to believe in a rosier world where things are more comprehensible and they can imagine themselves doing useful things without having a plan for solving the actual hard problems. Those types of people exist. I think there’s some important work to be done in confronting them with the hard problem at hand.
But, also… Paul’s world AFAICT isn’t actually rosier. It’s potentially more frightening to me. In Smooth Takeoff world, you can’t carefully plan your pivotal act with an assumption that the strategic landscape will remain roughly the same by the time you’re able to execute on it. Surprising partial-gameboard-changing things could happen that affect what sort of actions are tractable. Also, dumb, boring ML systems run amok could kill everyone before we even get to the part where recursive self improving consequentialists eradicate everyone.
I think there is still something seductive about this world – dumb, boring ML systems run amok feels like the sort of problem that is easier to reason about and maybe solve. (I don’t think it’s actually easier to solve, but I think it can feel that way.) And even if you solve the ML-run-amok problems, you still end up dead from recursive-self-improving consequentialists if you didn’t have a plan for them.
But, that seductiveness feels like a different problem to me than what’s getting argued about in this dialog. (This post seemed to mostly be arguing on the object level at Paul. I recall a previous Eliezer comment where he complained that Paul kept describing things in language that were easy to round off to “things are easy to deal with” even though Eliezer knew that Paul didn’t believe that. That feels more like what the argument here was actually about, but the way the conversation was conducted didn’t seem to acknowledge that.)
My current take on some object-level points in this post:
It (probably) matters what the strategic landscape looks like in the years leading up to AGI.
It might not matter if you have a plan for pivotal acts that you’re confident are resilient against the sort of random surprises that might happen in Smooth Takeoff World.
A few hypotheses that are foregrounded by this post include:
Smooth Takeoff World, as measured in GDP.
GDP mostly doesn’t seem like it matters except as a proxy, so I’m not that hung up on evaluating this. (That said, the “Bureaucracy and Thielian Secrets” model is interesting, and does provoke some interesting thoughts on how the world might be shaped.)
Smooth Takeoff World, as measured by “AI-breakthroughs-per-year-or-something”.
This feels like something that might potentially matter. I agree that AI-breakthroughs-per-year is hard to operationalize, but if AI is able to feed back into AI research that seems strategically relevant. I’m surprised/confused that Eliezer wasn’t more interested in exploring this.
Abrupt Fast Takeoff World, which is mostly like this one, except that suddenly someone has a decisive advantage and/or we’re all dead.
Chunky Takeoff World. Mostly listed for completeness. Maybe there won’t be a smooth hyperbolic curve all the way to FOOM; there might be a few discrete advances between here and there.
Eliezer’s arguments against Smooth-Takeoff-World generally don’t feel as ironclad to me as the arguments about FOOM. AFAICT he also only specified arguments in this post against Smooth-Takeoff-Measured-By-GDP. It seems possible that, e.g., DeepMind could start making AI advances that they use fully internally without running them through external bureaucracy bottlenecks. It’s possible that any sufficiently large organization develops its own internal bureaucracy bottlenecks, but also totally possible that all the smartest people at DeepMind talk to each other and the real work gets done in a way that cuts through it.
The “Bureaucracy Bottleneck as crux against Smooth Takeoff GDP World” was quite interesting for general worldmodeling, whether or not it’s strategically relevant. It does suggest it might be quite bad if the AI ecosystem figured out how to bypass its own bureaucracy bottlenecks.