So my reasoning is something like:
1. There’s the high-level argument that AIs will recursively self-improve very fast.
2. There’s support for this argument from the example of humans.
3. There’s a rebuttal to that support from the concept of changing selection pressures.
4. There’s a counterrebuttal to changing selection pressures from my post.
By the time we reach the fourth level down, there’s not much scope left for updates to the original claim: at each level we lose confidence that we’re arguing about the right thing, and we’ve zoomed in far enough that we’re ignoring most of the relevant considerations.
I’ll make this more explicit.