Complementarity between humans and AIs. I see plausible arguments for low complementarity owing to big advantages from full automation, but it seems pretty clear there will be some complementarity, i.e. that combined output will exceed (AI-alone output) + (human-alone output). Today there is obviously massive complementarity. Even modest amounts of complementarity significantly slow down takeoff. I believe there is a significant chance (perhaps 30%?) that complementarity from horizon length alone is sufficient to drive an unambiguously slow takeoff.
This is a big crux: I believe complementarity is very low, low enough that in practice it can be ignored.
And I think Amdahl’s law severely suppresses complementarity; this is a crux in that if I changed my mind about it, I would consider slow takeoff likely.
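The Amdahl's-law intuition here can be made concrete with a minimal sketch (the function name and the illustrative numbers are mine, not from the discussion above): if only a fraction of total work can be accelerated by AI while the rest remains bottlenecked on humans, the un-accelerated fraction caps the overall speedup no matter how fast the AI part gets.

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Amdahl's law: overall speedup when a fraction p of the work
    is sped up by a factor s and the remaining (1 - p) is not."""
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative numbers: even if AIs accelerate 90% of tasks a
# million-fold, the 10% still gated on humans caps the overall
# speedup just below 10x.
print(amdahl_speedup(0.9, 1e6))
```

On this view, the human-gated residual dominates, which is why strong complementarity in the remaining tasks does little to speed things up overall.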