When you can’t figure something out, you need to act under uncertainty. The question is still doom vs. no short-term doom. Even if you conclude “terror”, that is only an argument that the uncertainty is unresolvable (by some class of arguments that would otherwise help), not an argument that doom has been ruled out (there still needs to be some prior). The “doom vs. terror” framing doesn’t adequately capture this.
Since 5-20% doom within 10 years is a relatively popular position, mixing in more nodoom because terror made certain doom vs. nodoom arguments useless doesn’t change this state of uncertainty too much; the decision-relevant implications remain about the same. That is, it’s still worth working on a mixture of short-term and longer-term projects, possibly even very long-term ones that almost inevitably won’t be fruitful before takeoff-capable AGI (because we might use the head start to prompt the AGIs into completing such projects before takeoff).
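The robustness claim above can be made concrete with a toy calculation. This is only an illustrative sketch with hypothetical numbers (the starting estimate, fallback estimate, and discount weights are all made up for illustration): it blends an original doom estimate with a more nodoom-leaning fallback, weighted by how much of the original evidence the “terror” conclusion invalidates, and shows the adjusted estimate stays inside the 5-20% range that carries the same decision-relevant implications.

```python
# Toy sketch with hypothetical numbers: discounting some doom arguments
# (because "terror" made them useless) only moves a 5-20% estimate modestly.

def mixed_estimate(p_doom: float, p_fallback: float, weight_discounted: float) -> float:
    """Blend the original doom estimate with a nodoom-leaning fallback,
    weighted by the fraction of the original evidence being discounted."""
    return (1 - weight_discounted) * p_doom + weight_discounted * p_fallback

p0 = 0.15        # hypothetical starting estimate, within the popular 5-20% range
fallback = 0.05  # hypothetical estimate if the discounted arguments carried no weight

for w in (0.1, 0.3, 0.5):
    adjusted = mixed_estimate(p0, fallback, w)
    print(f"discount weight {w}: adjusted p(doom) = {adjusted:.3f}")
```

Even discounting half the evidence only moves the estimate from 0.15 to 0.10, still well inside the range where the same mixture of short-term and long-term projects is worthwhile.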