Analyzing from the outside, I agree.
The pesky thing is that we can’t fully analyze it from the outside. The analysis itself can be colored by a terror generator that has nothing to do with the objective situation.
So if there’s reason to think a subconscious distortion is happening to our collective reasoning, distinguishing between doom and nodoom might functionally have sorting out terror as a prerequisite. Which sucks, in the sense that if there’s doom, we don’t want to waste time working on things that aren’t related to it. But if we literally cannot tell what’s real due to distortions in perception, then sorting out those perception errors becomes the top priority.
(I’m describing it in black-and-white framing to make the logic clear, not to assert that we literally cannot tell at all what’s going on.)
When you can’t figure something out, you need to act under uncertainty. The question is still doom vs. no short-term doom. Even if you conclude “terror”, that is only an argument that the uncertainty can’t be resolved (at least not by the class of arguments terror contaminates), not an argument that doom has been ruled out (there still needs to be some prior). The “doom vs. terror” framing doesn’t adequately capture this.
Since 5-20% doom within 10 years is a relatively popular position, mixing in more nodoom (because terror made certain doom vs. nodoom arguments useless) doesn’t change this state of uncertainty too much; the decision-relevant implications remain about the same. That is, it’s still worth working on a mixture of short-term and longer-term projects, possibly even very long-term ones that almost inevitably won’t be fruitful before takeoff-capable AGI (because we might use the head start to prompt the AGIs into completing such projects before takeoff).
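To make the “doesn’t change the decision much” point concrete, here is a toy calculation. The discount factors and the move of treating the doom estimate as a single number to be scaled down are my own illustrative assumptions, not anything the comment commits to; it’s a sketch of the arithmetic, nothing more.

```python
# Toy calculation (all numbers are made up for illustration): start from
# the "popular" 5-20% doom-within-10-years range, discount the
# doom-favoring arguments that a terror generator could have inflated,
# and check whether the decision-relevant picture changes.

def discounted_doom(p_doom: float, terror_discount: float) -> float:
    """Shift probability mass from doom to nodoom.

    terror_discount is the (hypothetical) fraction of the doom estimate
    attributed to terror-driven distortion rather than the objective
    situation.
    """
    return p_doom * (1.0 - terror_discount)

for p in (0.05, 0.20):                 # endpoints of the popular range
    for discount in (0.0, 0.25, 0.5):  # hypothetical distortion levels
        q = discounted_doom(p, discount)
        print(f"p_doom={p:.2f}, discount={discount:.2f} -> {q:.3f}")

# Even discounting half of the estimate leaves roughly 2.5-10% doom
# within 10 years, which still supports a mixed portfolio of short-term
# and longer-term projects.
```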