On the claim that there is a <1% probability of AI existential risk:
That’s a serious constraint. What possible argument, short of literally demonstrating a working AGI, could do that to the epistemic state on a question this confusing? Imagining a future where AI is not an existential risk is easy (and there are many good arguments for that outcome being more likely than one would expect, just as there are many good arguments for it being less likely than one would expect). But imagining a present where AI is known, with 99% probability, not to be an existential risk (equivalently, to pose at most a 1% risk), despite AGI not having been built yet, doesn’t work for me.
Maybe there is a 0.1% probability (I did roughly try to assess the order of magnitude of this number) that in 15 years the world’s state of knowledge builds up to a point where that epistemic state becomes thinkable (conditional on actual AGI not having been built). That would most likely require shockingly better alignment theory, together with an expectation that less-aligned AGIs either can’t be built first (as in alignment-by-default) or won’t be.