… Did … X-risk from non-AI sources just … vanish somehow, that it’s mentioned nowhere in this “stop talking about it” post that supposedly lists many possibilities? Should I not be including the probability of existential catastrophe / survivable apocalypse / extinction event from non-AI sources in any p(doom) number I come up with? If so, where do those probabilities go instead?
Great point! I focused on AI risk since that’s what most of the people I’m familiar with are talking about right now, but there are indeed other risks, and that’s yet another potential source of miscommunication: one person could report a high p(doom) out of concern about bioterrorism, and another could interpret that as concern about AI.
Most people putting lots of effort into calculating AI x-risk seem to subsume the other kinds of risk (call them human x-risk) into AI, believing that AI will either solve them or accelerate them, but either way be pivotal such that AI is all that matters.
Personally, I tend to think we were already on the path to disaster (perhaps not permanent, though it might be; at least a few thousand years of civilizational regression), and AI is not likely either to solve it or to go so far awry as to be the proximal cause of doom. It WILL be an accelerator, just as it’s turning into an accelerator for everything. But the seeds were planted long ago, and the fault lies purely with human decisions.
So if I had a model, like (toy numbers follow):
0.5% chance nature directly causes doom
5% chance AI directly causes an avoidable doom
15% chance AI directly solves an avoidable (human or natural) doom
50% chance humans cause doom with or without AI
99% chance AI accelerates both problems and solutions for doom
… what single number does that actually subsume into? (I could most likely calculate at least one literal answer to this, but you can see the communication problems from the example, I hope.)
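For concreteness, here’s a minimal sketch of two of those literal answers (the aggregation rules and independence assumptions are purely illustrative on my part, not claims about how these risks actually interact):

```python
# Two hypothetical ways to collapse the toy numbers above into one p(doom).
# The point is that both are defensible readings, and they disagree.

p_nature    = 0.005  # nature directly causes doom
p_ai_causes = 0.05   # AI directly causes an avoidable doom
p_ai_solves = 0.15   # AI directly solves an avoidable (human or natural) doom
p_human     = 0.50   # humans cause doom with or without AI

# Reading 1: treat the three doom sources as independent and ignore AI's rescues.
reading_1 = 1 - (1 - p_nature) * (1 - p_human) * (1 - p_ai_causes)

# Reading 2: same independence assumption, but let "AI solves an avoidable doom"
# cancel that fraction of the human/natural doom mass before combining.
human_or_nature = 1 - (1 - p_nature) * (1 - p_human)
reading_2 = 1 - (1 - human_or_nature * (1 - p_ai_solves)) * (1 - p_ai_causes)

print(f"reading 1: {reading_1:.3f}")  # ~0.527
print(f"reading 2: {reading_2:.3f}")  # ~0.456

# The 99% "AI accelerates both problems and solutions" number has no obvious
# place in either reading, which is part of the communication problem.
```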
I think one of the points of this post is that you shouldn’t have or communicate one single number. These are different things, and you at the very least need to quantify “avoidable”, and figure out the correlations between them (like “human-caused degradation would reduce the world population by 90% if AI didn’t extend our ability to cope by 30 years, but then some other aspect of the fragility of human expectations makes civilization collapse anyway”).
At some point (which we’re well past in most discussions around here), it becomes too complex and FAR too dependent on assumptions with very large error bars (and on very large conditionals about surprising levels of human coordination) for there to be any way to predict the outcomes. About the best you can do is very large buckets of “somewhat likely”, “rather unlikely”, and “not gonna worry about it”, with another dimension of “how much, if any, will my actions change things”, also focused on paths wide enough that you’re not basing them on insane multiplication of very small/large made-up numbers.
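To make the correlation point concrete, here’s a toy illustration (with invented numbers, not the ones above) of how much the combined figure floats around depending purely on how two risks are assumed to overlap:

```python
# Two hypothetical risks with fixed marginal probabilities; the combined p(doom)
# is not pinned down until you also specify how much they overlap.

p_risk_a = 0.50  # invented number for some human-caused collapse scenario
p_risk_b = 0.30  # invented number for some AI-caused collapse scenario

# If the scenarios are independent:
independent = 1 - (1 - p_risk_a) * (1 - p_risk_b)   # 0.65

# If risk B only ever happens in worlds where risk A happens anyway
# (maximal positive correlation given these marginals):
nested = max(p_risk_a, p_risk_b)                     # 0.50

# If they never co-occur (maximal negative correlation):
disjoint = min(1.0, p_risk_a + p_risk_b)             # 0.80

print(independent, nested, disjoint)
```

And that spread only widens once you start chaining conditionals like the population/coping-window example above.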
‘Avoidable’ in the above toy numbers means purely this:
1 - avoidable doom directly caused by AI is in fact avoided if we destroy all (relevantly capable?) AI when testing for doom.
2 - avoidable doom directly caused by humans or nature is in fact avoided by AI technology we possess when testing for doom.
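Or, put mechanically (a toy formalization of my own framing, assuming each possible future can be summed up by two counterfactual outcomes):

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    # Hypothetical summary of one possible future by two counterfactual booleans.
    doom_with_ai: bool     # doom occurs in the branch where relevantly capable AI exists
    doom_without_ai: bool  # doom occurs in the branch where all such AI is destroyed

def avoidable_ai_doom(s: Scenario) -> bool:
    # 1 - doom directly caused by AI that is in fact avoided if we destroy all AI.
    return s.doom_with_ai and not s.doom_without_ai

def avoidable_doom_ai_solves(s: Scenario) -> bool:
    # 2 - human/natural doom that is in fact avoided by AI technology we possess.
    return s.doom_without_ai and not s.doom_with_ai
```

The obvious catch, of course, is that only one of those two branches ever gets observed.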
Still not sure I follow. “testing for doom” is done by experiencing the doom or non-doom-yet future at some point in time, right? And we can’t test under conditions that don’t actually obtain. Or do you have some other test that works on counterfactual (or future-unknown-maybe-factual) worlds?
Yeah, the test is just whether doom is experienced (and I have no counterfactual-world testing, useful as that would be).