Did you actually mention a downside vs. no regulation?
I think this is a vitally important question that we should be discussing.
The best argument I’ve heard is that regulation slows down progress even when it’s not well done.
Regulatory capture is a real thing, but even that would limit the number of companies developing advanced AI.
Limiting advanced AI to a few companies is guaranteed to make for normal dystopian outcomes; its badness is in-distribution for our civilization. Justifying an all-but-certain bad outcome by speculative x-risk is just religion. (AI x-risk in the medium term is not at all in-distribution, and it is very difficult to bound its probability in any direction. I.e., it’s Pascal’s mugging.)
Huh? X-risk isn’t speculative; it’s theory. If you don’t believe in theory, you may be in the wrong place. It’s one thing to think a theory is wrong and argue against it. It’s another to dismiss it as “just theory”.
It’s not Pascal’s Mugging, because that is about a priori incredibly unlikely outcomes. Everyone with any sort of actual argument or theory puts AI x-risk between 1% and 99% likely.
Extinction-level AI x-risk is a plausible theoretical model, but one without any empirical data for or against it.
I think there’s plenty of empirical data, but there’s disagreement over what counts as relevant evidence and how it should be interpreted. (E.g. Hanson and Yudkowsky both cited a number of different empirical observations in support of their respective positions, back during their debate.)
Right. I’d think the only “empirical evidence” that counts is evidence accepted by both sides as good. I can’t think of any good examples.
Right. So a really epistemically humble estimate, treating “extinction” and “no extinction” as equally likely in the absence of evidence, would put the extinction risk at 50%. I realize this is arguable, and I think you can bring a lot of relevant indirect evidence to bear. But the claim that it’s epistemically humble to estimate a low risk seems very wrong to me.
I agree that either a very low or a very high estimate of the probability of extinction due to AI is not epistemically humble. I asked a question about it: https://www.lesswrong.com/posts/R6kGYF7oifPzo6TGu/how-can-one-rationally-have-very-high-or-very-low
Ah! I read that post, so it was probably partly shaping my response. I had been thinking about this since Tyler Cowen invoked “epistemic humbleness” as a reason for not worrying much about AI x-risk. I think he applies similar probabilities to all of the futures he can imagine, with human extinction being only one of many. But that’s succumbing to availability bias in a big way.
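To make the availability-bias worry concrete (a toy illustration of my own, not anything Cowen actually computed): if you spread probability uniformly over whatever futures happen to come to mind, the extinction number is set entirely by how finely you carve up the non-extinction futures.

$$P(\text{extinction}) \approx \frac{1}{N}, \qquad N = \text{number of futures you happened to imagine}$$

Carve the future into “extinction” vs. “no extinction” and you get 50%; list twenty scenarios of which extinction is one and you get 5%. The partition, not any evidence, is doing all the work.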
I agree with you that a 99% p(doom) estimate is not epistemically humble, and I think it sounds hubristic and causes negative reactions.