Isn’t it more like “I think there’s a 10% chance of transformative AI by 2027, and that’s roughly 100x higher than what most people seem to think, so people really need to think through that timeline”?
That might be. It sounds really plausible. I don’t know why they wrote it!
But all the same: I don’t think most people know what a 10% likelihood of a severe outcome feels like, or how to think about it sensibly. My read is that the vast majority of people will end up treating a 10% likelihood of doom as either “It’s not going to happen” (because 10% is small) or “It’s guaranteed to happen” (because the outcome would be serious if it did happen, and it sounds plausible). So amplifying public awareness of this possibility seems to me more like moving people’s expectations from “Nothing existential is going to happen” to “This specific thing is the default thing to expect.”
So I expect that unless something is done to… I don’t know, magically educate the population on statistical thinking, or propagate a public message that it’s roughly right but its timeline is wrong… then the net effect will be that either (a) AI 2027 will have been collectively forgotten by 2028, in roughly the same way that, say, Trudeau’s use of the Emergencies Act has been forgotten; or (b) the predictions failing to pan out will be used as a reason to dismiss other AI doom predictions that are apparently considered more likely.
The main benefit I see is if AI 2027 gets some key people thinking about AI doom scenarios in general, and they start to work out how to deal with other scenarios.
But I don’t know. That’s been part of this community’s strategy for over two decades. Get key people thinking about AI risk. And I’m not too keen on the results I’ve seen from that strategy so far.