I’m not totally sure of this, but it looks to me like there’s already more scientific consensus around mirror life being a threat worth taking seriously, than is the case for AI. E.g., my impression is that this paper was largely positively received by various experts in the field, including experts that weren’t involved in the paper. AI risk looks much more contentious to me even if there are some very credible people talking about it. That could be driving some of the difference in responses, but yeah, the economic potential of AI probably drives a bunch of the difference too.
I sorta agree, but sorta don’t. Remember the CAIS statement? There have been plenty of papers about AI risk that were positively received by various experts in the field who were uninvolved in those papers. I agree that there is more contention about AI risk than about chirality risk though… which brings me to my other point, which is that part of the contention around AGI risks seems to be downstream of the incentives rather than downstream of scientific disputes. Like, presumably the fact that there are already powerful corporations that stand to make tons of money from AI is part of why it’s hard to get scientists to agree on things like “we should ban it” even when they’ve already agreed “it could kill us all,” and part of why it’s hard to get them to even agree “it could kill us all” even when they’ve already agreed “it will surpass humans across the board soon, and also, we aren’t ready” and part of why it’s hard to get them to agree “it will surpass humans across the board soon” even as all the evidence piles up over the last few years.
IMO, AI safety faces a few problems at once: the science of how to make AIs safe is only partially worked out (though it has made progress); the evidence base of the AI field, especially on the big questions like deceptive alignment, is way smaller than in a lot of other fields (for several reasons); and then there's your last point about companies' incentives to make AI more powerful.
Add them all up, and it’s a tricky problem.