My mental model does include those gradients even though I expressed them as categories.
I currently think it’s all too likely that decision-makers would accept a 20% or more chance of extinction in exchange for the benefits.
One route is to make better guesses about what happens by default. The other is to try to create better decisions by spreading the relevant logic.
Those who want to gamble will certainly push the "it should be fine and we need to do it!" logic. The two sets of beliefs will probably develop symbiotically; it's hard to separate emotional from rational reasons for beliefs.
It looks to me like people automatically convince themselves that what they want to do emotionally is also the logical thing to do. See "Motivated reasoning/confirmation bias as the most important cognitive bias" for a brief discussion.
Based on that logic, I actually think human cognitive biases and limitations are the biggest challenge to surviving ASI. We're silly creatures with a spark of reason.
I think there are just very few people for whom this is a compelling argument. I don't think governments are coming anywhere close to explicitly making this calculation. Some people in labs may be making this calculation, but they aren't actually the target audience for this.
I agree that governments aren't coming anywhere close to making this calculation at this point, though they very well might once they've actually thought about the issue. A lot will depend on their collective distribution of p(doom): I'd expect them to push ahead if they could convince themselves it was lower than maybe 20% or thereabouts. I'd love to be convinced they would be more cautious.
Of course I think it very much depends on who is in the relevant governments at that time. I think that the issue could play a large role in elections, and that might help a lot.
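To make the gamble above concrete, here is a toy expected-value sketch of the "push ahead if p(doom) seems low enough" reasoning. All numbers are illustrative assumptions of mine, not estimates from this discussion; the point is only that the acceptable p(doom) threshold falls out of how heavily extinction is weighed against the benefits.

```python
# Toy model: a decision-maker deploys if expected value is positive.
# benefit is normalized to 1; doom_cost is how many "benefit units"
# extinction is weighed as. Both are illustrative assumptions.

def expected_value(p_doom: float, benefit: float, doom_cost: float) -> float:
    """Expected value of pushing ahead, given a probability of doom."""
    return (1 - p_doom) * benefit - p_doom * doom_cost

# Break-even p(doom): solve (1 - p) * 1 - p * C = 0  =>  p = 1 / (1 + C).
for doom_cost in (5.0, 100.0):
    break_even = 1.0 / (1.0 + doom_cost)
    print(f"doom_cost={doom_cost}: break-even p(doom) = {break_even:.2%}")
```

Under these toy numbers, weighing extinction at only 5x the benefit makes a roughly 17% p(doom) look "acceptable", which is uncomfortably close to the 20% figure discussed above; weighing it at 100x pushes the threshold below 1%.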
Good points; I agree with all of them.
It’s hard to know how to weigh them.