Why would being a lead AI scientist make somebody uninterested in small talk? Working on complex/important things doesn’t cause you to stop being a regular adult with regular social interactions!
The question of what proportion of AI scientists would be “interested” in such a conversational topic is interesting and tough; my guess is very high (~85 percent). To become a “lead AI scientist” you have to care a lot about AI and the science surrounding it, and that generally implies you’ll enjoy talking about it and its potential harms/benefits with others! Even if their attitude toward x-risk rhetoric is dismissive, that attitude is likely important to them, since it amounts to a moral stance: being a capabilities-advancing AI researcher with a high p(doom) is problematic. You can draw a parallel with vegetarianism/veganism: if you eat meat, you have to choose between defending the morality of factory farming, accepting that you are acting amorally, or living with severe cognitive dissonance. Likewise, if you are an AI capabilities researcher, you have to choose between defending the morality of advancing AI (downplaying x-risk), accepting that you are acting amorally, or living with severe cognitive dissonance. I would be extremely surprised if there were a large coalition of top AI researchers who simply “have no opinion” or “don’t care” about x-risk, though this is mostly intuition and I’m happy to be proven wrong!