Interesting, thank you for sharing! As someone also newer to this space, I’m curious about estimates for the proportion of people in leading technical positions similar to “lead AI scientist” at a big company who would actually be interested in this sort of serendipitous conversation. I was under the impression that many in the position “lead AI scientist” at a big company would be either too 1) wrapped up in thinking about their work/pressing problems or 2) uninterested in mundane small-talk topics to spend “a majority of the conversation talking about [OP’s] bike seat,” but this clearly provides evidence to the contrary.
Why would being a lead AI scientist make somebody uninterested in small talk? Working on complex/important things doesn’t cause you to stop being a regular adult with regular social interactions!
The question of what proportion of AI scientists would be “interested” in such a conversational topic is interesting and tough; my guess, though, is that it would be very high (~85 percent). To become a “lead AI scientist” you have to care a lot about AI and the science surrounding it, and that generally implies you’ll like talking about it and its potential harms/benefits with others! Even if their opinion on x-risk rhetoric is dismissive, that opinion is likely important to them, since it’s somewhat of a moral stance: being a capabilities-advancing AI researcher with a high p(doom) is problematic. You can draw a parallel with vegetarianism/veganism: if you eat meat, you have to choose between defending the morality of factory farming, accepting that you are acting immorally, or living with extreme cognitive dissonance. If you are an AI capabilities researcher, you have to choose between defending the morality of advancing AI (downplaying x-risk), accepting that you are acting immorally, or living with extreme cognitive dissonance. I would be extremely surprised if there were a large coalition of top AI researchers who simply “have no opinion” or “don’t care” about x-risk, though this is mostly just intuition and I’m happy to be proven wrong!