I answered the following subquestion to help me answer the overall question: “How likely is it that the condition Rohin specified will not be met by 2100?”
This could happen due to any of the following non-mutually exclusive reasons:
1. Global catastrophe before the condition is met, such that people are no longer thinking about AI safety (e.g. human extinction or the end of civilization): 50%
2. The condition is met, but only after the timeframe (mostly, I’m imagining that AI progress is slower than I expect): 5%
3. AGI is successfully built despite the condition never being met: 30%
4. There’s some huge paradigm shift that makes AI safety concerns irrelevant (maybe most people become convinced that we’ll never build AGI, or our focus shifts from AGI to some other technology): 10%
5. Some other reason: 20%
I thought about this subquestion before reading the comments or looking at Rohin’s distribution. Based on that thinking, I thought that there was a 60% chance that the condition would not be met by 2100.
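As a rough sanity check on that 60%, the sketch below combines the five reasons as if they were unconditional and mutually independent. That is an assumption of mine, not something stated above: the reasons are explicitly non-mutually exclusive and probably correlated.

```python
# Sanity check: combine the five reasons as if they were unconditional,
# mutually independent probabilities (an assumption -- the reasons above
# are explicitly non-mutually exclusive and likely correlated).
probs = [0.50, 0.05, 0.30, 0.10, 0.20]  # reasons 1-5 above

p_none = 1.0
for p in probs:
    p_none *= 1.0 - p  # probability this particular reason does not occur

p_at_least_one = 1.0 - p_none
print(f"P(condition not met by 2100) = {p_at_least_one:.0%}")  # ~76%
```

That this naive independence calculation comes out above my 60% is consistent with my expectation that the reasons overlap substantially rather than occurring independently.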
I answered the following subquestion to help me answer the overall question: “How likely is it that the condition Rohin specified would already be met (if he went out and talked to the researchers today)?”
Considerations that make it more likely:
1. The considerations identified in ricaz’s and Owain’s comments and their subcomments
2. The bar for understanding safety concerns (question 2 on the “survey”) seems like it may be quite low. It seems to me that researchers entirely unfamiliar with safety could gain the required level of understanding in just 30 minutes of reading (though this depends on how Rohin would interpret his conversation with the researcher when deciding whether to mark “Yes” or “No”)
Considerations that make it less likely:
1. I’d guess that currently, most AI researchers have no idea what any of the concrete safety concerns are, i.e. they’d be “No”s on question 2
2. The bar for question 3 on the “survey” (“should we wait to build AGI”) might be pretty high. If someone thinks that some safety concerns remain but that we should cautiously move forward on building things that look more and more like AGI, does that count as a “Yes” or a “No”?
3. I have the general impression that many AI researchers really dislike the idea that safety concerns are serious enough that we should in any way slow down AI research
I thought about this subquestion before reading the comments or looking at Rohin’s distribution. Based on that thinking, I thought that there was a 25% chance that the condition Rohin specified would already be met.
Note: I work at Ought, so I’m ineligible for the prizes