Currently, I’d estimate there are ~50 people in the world who could make a case for working on AI alignment to me that I’d think wasn’t clearly flawed. (I actually ran this experiment with ~20 people recently; 1 person succeeded.)
I wonder if this is because people haven’t optimised for being able to make the case. You don’t really need to be able to make a comprehensive case for AI risk to do productive research on AI risk. For example, I can chip away at the technical issues without fully understanding the governance issues, as long as I roughly understand something like “coordination is hard, and thus finding technical solutions seems good”.
Put differently: The fact that there are (in your estimation) few people who can make the case well doesn’t mean that it’s very hard to make the case well. E.g., for me personally, I think I could not make a case for AI risk right now that would convince you. But I think I could relatively easily learn to do so (in maybe one to three months???)
I agree you don’t need to have a comprehensive case for risk to do productive research on it, and overall I am glad that people do in fact work on relevant stuff without getting bogged down in ensuring they can justify every last detail.
I agree it’s possible that people could learn to make a good case. I don’t expect it, because I don’t expect most people to try to learn to make a case that would convince me. You in particular might do so, but I’ve heard of a lot of “outreach to ML researchers” proposals that did not seem likely to do this.
(I’ve edited the quote to say it’s 2/19.)