My two cents: People often rely too much on whether someone is “x-risk-pilled” and not enough on evaluating their actual beliefs/skills/knowledge/competence. For example, a lot of people could pass some sort of “I care about existential risks from AI” test without necessarily making it a priority or having particularly thoughtful views on how to reduce such risks.
Here are some other frames:
Suppose a Senator said “Alice, what are some things I need to know about AI or AI policy?” How would Alice respond?
Suppose a staffer said “Hey Alice, I have some questions about [AI2027, superintelligence strategy, some Bengio talk, pick your favorite reading/resource here].” Would Alice be able to have a coherent back-and-forth with the staffer for 15+ mins that goes beyond a surface level discussion?
Suppose a Senator said “Alice, you have free rein to work on anything you want in the technology portfolio—what do you want to work on?” How would Alice respond?
In my opinion, potential funders/supporters of AI policy organizations should be asking these kinds of questions. I don’t mean to suggest it’s never useful to directly assess how much someone “cares” about XYZ risks, but I do think that, on the margin, people tend to overrate that indicator and underrate other indicators.
Relatedly, I think people often do some sort of “is this person an EA” or “is this person an xrisk person” check, and I would generally encourage people to use this sort of thinking less. It feels like AI policy discussions are getting sophisticated enough that we can actually Have Nuanced Conversations and evaluate people less on some sort of “do you play for the Right Team” axis and more on “what is your specific constellation of beliefs/skills/priorities/proposals” dimensions.
I would otherwise agree with you, but I think the AI alignment ecosystem has been burnt many times in the past by giving a bunch of money to people who said they cared about safety without asking enough questions about whether they actually believed “AI may kill everyone, and preventing that is my number 1 priority, or close to it”.
I’m not sure we disagree. I think there are better ways to assess this than the way the “is this an xrisk person or not” tribal card often gets applied.
Example: “Among all the topics in AI policy and concerns around AI, what are your biggest priorities?” is a good question IMO.
Counterexample: “Do you think existential risk from advanced AI is important?” is a bad question IMO (especially in isolation).
It is very easy for people to say they care about “AI safety” without giving much indication of where it stands on their priority list, what sorts of ideas/plans they want to aim for, what threat models they are concerned about, whether they are the kind of person who can have a 20+ min conversation about interesting readings or topics in the field, etc.
I suspect that people would get “burnt” less if they asked these kinds of questions instead of defaulting to some sort of “does this person care about safety” frame or “is this person Part of My Tribe” thing.
(On that latter point, I rather often hear people say things like “Alice is amazing!” and then, when I ask them about Alice’s beliefs or work, they say something like “Oh, I don’t know much about Alice’s work; I just know other people say Alice is amazing!” I think it would be better for people to say “I think Alice is well-liked, but I personally do not know much about her work or what kinds of things she believes/prioritizes.”)
This seems like the opposite of a disagreement to me? Am I missing something?
Well Orpheus apparently agrees with me, so you probably understood the original comment better than I did!