Horizon Institute for Public Service is not x-risk-pilled
Someone saw my comment and reached out to say it would be useful for me to make a quick take/post highlighting this: many people in the space have not yet realized that Horizon people are not x-risk-pilled.
Edit: some people reached out to me to say that they’ve had different experiences (with a minority of Horizon people).
My sense is Horizon is intentionally a mixture of people who care about x-risk and people who broadly care about “tech policy going well”. IMO both are laudable goals.
My guess is Horizon Institute has other issues that make me not super excited about it, but I think this one is a reasonable call.
Importantly, AFAICT some Horizon fellows are actively working against x-risk (pulling the rope backwards, not sideways). So Horizon’s sign of impact is unclear to me. For a lot of people, “tech policy going well” means “regulations that don’t impede tech companies’ growth”.
My two cents: People often rely too much on whether someone is “x-risk-pilled” and not enough on evaluating their actual beliefs/skills/knowledge/competence. For example, a lot of people could pass some sort of “I care about existential risks from AI” test without necessarily making it a priority or having particularly thoughtful views on how to reduce such risks.
Here are some other frames:
Suppose a Senator said “Alice, what are some things I need to know about AI or AI policy?” How would Alice respond?
Suppose a staffer said “Hey Alice, I have some questions about [AI2027, superintelligence strategy, some Bengio talk, pick your favorite reading/resource here].” Would Alice be able to have a coherent back-and-forth with the staffer for 15+ mins that goes beyond a surface-level discussion?
Suppose a Senator said “Alice, you have free rein to work on anything you want in the technology portfolio—what do you want to work on?” How would Alice respond?
In my opinion, potential funders/supporters of AI policy organizations should be asking these kinds of questions. I don’t mean to suggest it’s never useful to directly assess how much someone “cares” about XYZ risks, but I do think that on-the-margin people tend to overrate that indicator and underrate other indicators.
Relatedly, I think people often apply some sort of “is this person an EA” or “is this person an x-risk person” test, and I would generally encourage people to use this sort of thinking less. It feels like AI policy discussions are getting sophisticated enough that we can actually Have Nuanced Conversations and evaluate people less on some sort of “do you play for the Right Team” axis and more on “what is your specific constellation of beliefs/skills/priorities/proposals” dimensions.
I would otherwise agree with you, but I think the AI alignment ecosystem has been burnt many times in the past by giving a bunch of money to people who said they cared about safety, without asking enough questions about whether they actually believed “AI may kill everyone” and treated that as at or near their number 1 priority.
I’m not sure if we disagree— I think there are better ways to assess this than the way the “is this an xrisk person or not” tribal card often gets applied.
Example: “Among all the topics in AI policy and concerns around AI, what are your biggest priorities?” is a good question IMO.
Counterexample: “Do you think existential risk from advanced AI is important?” is a bad question IMO (especially in isolation).
It is very easy for people to say they care about “AI safety” without giving much indication of where it stands on their priority list, what sorts of ideas/plans they want to aim for, what threat models they are concerned about, if they are the kind of person who can have a 20+ min conversation about interesting readings or topics in the field, etc.
I suspect that people would get “burnt” less if they asked these kinds of questions instead of defaulting to some sort of “does this person care about safety” frame or “is this person Part of My Tribe” thing.
(On that latter point, it is rather often that I hear people say things like “Alice is amazing!” and then when I ask them about Alice’s beliefs or work they say something like “Oh I don’t know much about Alice’s work— I just know other people say Alice is amazing!”. I think it would be better for people to say “I think Alice is well-liked but I personally do not know much about her work or what kinds of things she believes/prioritizes.”)
What leads you to believe this?
FWIW this is also my impression but I’m going off weak evidence (I wrote about my evidence here), and Horizon is pretty opaque so it’s hard to tell. A couple weeks ago I tried reaching out to them to talk about it but they haven’t responded.
Datapoint: I spoke to one Horizon fellow a couple of years ago and they did not care about x-risk.
Talking to many people.
As in, Horizon fellows / people who work at Horizon?
Some of those; and some people who talk to those.