What do you think about the success of AI 2027? It was very widely read, including by the vice president. That’s partly because it’s a striking narrative and, I presume, partly due to a decent amount of comms and press work. But it’s also backed by a lot of research, which took up the majority of the effort, and I think that research was instrumental to its success.
More generally, good research builds credibility, especially among other experts who themselves have a lot of credibility and can help amplify your message. Someone like Yoshua Bengio has far more influence on AI safety than a large number of dedicated AI safety advocates combined. And high-quality research could persuade the next Bengio.
I agree that people like Bengio can be very valuable assets for AI safety advocacy, although there are diminishing marginal returns: the first computer supergenius who likes your policies is transformative; the third is helpful; the tenth is mostly just a statistic in a survey and will not meaningfully change the opinions of, say, journalists or staffers about an issue.
If you think that technology or history will move far enough ahead that people like Bengio and Hinton lose their relevance, then it might be a good idea to try to convince the next Bengio to support some AI safety policies. If that’s your strategy, you should develop a short list of people who might be the next Bengio, then go find them and talk to them in person. Once you’ve identified some leading young computer scientists and some questions they’re uncertain and curious about, you can do research aimed at helping convince them to take a particular stance on those questions.
Just publishing AI governance research of general academic interest is very far removed from the goal of recruiting computer science superstars for AI x-risk advocacy.
I would call the main AI 2027 report an advocacy document: its function is to tell a compelling story about how AI could cause large harms in the very near future. Although the authors claim that a “slowdown” is not their real policy recommendation, the entire structure of the piece (a good ending behind a green button and a bad ending behind a red button) strongly pushes readers toward thinking that it would be good if the White House “centralizes compute and brings in external oversight.” The story contains enough concrete detail that particular White House officials (like the VP) could imagine doing this, so whether the authors intended it or not, the document acts as an advocacy pitch asking those officials to take these actions.
The research supporting the AI 2027 report is set back on a secondary page that readers can consult if they’re still interested after finishing the advocacy document. This is good practice, and I would like to see more researchers adopt this structure. Unfortunately, most of our research today doesn’t include an advocacy document at all, let alone make one the primary focus of media engagement.
I think we should be careful not to overestimate the success of AI 2027. “Vance has engaged with your work” is an impressive feat, but it’s still relatively far from something like “Vance and others in the administration have taken your work seriously enough to start meaningfully changing their actions or priorities based on it.” (That bar is very high, but my impression is that the AI 2027 folks would say “yeah, that’s what would need to happen in order to steer toward meaningfully better futures.”)
My impression is that AI 2027 will have (even) more success if it is accompanied by an ambitious policymaker outreach effort: lots of one-on-one meetings with relevant policymakers and staffers, writing specific pieces of legislation or EOs and forming a coalition around those ideas, publishing short FAQ memos that address misconceptions or objections the team is hearing in its meetings with policymakers, and so on.
This isn’t to say that research is unnecessary; much of the success of AI 2027 comes from Daniel (and others on the team) having dedicated much of their lives to research and deep understanding. There are plenty of Government Relations people who are decent at “general policy engagement” but will fail to provide useful answers when staffers ask things like “But why won’t we just code in the goals we want?”, “But don’t you think the real issue here is how quickly we diffuse the technology?”, or “Why don’t you think existing laws will work to prevent this?”, among a whole host of other questions.
But on the margin, I would probably have Daniel/AI 2027 spend more time on policymaker outreach and less on additional research (especially now that AI 2027 is done). There is some amount of influence one can get from the “write something thoroughly researched and hope it spreads organically” approach, and I think AI 2027 has essentially saturated it. For additional influence, I expect it will be useful for Daniel (or other competent communicators on his team) to get really good at having meetings with the ~100–1,000 most important people: understanding their worldviews, going back and forth with them, understanding their ideological or political constraints, and finding solutions, ideas, and arguments tailored to these particular individuals. This is still a very intellectual task in some ways, but it involves a lot more “having meetings” and “forming models of social/political reality” than the classic “sit in your room with a whiteboard and understand technical reality” work that we typically associate with research.