I agree that people like Bengio can be very valuable assets for AI safety advocacy, although there are diminishing marginal returns—the first computer supergenius who likes your policies is transformative; the third is helpful; the tenth is mostly just a statistic in a survey and will not meaningfully change the opinions of, e.g., journalists or congressional staffers about an issue.
If you think that technology or history will move far enough ahead that people like Bengio and Hinton will lose their relevance, then it might be a good idea to try to convince the next Bengio to support some AI safety policies. If that's your strategy, though, you should develop a short list of people who might be the next Bengio, and then go find them and talk to them in person. Once you've identified some leading young computer scientists and some questions they're uncertain and curious about, you can do research aimed at convincing them to take a particular stance on those questions.
Just publishing AI governance research of general academic interest is very far removed from the goal of recruiting computer science superstars for AI x-risk advocacy.