Upvoted. I do think the lack of a coherent, actionable strategy, one that would actually achieve its goals if it succeeded, is a general problem of many advocacy movements, not just AI safety. A few observations:
(1) Actually-successful historical advocacy movements that solved major problems usually did so incrementally over many iterations, taking the wins they could get at each moment while putting themselves in a position to take advantage of further opportunities as they arose.
(2) Relatedly, don’t complain about incremental improvements (yours or others’). Celebrate them, or no one will want to work with you or compromise with you, and you won’t be in a position to get more wins later.
(3) Raising awareness isn’t a terminal goal or a solution, but it gives others a reason to pay attention to you at all. If you have genuinely good proposals for what to do about a problem, and are in a position to make the case that those proposals are effective and practical, then a perception that the problem is real and a solution is necessary can be very helpful. If a politician solves a major problem that is not yet a crisis, or is not seen as a crisis by their constituents, then solving it just looks like wasted money/time/effort to the people who decide whether they keep their jobs.
(4) Don’t plan a path that leads to victory; plan so that all paths lead to victory. Any single plan to achieve a sufficient outcome will require many things to go right, so it will fail for reasons you didn’t anticipate, and it will also channel your further planning along predetermined directions, limiting your ability to adapt to future opportunities and setbacks. Avoiding this failure mode is part of the point of seeking and celebrating incremental wins unreservedly and consistently, as long as those wins don’t cut off the path to further progress.
(5) Being seen to have a long-term plan that no one currently in power would support seems like a quick way to get shut out of a conversation unless you already have some form of power such that you’re hard to ignore.
I was glad to see Nate Soares talk the other day about the importance of openly discussing x-risks, and to see the recent congressional hearings that actually started asking about real AI risks, because both are openings to push the conversation in useful directions. I genuinely worry that AI safety orgs and advocates will repeat a mistake that e.g. climate change activists often make: shutting down proposals that are clearly net improvements likely to increase public support for further action, which in practice counterproductively maintains the status quo and turns people off. Last year I started openly discussing x-risk with more and more people in my life, and found them quite receptive when it came from someone they knew and trusted to be generally reasonable.
I do think there is value in having organizations around with the kinds of plans you are discussing, but I don’t think, in general, those are the ones that actually get the opportunity to make big wins. I think they serve as generators of ideas that get filtered through more incremental and ‘moderate’ organizations over time, and they make those other organizations seem like better partners to collaborate with. I don’t have good data for this; it’s more a general intuition from looking at a few historical examples.
Thank you for sharing your thoughts! My responses:
(1) I believe most historical advocacy movements have required more time than we might have for AI safety. More comprehensive plans might speed things up. It might be valuable to examine what methods have worked for fast success in the past.
(2) Absolutely.
(3) Yeah, raising awareness seems like it might be a key part of most good plans.
(4) All paths leading to victory would be great, but I think even plans that would most likely fail are still valuable. They illuminate options and tie ultimate goals to concrete action. I find it very unlikely that failing plans are worse than no plans, and perhaps high standards for comprehensive plans have contributed to the current shortage of them. “Plans are worthless, but planning is everything.” Naturally I will aim for all-paths-lead-to-victory plans, but I won’t be shy about putting out ideas that don’t live up to that standard.
(5) I don’t currently have much influence, so the risk would be sacrificing inclusion in future conversations. I think it’s worth the risk.
I would consider it a huge success if the ideas were filtered through other orgs, even if they just help make incremental progress. In general, I think the AI safety community might benefit from having comprehensive plans to discuss and critique and iterate on over time. It would be great if I could inspire more people to try.
(1) I agree, but I’m not confident that this alternative approach results in faster progress. I hope I’m proven wrong.
(4) Also agreed, but I think this hinges on whether the failing plans are attempted in a way that closes off other plans, either by distorting further planning or by souring reactions to later efforts.
(5) Fair enough.