Thank you for sharing your thoughts! My responses:
(1) I believe most historical advocacy movements have required more time than we might have for AI safety. More comprehensive plans might speed things up. It could be valuable to examine which methods have produced rapid success in the past.
(2) Absolutely.
(3) Yeah, raising awareness seems like it might be a key part of most good plans.
(4) All paths leading to victory would be great, but I think even plans that would most likely fail are still valuable. They illuminate options and tie ultimate goals to concrete action. I find it very unlikely that failing plans are worse than no plans. Perhaps high standards for comprehensive plans have contributed to the current shortage of plans. “Plans are worthless, but planning is everything.” Naturally I will aim for all-paths-lead-to-victory plans, but I won’t be shy about putting ideas out there that don’t live up to that standard.
(5) I don’t currently have much influence, so the risk would be sacrificing inclusion in future conversations. I think it’s worth the risk.
I would consider it a huge success if the ideas were filtered through other orgs, even if they only help make incremental progress. In general, I think the AI safety community might benefit from having comprehensive plans to discuss, critique, and iterate on over time. It would be great if I could inspire more people to try.
(1) I agree, but I’m not confident that this alternative approach will result in faster progress. I hope I’m proven wrong.
(4) Also agreed, but I think this hinges on whether the failing plans are attempted in a way that closes off other plans, either by influencing future planning efforts or by shaping reactions to subsequent efforts.
(5) Fair enough.