This previous LessWrong article seems extremely relevant and basically sketches out an example of the rough “strategic portfolio” for AI risk that you are arguing for.
In line with some of my recent posts, I’m starting to think there is a lot of value in:
Clearly defining a consensus group strategy (among LW/EA/CG, for example) on “making the future go well.” This should include rough estimates from a variety of respected sources, a diverse portfolio of interventions, and explicitly communicated uncertainty / epistemic humility.
Designing info-UI tools to facilitate that process: enabling effective deliberation, strategy adjustment, and, maybe most importantly, easy-to-use interfaces for the general public. The goal is to convey community beliefs and disagreements in a transparent, easy-to-understand way.
This is intentionally unspecific, but I have outlined a couple of particular ideas in previous posts and will continue to crystallize my suggestions and explain why I think this area has potential.
The AI futures model and related ecosystem is a great start, but it is limited to a handful of thinkers (Daniel and Eli) and a specific subset of information (forecasting timelines). Their work has already been quite impactful (apparently read by JD Vance) – why not work hard to apply and scale good information-interface design to broader community strategy?