To a first approximation, my futurism is time acceleration: the risks are the typical risks sans AI, but on a hyperexponential timescale à la Roodman. Even a more gradual takeoff would imply more risk to global stability on faster timescales than anything we've experienced in history; the wrong winners of the AGI race could create various dystopias.
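To make "hyperexponential" concrete: in Roodman's style of gross-world-product modeling, the growth rate itself rises with the level of output, so doubling times shrink and the closed-form trajectory diverges in finite time. The sketch below uses the simplest deterministic version of that idea, dy/dt = a·y^(1+ε); the constants and function names are illustrative assumptions, not Roodman's fitted model.

```python
# Illustrative sketch of hyperexponential (power-law) growth,
# dy/dt = a * y**(1 + eps), whose solution diverges at a finite time T.
# All constants here are made up for illustration, not fitted to data.

def blowup_time(y0: float, a: float, eps: float) -> float:
    """Finite time T at which the analytic solution diverges."""
    return 1.0 / (a * eps * y0**eps)

def trajectory(y0: float, a: float, eps: float, t: float) -> float:
    """Closed-form solution y(t) = y0 * (1 - t/T)**(-1/eps), valid for t < T."""
    T = blowup_time(y0, a, eps)
    assert t < T, "solution diverges at t = T"
    return y0 * (1.0 - t / T) ** (-1.0 / eps)

def doubling_time(y0: float, a: float, eps: float, t: float) -> float:
    """Time for y to double starting from y(t); shrinks as t approaches T."""
    T = blowup_time(y0, a, eps)
    return (T - t) * (1.0 - 2.0 ** (-eps))

if __name__ == "__main__":
    y0, a, eps = 1.0, 0.05, 0.5
    T = blowup_time(y0, a, eps)  # T = 40.0 with these constants
    for t in [0.0, 0.5 * T, 0.9 * T, 0.99 * T]:
        print(f"t={t:6.2f}  y={trajectory(y0, a, eps, t):12.2f}  "
              f"doubling time={doubling_time(y0, a, eps, t):6.3f}")
```

The point of the sketch is the qualitative behavior: unlike plain exponential growth, each successive doubling takes less time than the last, which is what makes "the usual risks, but faster" a distinct risk profile.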
I can’t point to such a site. However, you should be aware of AI Optimists (I’m not sure if Jacob plans to write there), and follow the work of Quentin Pope, Alex Turner, Nora Belrose, etc. I expect such a site would point out what they feel are the most important risks. I don’t know of anyone rational, no matter how optimistic, who doesn’t think there are substantial ones.
Do you have a post or blog post on the risks we do need to worry about?
No, and that’s a reasonable ask.