Thank you for the effort in categorising the scenarios! I am also interested in learning what could drift mankind from one epilogue to another. And one could also consider where existing high-quality scenarios like the AI 2027 forecast[1] land on this scale. However, as I detailed in my quick take, the scenarios post-AI 2027 are mostly slop or modifications of the forecast: alternate compute assumptions or attempts to include rogue replication, both of which just change P(mutual race).
This includes modifying the Race Ending by making Agent-4 nicer or spelling out its personality and the ways in which it's misaligned, as done by me.
Yeah, I think that figuring out how to move probability mass between these scenarios is probably a good next move, although at some point I may want to revisit how I've drawn the boundaries—they seem pretty neat atm, but I think it's fairly likely the future will throw us a curveball at some point.