This discussion inspired me to create an app highlighting the assumptions that lead me to care more about s-risk than about dying from AI (I consider myself likely to experience partial singularities, but not AIs that harvest my atoms).
https://singularities-we-survive.streamlit.app/
It was just quickly whipped up by AI, so I haven't put much thought into the numbers myself. The shape of the argument is what interested me.
If it were my day job to think about this stuff, I could put more time into it. It would be great if it were someone's job to create things like this that we could argue with and improve (or to run our own versions that could be queried by people proposing policy interventions).
I'm not going to share it widely as is.
A variety of models have been posted where you can adjust the probability assigned to various assumptions (1 2 3 4 5).
For anything more sophisticated than that, I’m not aware of a structured protocol that is superior to free discussion on a forum.