I always think it’s worth starting as you mean to go on. The Overton window might shift. The range of actors you might have to worry about might increase as we get more compute overhang, that kind of thing. We don’t know what the computational limits for making a superintelligence are.
But yeah more of a black swan thing than something to easily plan for.
There’s also the problem that scenarios with a hidden superintelligence are like scenarios with aliens (or with gods that do not show themselves unambiguously). If the world already contains a hidden superintelligence, it could be responsible for anything and everything.
Yes, it could be orchestrating a pause-AI movement because that’s the simplest way for it to remain hidden indefinitely, without being disturbed by rival AIs. But if that’s the case, then suspicious accelerationists who redouble their efforts to create their own super-AI should find themselves constantly failing for one reason or another, unless you suppose that this hidden superintelligence is smart enough to secretly run a powerful pause-AI movement, but not smart enough to also spot mere humans outside the movement who are nonetheless trying to create super-AI, and thwart their efforts.
The simplest thing to suppose is that once superintelligence exists, it’s game over: everything is now out of human hands. You can postulate forms of superintelligence that stay out of the way, but that seems artificial, and even if you have a superintelligence that’s just keeping quietly to itself for now, it can always take over and remake the world if something impels it to do so.
This discussion inspired me to create an app to highlight the assumptions that lead me to care more about s-risk than dying from AI (I’m likely to experience partial singularities, but not AIs that harvest my atoms).
https://singularities-we-survive.streamlit.app/
It was just quickly whipped up by AI, so I haven’t put much thought into the numbers myself. The shape of the argument is what interested me.
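For what it’s worth, the shape I have in mind is just a handful of adjustable assumptions multiplied together. Here is a minimal Python sketch of that shape (not the app’s actual model; every name and number below is a placeholder I made up to illustrate the structure):

```python
# Hypothetical sketch: chain a few adjustable assumptions into two
# comparable probabilities. All numbers are placeholders to play with,
# not claims about the actual values.

P_SUPERINTELLIGENCE = 0.8    # assumption: superintelligence gets built in my lifetime
P_MISALIGNED = 0.5           # assumption: it ends up misaligned
P_KILLS_EVERYONE = 0.6       # assumption (given misalignment): it harvests our atoms
P_KEEPS_HUMANS_AROUND = 0.3  # assumption (given misalignment): humans survive it
P_SURVIVAL_IS_BAD = 0.5      # assumption (given survival): the outcome is an s-risk

def p_death_from_ai():
    """Probability that a misaligned AI simply kills me."""
    return P_SUPERINTELLIGENCE * P_MISALIGNED * P_KILLS_EVERYONE

def p_s_risk_i_experience():
    """Probability that I live through a partial singularity gone bad."""
    return (P_SUPERINTELLIGENCE * P_MISALIGNED
            * P_KEEPS_HUMANS_AROUND * P_SURVIVAL_IS_BAD)

if __name__ == "__main__":
    print(f"P(death from AI)       = {p_death_from_ai():.2f}")
    print(f"P(s-risk I experience) = {p_s_risk_i_experience():.2f}")
```

The point is only that the comparison falls out of a few knobs you can argue about individually; turning any one of them changes which risk dominates.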
If it were my day job to think about this stuff I could put more time into it. It would be great if it were someone’s job to create things like this that we could argue with and improve (or run our own versions that could be queried by people proposing policy interventions).
I’m not going to share it widely as is for now.
There have been a variety of models posted where you can adjust the probability assigned to various assumptions (1 2 3 4 5).
For anything more sophisticated than that, I’m not aware of a structured protocol that is superior to free discussion on a forum.
That depends on the person being an Occam’s razor type of person, rather than a decision-maker under deep uncertainty or someone using some other decision-making framework.
The dangerous situation occurs if a person or group is blocked from normal collaborative development in ways that stretch normal explanation, but can still develop something potentially dangerous by themselves (due to special knowledge).
By “stretch normal explanation” I mean things like too much uniformity in the advice the blocked person gets (due to the pause movement managing to dominate the info sphere using unknown tactics).
The paused people might not know what is going on, but that uncertainty allows explanations to breed, some of which might justify building superintelligences.