Ok, so he does admit it’s completely politically unviable because it’s probably tyrannical, or at best lesser-evil-but-still-pretty-evil. At which point I’m not even sure that refusing to say it out loud doesn’t make it sound even more ominous. The point stands: a “pivotal act” can’t possibly be a viable strategy, and its ethical soundness is altogether questionable unless it really is a forced binary choice between that and extinction.
“Outside the Overton window” ≠ “evil”. Like, “let’s defer to prediction markets on major policy choices” was well outside it for most of history, and probably still is today.
As far as I remember, “melting all GPUs” is not an actual pivotal act, precisely because it is not minimal: aligning an ASI well enough to build nanobots for this and to operate safely in the world is too hard. And I think we can conclude that an actual PA should be pretty tame, because, sure, melting all GPUs is scary and massive property destruction, but even that is nothing close to “establishing a mind-controlling surveillance dictatorship”, and a genuinely minimal act would be milder still.
Another example of a possible PA is the invention of superhuman intelligence enhancement, but that’s still not minimal.
True, but would you really be ashamed of saying “let’s defer to prediction markets on major policy choices” out loud? That might get you some laughs and wouldn’t be taken very seriously, but most people wouldn’t be outright outraged.
True to a point, but melting all GPUs is still something people would strongly object to, since you can’t even prove the counterfactual that without it we’d all be dead. And there is a more serious aspect to it: military hardware uses GPUs too, and destroying that in other countries is technically an act of war, or at least sabotage.
I doubt intelligence enhancement would solve anything. Intelligence does not equal wisdom; some fool would probably still just use it to build AGI faster.
If you can prove that it was you who stealthily melted all the GPUs using AI-developed nanotech, it should be pretty obvious that the same AI, run without safety measures, could kill everyone.
Scott Alexander once wrote that while it’s probably not wise to build an AI organisation around a pivotal act, if you find yourself in a position where you can perform one, you should, because, assuming you are not a singular genius decades ahead of the rest of the field, if you can perform a pivotal act, then someone else in AI can kill everyone.
I mean intelligence in a wide sense, including wisdom, security mindset, and self-control. And obviously, if I could build an AI capable of providing such enhancement, I would enhance myself to solve the full value-alignment problem, not hand the enhancement to random unchecked fools.
Yes, but that generalized “I can’t let someone else handle this, I’ll do it myself behind their backs” attitude is exactly how we all get offed, 100%, no pivotal acts whatsoever. It’s a delusion to think it leaves a measurable, non-infinitesimal window for actually succeeding; it does not. It simply leads to everyone racing, until eventually someone more reckless, and therefore faster, “wins”. Or, at best, it leads to a pivotal act by someone who then absolutely goes on to abuse their newfound power, because no one can be inherently trusted with that level of control. That’s the better of those two worlds, but still bad.
Not quite. If you live in a world where you can let others handle this, you can’t be in a position to perform a pivotal act, because others will successfully coordinate around not giving anyone (including you) the unilateral capability to launch an ASI. And otherwise, if you find yourself in the situation “there is a red button that melts all GPUs”, it means the others have utterly failed to coordinate, and you should pick the least bad world that remains possible.