If you are really convinced that
1) AGI is coming really fast.
2) Work on alignment has basically no chance of breaking through in time.
3) Unaligned AGI results in quick and complete annihilation of humankind.
4) You firmly believe in utilitarianism/consequentialism.
which seems to me to be Eliezer’s model,
then you should focus your efforts on launching an all-out nuclear war between the USA and China, which would be very unlikely to destroy humanity.
You could even move MIRI to New Zealand or somewhere similar so that work on alignment can continue after the nuclear exchange.
See the similar comment here.
Personally, I think that we can do better than starting a nuclear war (which, after all, just delays the problem, and probably leaves civilization in an even WORSE place to solve alignment when the problem eventually rears its head again—although your idea about disaster-proofing MIRI and other AI safety orgs is interesting), as I said in a reply to that comment. Trying to reduce Earth’s supply of compute (including through military means), and do other things to slow down the field of AI (up to and including the kind of stuff that we’d need to do to stop the proliferation of Nick Bostrom’s “easy nukes”) seems promising. Then with the extra time that buys, we can make differential progress in other areas:
Alignment research, including searching for whole new AGI paradigms that are easier to align.
Human enhancement via genetic engineering, BCIs, brain emulation, cloning John Von Neumann, or whatever.
Better governance tech (prediction markets, voting systems, etc), so that the world can be governed more wisely on issues of AI risk and everything else.
But just as I said in that comment thread, “I’m not sure if MIRI / LessWrong / etc want to encourage lots of public speculation about potentially divisive AGI ‘nonpharmaceutical interventions’ like fomenting nuclear war. I think it’s an understandably sensitive area, which people would prefer to discuss privately.”
Trying to reduce the amount of compute risks increasing hardware overhang once that compute is rebuilt. I think trying to slow down capabilities research (e.g. by getting a job at an AI lab and being obstructive) is probably better.
edit: meh, I don't know. Whether or not this improves things depends on how much compute you can destroy and for how long, on ML scaling, on politics, etc. But the current world of "only big labs with lots of compute budget can achieve SOTA" (arguable, but possibly more true in the future), and the scarcity of easy ways to get better performance (i.e., scaling), both seem good.
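The overhang worry can be made concrete with a toy model. All growth rates below are invented for illustration, not estimates: the point is only that if algorithmic progress continues while destroyed hardware is rebuilt faster than the original frontier grew, the largest year-over-year capability jump gets bigger, i.e. progress becomes more discontinuous.

```python
# Toy model of hardware overhang (all numbers invented for illustration).
# Capability proxy: hardware * algorithmic efficiency.

YEARS = 10
ALGO_GROWTH = 1.5      # assumed yearly multiplier on algorithmic efficiency
HW_GROWTH = 1.4        # assumed yearly growth of frontier compute
REBUILD_GROWTH = 1.8   # assumed faster growth when rebuilding known designs

# Algorithmic efficiency improves identically in both scenarios.
software = [ALGO_GROWTH ** t for t in range(YEARS + 1)]

# Scenario A: no intervention, compute grows smoothly.
hw_smooth = [HW_GROWTH ** t for t in range(YEARS + 1)]

# Scenario B: 90% of compute destroyed at t=0, then rebuilt quickly.
hw_rebuilt = [0.1 * REBUILD_GROWTH ** t for t in range(YEARS + 1)]

cap_a = [h * s for h, s in zip(hw_smooth, software)]
cap_b = [h * s for h, s in zip(hw_rebuilt, software)]

# Largest year-over-year capability jump in each scenario.
jump_a = max(cap_a[t] / cap_a[t - 1] for t in range(1, YEARS + 1))
jump_b = max(cap_b[t] / cap_b[t - 1] for t in range(1, YEARS + 1))
# jump_b > jump_a: the rebuild scenario has larger, more discontinuous
# jumps, which is the overhang worry.
```

Of course, whether the rebuild rate actually exceeds the counterfactual growth rate is exactly the empirical question the comment above flags.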
So, start making the diplomatic situation around Taiwan as bad as possible? ;)