I wrote about this on the EA Forum a few days ago, and I’m glad others are starting to think about it. Archiving all existing alignment work is very important, perhaps as important as efforts to keep alive the people who constitute the field’s existing expertise and talent. It would be far better for them to continue their work than for newcomers to try to pick up where they left off, since much of what matters, such as intuitions honed over years, may not be readily learnable.
I’m increasingly inclined to think that a massive near-term “shock” (such as a nuclear war or a severe pandemic) that effectively halts economic progress, perhaps for a few decades or more, and then restarts it from a lower baseline may be one of the few remaining scenarios in which we can reasonably expect to survive AGI, given the grim strategic situation Eliezer outlined in his recent sequence. Such a world might especially favour alignment: AI work (prosaic AI in particular) appears far more capital-intensive than alignment work, so in a post-shock world with less capital available it would be disadvantaged or impossible to carry out at all. Morbid as it is, there are other reasons such a catastrophic shock might actually increase our collective odds of success on AI risk, such as a greatly reduced population implying fewer AGI projects and weaker race pressures.
Given this, the OP’s project is doubly important.
Assuming you truly hold the beliefs stated above, why shouldn’t I worry that you’ll try to deliberately induce such a “shock,” and thereby take action that kills a significant percentage of the currently living population?
(Apologies for being horribly blunt; I’m not sure how else to word this.)