Has any serious AI Safety research org thought about situating themselves so that they could continue to function after a nuclear war?
Wait, hear me out.
A global thermonuclear war would set AI timelines back by at least a decade, for all of the obvious reasons. So an AI Safety org that survived would have additional precious years to work on the alignment problem, compared to orgs in the worlds where we avoid that war.
So it seems to me that at least one org with short timelines ought to relocate to New Zealand, or at least move farther away from cities.
(Yes, I know MIRI was pondering leaving the Bay Area for underspecified reasons. I’d love to know what their thinking was regarding this effect, but I don’t expect they’d reveal it.)
I’m not sure I understand this well enough for it to even be a crux, but what’s the intuition behind this?
My guess at your argument is that you see it as analogous to the way a CNN beats a fully-connected network at image recognition: it massively cuts down the space of possible models, in a way that’s compatible with the known structure of the problem.
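To make that inductive-bias point concrete, here’s a rough back-of-the-envelope sketch (my own illustration, not anything from the original comment; the 224×224 RGB input, 64 output channels, and 3×3 kernel are arbitrary but typical choices) comparing how many parameters the two layer types need to produce the same-sized output:

```python
# Illustrative parameter-count comparison: one fully-connected layer vs. one
# convolutional layer, both mapping a 224x224x3 image to a 224x224x64 output.

def dense_params(in_h, in_w, in_c, out_units):
    """Fully-connected: every output unit sees every input pixel (weights + biases)."""
    return in_h * in_w * in_c * out_units + out_units

def conv_params(in_c, out_c, k):
    """Convolution: each output channel reuses one small k x k filter everywhere."""
    return k * k * in_c * out_c + out_c

# Fully-connected layer producing a 224x224x64 activation map.
fc = dense_params(224, 224, 3, 224 * 224 * 64)

# 3x3 convolution producing the same 224x224x64 map.
conv = conv_params(3, 64, 3)

print(f"fully-connected: {fc:,} parameters")   # ~483 billion
print(f"convolution:     {conv:,} parameters") # 1,792
```

The saving comes from weight sharing and local connectivity, which is exactly the “known structure of the problem” (translation invariance, local features) being baked into the architecture.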
But that raises the question: why are these biology-inspired networks more likely to be better representations of general intelligence than something like transformers? Genuinely curious what you’ll say here.
(The wisdom of evolution only carries so much weight for me, because the human brain operates under constraints, like the physical co-location of neurons, that prevent evolution from building things artificial architectures can.)