For example, I expect both of them to happen after you’ve already died if you didn’t solve AI alignment, so that interval doesn’t affect strategic questions about AI alignment.
Ah, gotcha. That wasn't clear to me, and it reframes the disagreement pretty considerably for me (your position, as I now understand it, makes much more sense to me). Will think on that.
(You had said "crazy things are happening," but I assumed this was "the sort of crazy thing where you can't predict what will happen" vs. "the crazy thing where most humans are dead.")
I'm actually fairly curious what you consider plausible scenarios in which I might be dead before overwhelmingly superior intelligence is at play.