I strongly believe the alignment problem is fundamentally unsolvable, another instance of an undecidable problem. I would, however, prefer to die with dignity, so I study methods of minimizing the chances of being wiped out after the advent of ASI.
My current line of research is computational neuroscience for human cognitive augmentation. I work under the admittedly flawed theory that the higher humanity's intelligence waterline, the better our chances that an ASI employs us as part of its goals instead of 'recycling' us as biomass.
As much as I agree that things are about to get really weird, that first diagram is a bit too optimistic. There is a limit to how much data humanity has available to train AI (here), and it seems doubtful we can learn to use data 1,000 times more effectively in such a short span of time. For all we know, yet another AI winter could be coming, though I don't think we will get that lucky.