Well, an aligned Singularity would probably be relatively pleasant, since the entities fueling it would consider causing this sort of vast distress a negative and try to avoid it. Indeed, if you trust them not to drown you, there would be no need for this sort of frantic grasping-at-straws.
An unaligned Singularity would probably also be more pleasant than this, since the entities fueling it would likely try to make it look aligned, and the span of time between the treacherous turn and everyone dying would likely be short.
This scenario covers a sort of “neutral-alignment/non-controlled” Singularity, where there’s no specific superintelligent actor (or coalition) in control of the whole process, and it’s instead guided by… market forces, I guess? AGI labs would continually release new models for private/corporate use, providing the tools/opportunities you can try to grasp at to avoid drowning. I think this is roughly how things would go under “mainstream” models of AI progress (e.g., AI 2027). (I don’t expect it to actually go this way; I don’t think LLMs can power the Singularity.)