Actually, even if LLMs do scale to AGI, a civilisation run by AGI may still be unlikely to emerge. The current state of the world's energy industry and computing technology might not allow an AGI to carry out the many tasks needed to sustain that energy industry in the first place. Optimizing the AGI would require making it more energy-efficient, which seems to push it toward neuromorphic designs; that in turn could imply that the AGIs running the civilisation would be split into many separate brains, resemble humanity, and be easily controllable. Does this consideration lower p(doom | misaligned AGI) from 1 to some unknown value?