I think focusing on the risk in the "AGI achieved" branch (which is unlikely given the LLM paradigm) obscures the fact that there's x-risk in the branch where a lab aggressively RLs an LLM in a narrow domain with sufficiently powerful actuators. The labs are locked in an RL race to the bottom now, and it's not clear to me that a narrow ASI with sufficiently strong coding ability is handleable.