How have we updated p(doom) on the idea that LLMs are very different from hypothesized AI?
Actually, what were your predictions? "Hypothesized AI", as far as I understood you, is only the final step: AGI that kills us. The path to it can be very weird. I think that before GPT, many people could say "the peak of my probability distribution lies on model-based RL as the path to AGI", but they still had very long, fat tails in that distribution.
It seems like we're spending all the weirdness points on preventing the training of a language model that, at the end of the day, will be only slightly better than GPT-4.
The point of slowing down AI is not to prevent the training of the next model; the point is to slow down AI. There is no right moment in the future to slow down AI, because there is no fire alarm for AI (i.e., there is no formally defined capability threshold that can logically convince everyone to halt AI development until we solve the alignment problem). The right moment is "right now", and that has been true for every moment since we realized that AI could kill us all (sometime in the 1960s?).