I think it’s time for more people in AI Policy to start advocating for an AI pause.
It seems very plausible to me that we could be within 2-5 years of recursively self-improving AGI, and before then we might see an AGI-light computer virus (think ChaosGPT v2).
Pausing AI development actually seems like a pretty reasonable proposition to most ordinary people. And regulation plays to the US government's strengths: its regulatory capacity is one of its most functional pieces, and bureaucrats put in charge of regulating something are naturally inclined to slow progress down.
A pause would need to target both hardware and software: strict limits on training new state-of-the-art models, plus a program restricting sales of graphics cards and other hardware capable of training frontier models.