A strict and swift non-permanent pause is certainly one option. Use very aggressive regulation. Perspires slightly.
Pause all development and research everywhere, all at once. This wouldn't be a total stop, and it isn't anti-accelerationism; it just slows the rate of acceleration for a while.
Unpause when sufficient safety research has been conducted, safety measures have been identified and agreed upon, and those measures can be practically implemented.
This should take approximately 5-50 years. If the world can mobilize the way it did during the COVID-19 pandemic, 5 years feels reasonable.
The world isn’t so bad that we can’t wait for AGI—or whatever the actual goal of the rat race is.
I would suggest that once paused, the following strategy be used to unpause.
Define what “safe” is, technically.
Announce an incentive: the first organization to create a “safe” AI system is granted some first-mover protections, something like one year of operation before the guidelines of the safe system are released.
The Chicken of Tomorrow contest, but for AI: helping future people, with less delicious flavor.
Implement strict policies and monitoring around AI research and development, similar to those governing nuclear weapons manufacturing.