I’m pleased this got some traction. One of my biggest concerns with AI policy development is that state-level decision makers will not recognize the threat until catastrophic damage has already been done.
Recognition of the need for chemical, biological, and nuclear warfare treaties was fairly universal, because there were real examples of the risks available for all to see. Without that tangible evidence, there’s a risk of incremental disaster, much like what we’re seeing with climate change policy.
Policy accelerationists are probably my biggest concern: a group that deliberately creates a problem in order to highlight the need to protect against even larger disasters, like the Gruinard Island contaminated-soil incidents.
Any movement toward red lines and international safeguards is a good thing.