Upcoming AI regulations are likely to make for an unsafer world
TL;DR: I am rather confident that involving governments in regulating AI development will make the world less safe, not more, because governments are basically incompetent and their representatives are beholden to special interests.
Governments can do very few things well, some things passably, and a lot of things terribly. We tolerate this because the alternative is generally worse. Regulations are needed to keep people and companies from burning the commons, and to create more commons. The environment, infrastructure, public health, the military, and financial stability all require enforceable regulation. Even there, governments are doing pretty poorly (remember the FDA, the CDC, and their siblings in other countries, as well as the WHO?), but a free-for-all would likely be worse.
How the proponents of AI regulation expect it to go: governments make it hard to hoard GPUs, to do unsanctioned AI research, or to train a model on the whole internet without permission. Sort of the way HIPAA in the US makes it harder to obtain someone's personal health data without their permission.
How it is likely to go: companies will pretend that their all-new, more capable, and more dangerous AI model is just a slightly improved version of the old one, to avoid costly re-certification (see the Boeing 737 MAX 8 debacle).