“Then there’s the AI regulation activists and lobbyists. They lobby and protest and stuff, pretending like they’re pushing for regulations on AI, but really they’re mostly networking and trying to improve their social status with DC People. Even if they do manage to pass any regulations on AI, those will also be mostly fake, because (a) these people are generally not getting deep into the bureaucracy which would actually implement any regulations, and (b) the regulatory targets themselves are aimed at things which seem easy to target (e.g. training FLOP limitations) rather than actually stopping advanced AI. The activists and lobbyists are nominally enemies of OpenAI, but in practice they all benefit from pushing the same narrative, and benefit from pretending that everyone involved isn’t faking everything all the time.”

Coming from a legal/compliance background, I do not necessarily disagree with this. If you look at any of my profiles, I constantly complain about “performative compliance” and “compliance theatre”, both painfully present across the legal and governance sectors.
That said: can you provide examples of activism or regulatory efforts that you do agree with? What does a “non-fake” regulatory effort look like?
I don’t think it would be fair to dismiss your take entirely, but it would be great to see what solutions you’d propose as well. This is why I disagree in principle: there are no specific points to engage with.
In Europe, paradoxically, some of the people “close enough to the bureaucracy” who pushed for the AI Act to include GenAI providers were OpenAI-adjacent.
But I do want to address this point:
“(b) the regulatory targets themselves are aimed at things which seem easy to target (e.g. training FLOP limitations) rather than actually stopping advanced AI”
BigTech is too powerful to lobby against. “Stopping advanced AI” per se would contravene many market regulations (unless we define exactly what “advanced AI” means and can demonstrate undeniable dangers to people’s lives). Regulators can only prohibit the development of products up to a certain point; they cannot arbitrarily decide to “stop” the development of a technology. That said, the AI Act does already prohibit many types of AI systems: Article 5: Prohibited AI Practices | EU Artificial Intelligence Act.
Those are considered to create unacceptable risks to people’s lives and human rights.
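
To make the “easy to target” point concrete: the AI Act’s systemic-risk presumption for general-purpose AI models kicks in at 10^25 training FLOPs (Article 51). Here is a minimal sketch of how that bright-line test works, assuming the common ~6 × parameters × tokens estimate for dense transformer training compute; both the threshold reading and the approximation are simplifications, not legal advice:

```python
# Minimal sketch: why a training-FLOP threshold is an "easy to target" metric.
# Assumes the EU AI Act's 10^25 FLOP presumption for systemic-risk GPAI models
# (Article 51) and the rough ~6 * params * tokens estimate for dense
# transformer training compute. Both are approximations.

EU_AI_ACT_FLOP_THRESHOLD = 1e25  # Art. 51(2): presumption of systemic risk


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per param per token."""
    return 6.0 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute crosses the Act's bright line."""
    return estimated_training_flops(n_params, n_tokens) >= EU_AI_ACT_FLOP_THRESHOLD


# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: "
      f"{presumed_systemic_risk(70e9, 15e12)}")
# ~6.3e24 FLOPs: just under the threshold.
```

Which, incidentally, is why point (b) lands: a single compute number is trivially measurable, and a training run can just as trivially be planned to stay under it.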