Good point, I should have made those two separate bullet points:
Then there are the AI regulation lobbyists. They lobby and stuff, pretending that they’re pushing for regulations on AI, but really they’re mostly networking and trying to improve their social status with DC People. Even if they do manage to pass any regulations on AI, those will also be mostly fake, because (a) these people are generally not getting deep into the bureaucracy which would actually implement any regulations, and (b) the regulatory targets themselves are aimed at things which seem easy to target (e.g. training FLOP limitations) rather than at actually stopping advanced AI. The activists and lobbyists are nominally enemies of OpenAI, but in practice they all benefit from pushing the same narrative, and from pretending that everyone involved isn’t faking everything all the time.
Also, there are the AI regulation activists, who e.g. organize protests. Like ~98% of protests in general, such activity is mostly performative, and not the sort of thing anyone would end up doing if they were seriously reasoning through how best to spend their time in order to achieve policy goals. Calling it “fake” feels almost redundant. Insofar as these protests have any impact, it’s via creating an excuse for friendly journalists to write stories about the dangers of AI (itself an activity which mostly feeds the narrative, and has dubious real impact).
(As with the top level, epistemic status: I don’t fully endorse all this, but I think it’s a pretty major mistake to not at least have a model like this sandboxed in one’s head and check it regularly.)
Oh, if you’re in the business of compiling a comprehensive taxonomy of ways the current AI thing may be fake, you should also add:
Vibe coders and “10x’d engineers”, who (on this model) would be falling into one of the failure modes outlined here: producing applications/features that didn’t need to exist, creating pointless code bloat (which helpfully shows up in productivity metrics like “volume of code produced” or “number of commits”), or “automatically generating” entire codebases in a way that feels magical, then spending so much time bugfixing them that it eats up ~all of the perceived productivity gains.
e/acc and other Twitter AI fans, who act like they’re bleeding-edge transhumanist visionaries/analysts/business gurus/startup founders, but who are just shitposters/attention-seekers who will wander off and never look back the moment the hype dies down.
True, but I feel a bit bad about punching that far down.