I have a very simple opinion about why pushes for AI regulation fail. Ready?
Because nobody knows what they are asking for, or what they are asking for is absurd.
Here are the absurdities:
“Stop all AI research forever”
“Stop all AI research until it’s safe to do, and I say when it’s safe (I will never say it’s safe)”
“Stop all AI research except mine, where I sit around and think about it”
“Pause all AI research until we have a functional genetic engineering project so our smarter descendants can do AI research”
Once we get past the absurd asks, we get into easily co-opted asks, like “restrict compute”, which turns into “declare an arms race with the largest industrial power on the planet”, and “monitor training runs”, which turns into “tell every potential competitor you are going to have the government crush them”.
What will convince me there is a sane bloc pushing for AI-related regulations is when they propose a regulation that is sane. I cannot emphasize enough how much these efforts have failed at the drafting stage. The part everyone is allegedly good at, that part, where you write down the thing it might actually be a good idea to do? That’s the part where this has failed.
At a meta level, “publishing, in 2025, a public complaint about OpenPhil’s publicly promoted timelines and how those may have influenced their funding choices” does not seem like it serves any defensible goal.
Let’s suppose the underlying question is “why did OpenPhil give money to OpenAI in 2017?” (Or, conversely, why did they not give money to some other venture in a similar timeframe?) Why is this, right now, particularly important? What plausible goal is served by trying to answer this question more precisely?
If it’s because they had long timelines, it tells you that short-timeline arguments were not effective, which hopefully everyone already knows. This has been robustly demonstrated across most meaningful groups of people controlling either significant money or government clout. It is not new information. I would not update on this.
If they did this because they had short timelines, then they believed whatever Sam was selling at the time. I would not update on this either. It is hopefully well understood, by now, that Sam is good at selling things. “You could parachute him into an island full of cannibals and come back in 5 years and he’d be the king.”
If they did this for non-timeline reasons, I might update on, idk, some nebulous impression of how OpenPhil’s bureaucracy worked before the year 2020 or so, or on how good Sam (or another principal) was at convincing people to give them money. I don’t see how either of those is an important fact about the world.
Generally, my model is that when people do not seem to be behaving optimally, they are behaving close to optimally for something, but that something is not the goal I imagine they are pursuing. I am imagining a goal like “being able to influence future events more effectively”, but I can’t see how that’s served here, so I imagine we’re optimizing for something else.