This is an important point. AI alignment/safety organizations take money as input and write very abstract papers as their output, which usually have no immediate applications. I agree it may appear very unproductive.
However, if we think from first principles, a lot of other things are like that. For instance, when you go to school, you study the works of Shakespeare, you learn to play the guitar, and you learn how Spanish pronouns work. These things appear to be a complete waste of time. If 50 million students in the US spend 1 hour every day on these kinds of activities, and each hour is valued at only $10, that’s roughly $180 billion/year.
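As a rough back-of-the-envelope check on that figure (assuming the hour is spent every day of the year, not just school days):

$$50{,}000{,}000 \text{ students} \times 1\,\tfrac{\text{hour}}{\text{day}} \times \$10/\text{hour} \times 365\,\tfrac{\text{days}}{\text{year}} \approx \$182\ \text{billion/year}.$$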
But we know these things are not a waste of time: in hindsight, when you follow students as they grow up, this seemingly pointless work somehow helps them later in life.
Lots of things appear useless but turn out to be valuable in hindsight, for reasons outside the intuitive set we evolved to understand.
Studying the nucleus of the atom might seem like a useless curiosity if you didn’t know it would lead to nuclear energy. There were no real-world applications for a long time, and then suddenly there were enormous ones.
Pasteur’s studies on fermentation might appear limited to modest winemaking improvements, but they led to the discovery of germ theory, which has saved countless lives.
The Stone Age people who studied weird rocks may have discovered obsidian and copper. Those who studied the strange seeds that plants produce may have discovered agriculture.
We don’t know how valuable this alignment work is. We should cope with that uncertainty probabilistically: if there is a 50% chance it will help us, the expected benefit per dollar is halved, but that doesn’t reduce the ideal level of spending to zero.
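As a toy illustration (the 50% figure and the benefit/cost symbols here are assumptions for the sake of the example, not estimates): if safety work has probability $p$ of mattering and would deliver benefit $B$ if it does, then spending cost $C$ has expected benefit

$$\mathbb{E}[\text{benefit}] = p \cdot B,$$

so the expected benefit-to-cost ratio is $pB/C$. Halving $p$ halves that ratio, but the spending remains worthwhile whenever $pB/C > 1$; with $p = 0.5$, that only requires $B > 2C$.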
It’s true that risk alone isn’t a good way to decide budgets. You’re even more correct that convincing-sounding demands for money are something politicians learn to ignore out of necessity.
But while risk alone isn’t a good way to decide budgets, you have to admit that lots of budget items exist to address risk. For example, flood barriers address hurricane and typhoon risk. Structural upgrades address earthquake risk. Some preparations also address pandemic risk.
If you accept that some budget items are meant to address risk, shouldn’t you also accept that the amount of spending should be somewhat proportional to the amount of risk? In that case, if the risk of NATO being invaded is comparable to the risk from rogue AGI, then the military spending to protect against invasion and the spending to protect against rogue AGI should be comparable too.
I admit that politicians might not be rational enough to understand this, and there is a substantial probability this statement will fail. But it is still worth trying. The cost is a mere signature and the benefit may be avoiding a massive miscalculation.
Making this statement doesn’t prevent others from making an even better statement. Many AI experts have signed multiple statements, e.g. the “Statement on AI Risk” and “Pause Giant AI Experiments.” Some politicians and members of the public are more convinced by one argument, while others are more convinced by another, so it helps to have different kinds of arguments backed by many signatories. Encouraging AI safety spending doesn’t conflict with encouraging AI regulation. I think the competition between different arguments isn’t actually that bad.