If you spend 8000 times less on AI alignment than on the military, you must also believe that AI risk is 8000 times smaller than military risk.
Why?
We know how to effectively spend money on the military: get more of what we have and do R&D to make better stuff. The only limit on effective military spending is all the other things that the money is needed for, i.e. having a country worth defending.
It is not clear to me how to buy AI safety. Money is useless without something to spend it on. What would you buy with your suggested level of funding?
I think that’s a very important question, and I don’t know exactly what we should buy.
However, suppose that not knowing what to spend on dramatically decreases the total amount you should spend (e.g. by 10x). If that were really true in general, then imagine a country with a large military discovering that its enemies are building very powerful drone swarm weapons, which can cheaply and easily destroy all of its tanks, aircraft carriers, and so forth.
Military experts are all confused and disagree about how to counter these drone swarms, just like the AI alignment community. Some of them say that resistance is futile and the country is “doomed.” Others have speculative ideas like using lasers. Still others say that lasers are stupid, because the enemy can simply launch the swarms in bad weather and the lasers won’t reach them. Just like with AI alignment, there are no proven solutions, and every defence tested against the drone swarms is destroyed pathetically.
Should the military increase or decrease its budget, given that no one knows what money can buy to counter the drone swarms?
I think the moderate, cool-headed response is to spend a similar amount, exploring all the possibilities, even without any ideas that are proven to work.
Uncertainty means the expected risk reduction is high
If we are uncertain about the nature of the risk, we might assume that with 50% probability, spending more money reduces the risk by a reasonable amount (similar to risks we do understand), and possibly by even more, because we would be discovering brand new solutions rather than getting marginal gains on existing ones; and that with the other 50% probability, spending more money is utterly useless, because we are at the mercy of luck.
Therefore, the expected efficiency of spending on AI risk should be at least half the efficiency of spending on military risk, and in any case within the same order of magnitude. The argument here is about orders of magnitude, so a factor of two barely matters.
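To make that expected-value arithmetic explicit, here is a minimal sketch in Python. The 50/50 split and the efficiency numbers are the illustrative assumptions from the argument above, not estimates of the real probabilities:

```python
# Minimal expected-value sketch of the argument above.
# All numbers are illustrative assumptions, not estimates.

p_tractable = 0.5           # assumed chance that spending on AI risk works roughly as well
                            # as spending on risks we do understand
military_efficiency = 1.0   # risk reduction per dollar for well-understood (military-style)
                            # spending, normalized to 1 in arbitrary units
useless_efficiency = 0.0    # assumed efficiency if we are simply at the mercy of luck

expected_ai_efficiency = (p_tractable * military_efficiency
                          + (1 - p_tractable) * useless_efficiency)

print(expected_ai_efficiency)  # 0.5, i.e. within the same order of magnitude as 1.0
```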
If increasing the time for alignment by pausing AI can work, so can increasing the money for alignment
Given that we effectively have a race between capabilities and alignment, the relative spending on capabilities and alignment seems important.
A 2x decrease in capabilities funding should be similar in effect to a 2x increase in alignment funding, or at worst to a 4x increase.
The only case where decreasing capabilities funding works far better than increasing alignment funding is if we decrease capabilities funding all the way to zero, using extremely forceful worldwide regulation and surveillance. But that would also require governments to freak out about AI risk (i.e. prioritize it as highly as military risk), so that path, too, would benefit from this letter.
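As a toy illustration of that race framing, here is a minimal sketch, assuming (purely for illustration) that the time until superintelligence scales inversely with capabilities funding and that alignment progress per year scales linearly with alignment funding. Neither functional form comes from the argument above; they are just the simplest assumptions that make the comparison concrete:

```python
# Toy model of the capabilities/alignment race (illustrative assumptions only).
# Assume: time until superintelligence  ~ 1 / capabilities_funding
#         alignment progress per year   ~ alignment_funding
# So progress achieved before the deadline ~ alignment_funding / capabilities_funding.

def alignment_progress(alignment_funding: float, capabilities_funding: float) -> float:
    """Alignment progress accumulated before superintelligence, in arbitrary units."""
    time_available = 1.0 / capabilities_funding
    progress_per_year = alignment_funding
    return progress_per_year * time_available

baseline            = alignment_progress(1.0, 1.0)  # 1.0
halved_capabilities = alignment_progress(1.0, 0.5)  # 2.0 (2x slowdown of capabilities)
doubled_alignment   = alignment_progress(2.0, 1.0)  # 2.0 (2x increase in alignment funding)

print(baseline, halved_capabilities, doubled_alignment)
```

If, less favourably, alignment progress scaled only with the square root of funding, matching a 2x capabilities slowdown would take roughly a 4x increase in alignment funding, which is the weaker version of the claim above.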
Money isn’t magic. It’s nothing more than the slack in the system of exchange. You have to start from some idea of what the work is that needs to happen. That seems to me to be lacking. Are there any other proposals on the table against doom but “shut it all down”?
Suppose you had literally no ideas at all how to counter drone swarms, and you were really bad at judging other people’s ideas for countering drone swarms. In that case, would you, upon discovering that your country’s adversaries had developed drone swarms (making your current tanks and ships obsolete), decide to give up on military spending and cut it by a factor of 100?
Please say you would or explain why not.
My opinion is that you can’t give up (i.e. admit there is a big problem but spend extremely little on it) until you have fully understood the nature of the problem with certainty.
Money isn’t magic, but it determines the number of smart people working on the problem. If I were a misaligned superintelligence, I would be pretty scared of a greater amount of human intelligence working to stop me from being born in the first place. They get only one try, but they might actually stumble across something that works.
Suppose you had literally no ideas at all how to counter drone swarms, and you were really bad at judging other people’s ideas for countering drone swarms.
In that case, I would be unqualified to do anything, and I would be wondering how I got into a position where people were asking me for advice. If I couldn’t pass the buck to someone competent, I’d look for competent people, get their recommendations, try as best I could to judge them, and turn on the money tap accordingly. But I can’t wave a magic wand and have it that where there was a pile of money, there is now a pile of anti-drone technology.
Neither can anyone in AI alignment.
If everyone else is also unqualified because the problem is so new, and every defence they experimented with got obliterated by drone swarms, then you would agree they should just give up, and admit military risk remains a big problem but spend far less on it, right?
So if no one else knew how to counter drone swarms, and every defence they experimented with got obliterated by drone swarms,
…then by hypothesis, you’re screwed. But you’re making up this scenario, and this is where you’ve brought the imaginary protagonists to. You’re denying them a solution, while insisting they should spend money on a solution.
I think that just because every defence they experimented with got obliterated by drone swarms doesn’t mean they should stop trying, because they might figure out something new in the future.
It’s a natural part of life to work on a problem without any idea what the solution will be like. The first people who studied biology had no clue what modern medicine would look like, but their work was still valuable.
Being unable to imagine a solution does not prove a solution doesn’t exist.
Sure, never give up, die with dignity if it comes to that. None of that translates into a budget. Concrete plans translate into a budget.
Yes, at some point there have to be concrete plans; without concrete plans nothing can happen.
I’m probably not the best person in the world to decide how the money should be spent, but one vague possibility is this:
Some money is spent on making AI labs implement risk reduction measures, such as simply making their networks more secure against hacking, and implementing AI alignment and AI control ideas which show promise but are expensive.
Some money is given to organizations and researchers who apply for grants. Universities might study AI alignment in the same way they study other arts and sciences.
Some money is spent on teaching people about AI risk so that they’re more educated, though this is hard: the field itself disagrees on what is correct, so it’s unclear what you would teach.
Some money is saved in the form of a war chest. E.g. if we get really close to superintelligence, or catch an AI red-handed, we might take drastic measures. We might have to immediately shut down AI, but if society is extremely dependent on it, we might need to spend a lot of money helping people who feel uprooted by the shutdown. To make a shutdown less politically difficult, people who lose their jobs may be temporarily compensated, and businesses relying on AI may be bought out rather than forced into bankruptcy.
Probably not good enough for you :/ but I imagine someone else can come up with a better plan.