If everyone else is also unqualified because the problem is so new, and every defence they experimented with got obliterated by drone swarms, then you would agree they should just give up, and admit military risk remains a big problem but spend far less on it, right?
So if no one else knew how to counter drone swarms, and every defence they experimented with got obliterated by drone swarms…

…then by hypothesis, you’re screwed. But you’re making up this scenario, and this is where you’ve brought the imaginary protagonists to. You’re denying them a solution while insisting they should spend money on a solution.
I think that just because every defence they experimented with got obliterated by drone swarms doesn’t mean they should stop trying; they might figure out something new in the future.
It’s a natural part of life to work on a problem without any idea what the solution will be like. The first people who studied biology had no clue what modern medicine would look like, but their work was still valuable.
Being unable to imagine a solution does not prove a solution doesn’t exist.
Sure, never give up, die with dignity if it comes to that. None of that translates into a budget. Concrete plans translate into a budget.
At some point there have to be concrete plans, yes; without concrete plans nothing can happen.
I’m probably not the best person in the world to decide how the money should be spent, but one vague possibility is this:
Some money is spent on making AI labs implement risk-reduction measures, such as simply making their networks more secure against hacking, and implementing AI alignment and AI control ideas that show promise but are expensive.
Some money is given to organizations and researchers who apply for grants. Universities might study AI alignment in the same way they study other arts and sciences.
Some money is spent on teaching people about AI risk so that they’re better informed? I guess this is really hard, since the field itself disagrees on what is correct, so it’s unclear what you would teach.
Some money is saved in the form of a war chest. E.g. if we get really close to superintelligence, or catch an AI red-handed, we might take drastic measures. We might have to immediately shut down AI, but if society is extremely dependent on it we might need to spend a lot of money helping people who feel uprooted by the shutdown. In order to make a shutdown less politically difficult, people who lose their jobs may be temporarily compensated, and businesses relying on AI may be bought out rather than forced into bankruptcy.
Probably not good enough for you :/ but I imagine someone else can come up with a better plan.