It is unnecessary to postulate that CEOs and governments will be “overthrown” by rogue AI. Board members in the future will insist that their company appoint an AI to run the company because they think they’ll get better returns that way. Congressmen will use them to manage their campaigns and draft their laws. Heads of state will use them to manage their militaries and police agencies. If someone objects that their AI is really unreliable or doesn’t seem to share their values, someone else on the board will say “But $NFGM is doing the same thing; we obviously need to stay competitive with them” and that will be the end of the debate. Deep technical safety concerns about mesa-optimizers will not even be brought up during the meeting. AIs will just slowly capture all of our institutions and begin to write and enforce our laws because we design and build them for that purpose. We are actually that stupid.
You should not say that this is the only concern; in fact you should explicitly state that it’s not the only one. But you should mention this first, because it’s way more understandable to lots of people than the idea that superintelligent machines will have hard power and manage to overturn the federal government directly, for some reason.