governments would botch the process by not realizing the risks at hand.
To be fair, so would private companies and individuals.
It’s also possible that governments would use the AI for malevolent, totalitarian purposes.
It’s less likely, IMO, that a government would launch a completely independent, top-secret AI project with the explicit goal of “take over and optimize existence”, relying on FOOMing and a first-mover advantage.
More likely, an existing highly funded arm of the government (the military, an intelligence service, homeland security, the finance ministry) will try to build an AI that is told to further its own narrow goals. Starting from “build a superweapon”, “spy on the enemy premier”, “put down a revolution”, or “fix the economy”, all the way up to “destroy all other militaries”, “gather all information”, “control all citizens”, and “control all money”.
In such a scenario, the AI not only won’t be told to optimize for “all people” or “all nations”; it won’t even be told to optimize for “all interests of our country”.
To be fair, so would private companies and individuals.
Yes, perhaps more so. :) The main point in the post was that risks of botching the process increase in a competitive scenario where you’re pressed for time.