Suppose you develop the first AGI. It fooms. The AI tells you that it is capable of gaining total cosmic power by hacking physics in a millisecond. (Being an aligned AI, it's waiting for your instructions before doing that.) It also tells you that the second AI project is only 1 day behind, and they have screwed up alignment.
Options:

1. Do nothing. The unfriendly AI gains total cosmic power tomorrow.
2. Lightspeed bubble of hedonium. All humans are uploaded into a virtual utopia by femtobots. The sun is fully disassembled for raw materials within 10 minutes of you giving the order.
3. Subtly break their AI. A cyberattack that stops their AI from doing anything, and otherwise has no effect.
4. Use the total cosmic power to do something powerful and scary. Randomly blow up Mars. Tell the world that you did this using AI, and that AI should therefore be regulated. Watch 20 hours of headless-chicken flailing before the world ends.
5. Blow up Mars, then use your amazing brainwashing capabilities to get regulations passed and enforced within 24 hours.
6. Something else.
Personally, I think options 2 and 3 are the ones worth considering.
In fact, before you get to AGI, your company will probably develop other surprising capabilities, and you can demonstrate those capabilities to neutral-but-influential outsiders who previously did not believe those capabilities were possible or concerning.
Which neutral but influential observers? Politicians who only know how to play signalling games and are utterly incapable of engaging with objective reality in any way? There is no cabal of powerful people who will start acting competently and benevolently the moment they get unambiguous evidence that "intelligence is powerful". A lot of the smart people who know about AI have already realized this. The people who haven't realized it will often not be very helpful. Sure, you can get a bit of a boost. You could get MIRI a bit of extra funding.
Let's work our way backwards. Imagine the future contains a utopia that lasts billions of years and contains many, many humanlike agents. Why doesn't a superintelligent AI created by the humans destroy this utopia?
- Every single human capable of destroying the world chooses not to. This requires at least a good deal of education, and quite possibly advanced brain nanotech to stop even one person from going mad.
- An unfriendly superintelligence can't destroy the world, because our friendly superintelligence keeps it in check. Sure, possible. But the longer you leave the unfriendly superintelligence running, the more risk and collateral damage. The best time to stop it is before it turns on.
- FAI in all your computers. Try to run unfriendly code and you just get an "oops, that code is an unfriendly superintelligence" error.
- Some earlier step doesn't work; e.g. there are no human-programmable computers in this world, and some force stops humans from making them.
- All the humans are too dumb. Innumerate IQ-80 humans have no chance of making AI.
- Government of humans. Building ASI would take a lot of tech development, and the human-run government puts strict limits on that. Building any neural net is very illegal. Somehow the government doesn't get replaced by a pro-AI government, even on these timescales.