> Humanity can probably also survive an AI accident.
> There is no reason to assume that any AI failure would lead by default to a catastrophic scenario where evil robots who look suspiciously like the former governor of California hunt down the last survivors of humanity after most of it has been wiped out by a global nuclear attack.
Not any failure. But the existence of failure modes like superintelligent paperclip maximizers is sufficient to make this technology different.
> But it depends on how likely this failure mode is. Are we talking about something like the possibility of the nuclear tests igniting the atmosphere, or the LHC creating a black hole?
Actually, those are excellent examples. Those possibilities were ruled out theoretically. No one was crazy enough to check them experimentally first.
> They were ruled out (with some probability of error) theoretically, but only once people already had working designs, and using knowledge obtained from experimentation on smaller designs.
> Nobody here is suggesting wiring the first experimental AGI to nuclear missile launch systems. The point is that you need a good idea of what a working AGI design will look like before you can say anything meaningful about its safety.
> Experimentation, with reasonable safety measures, will most likely be needed before a full-fledged design can be produced.
It appears to me that before you start any given experiment, you must already have sufficient theoretical backing to believe that this particular experiment is safe.