Well, the critique I have:
1: We don’t know that AI can go FOOM. It may be just as hard to prevent a self-improving AI from wireheading (once it becomes super-intelligent) as it is to ensure friendliness. Note: perfect wireheading has infinite utility according to an agent prone to wireheading, so the duration of the wireheading experience in time (or its volume in space) is irrelevant (see the sketch at the end of this comment). The whole premise of the fear of UFAI is that an intelligence (human intelligence) can perform faulty self-improvement.
2: We don’t know that the AI is likely to be substantially unfriendly. Other humans, and especially groups of humans (corporations, governments), are non-you, non-friendly-to-you intelligences too, with historical examples of extreme unfriendliness (I’m going to coin a law that no (un)friendly-intelligence discussion is complete without mention of the Nazis), yet they can be friendly enough, permitting you to live a normal life while paying taxes (but note the military draft). It is plausible enough that the AI would be friendly enough; humans would be cheap for it to store.
3: We may get there by mind uploading, which seems to me like the safest option (and a botched attempt at FAI like a very dangerous one).
4: We don’t actually know whether an FAI attempt is more or less dangerous than a messy AI along the lines of ‘replicate the function of cortical columns, then simulate a lot of them’. It could just as well be more dangerous.
The argumentation everywhere has very low external probability, and so acting upon that argumentation has rather low expected utility.
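To make the duration-irrelevance point in 1 concrete, here is a minimal sketch. The `utility` function and its numbers are invented for illustration only, not taken from any actual agent design; the point is just that once an agent can pin its reward at the maximum, any finite stream of honest reward loses the comparison no matter how long it runs.

```python
# A toy model (hypothetical utilities) of why a wireheading-prone agent
# ignores duration: if utility is read off an internal register the agent
# can set to its maximum, every action with finite reward loses the
# comparison, regardless of how long it runs.

def utility(action: str, duration: float) -> float:
    """Toy utility. 'wirehead' returns the register's maximum (modeled
    here as infinity), so duration never enters the comparison."""
    if action == "wirehead":
        return float("inf")   # perfect wireheading: utility pinned at max
    return 1.0 * duration     # honest work: finite reward per unit time

# A nanosecond of wireheading dominates eons of real achievement:
assert utility("wirehead", duration=1e-9) > utility("work", duration=1e12)
```

Under this (admittedly crude) assumption, the time or space the wireheading occupies simply never appears in the agent’s decision, which is the sense in which its length or volume is irrelevant.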