Eliezer, sharing causal parentage with us sounds like a plausible heuristic for ranking things in terms of similarity to us, but in many important senses an AI could share a great deal of causal parentage with us. So you still need a more detailed argument to rank AI low.
Which AI? A Friendly AI shares goal-systemic shape with us due to a direct causal link: humans successfully shaping the FAI. The law against deciding what you want applies when you don't have control over the outcome; for intermediate cases, where you have partial control, you have to switch on decide-what-you-want for the decision nodes and switch it off for everything else. You can be optimistic about FAI to the extent you expect nice humans to exert successful shaping influence on it.
Even so, I’ve got to say that I don’t think an FAI would be one-tenth as anthropomorphic as most “AIs” depicted in even high-class science fiction; if an FAI had any sort of understandable appearance it would be because that was what we needed, not by its nature.
I don’t want to play burden-of-proof tennis, but I’m not sure how to avoid it in a case like this: human-nice outcomes tend to occur when human-nice goal systems are at work shaping them. The causal link seems obvious enough. In the presence of a paperclip maximizer this causal link is deleted and the outcome reverts to default-alien. If you say that, even in the absence of a human-nice goal system, the link to nice outcomes is maintained, and you don’t say why, and you demand that I prove it can’t happen, I’m not sure what else to say, except for the obvious: you are made of materials that can be used to make paperclips. If this isn’t what you’re saying, and it probably isn’t, I’m afraid you’ll have to spell it out in more detail.
Spindizzy and sophiesdad, I’ve spent quite a while ramming headlong into the problem of preventing the end of the world. Doing things the obvious way has a great deal to be said for it; but it’s been slow going, and some help would be nice. Research help, in particular, seems likely to require someone who reads all this stuff at the age of 15 and then studies on their own for 7 years after that, so I figured I’d better get started on the writing now.