Why do you propose to call it 'destroying itself' and 'suicidal', though?
What is left of your argument if we ban a priori special treatment of the t coordinate by the AI (why should it care about the length of the bliss in time rather than the volume of the bliss in space?), and the use of loaded concepts to which our own intelligence has a strong aversion, like 'destroying itself'?
No, I’m discussing a variety of different behaviors people call “wireheading” that might emerge from different AI architectures, in the alternative.
Also, btw, for the FAI there's the problem that it may want to wirehead you.
Of the ways an AI could go bad, wireheading everyone is a fairly mild one.
It's easy to go too far: a perfect wireheaded bliss is an end state, and there's no way but downhill once you are on top of the hill. End state as in no further updates of any note; the clock ticking, perhaps, and that's it.