Couple of things:
If you want to facilitate communication, I recommend that you stop using the word “friendly” in this context on this site. There’s a lot of talk on this site of “Friendly AI”, by which is meant something relatively specific. You are using “friendly” in the more general sense implied by the English word. This is likely to cause rather a lot of confusion.
You’re right that if strategy 1 optimizes for good stuff happening to everyone I care about, while strategy 2 optimizes for good stuff happening to everyone whether I care about them or not, then strategy 1 (if pursued powerfully enough) will result in people I don’t care about having good stuff taken away from them, while strategy 2 will result in everyone I care about getting less good stuff than they would under strategy 1 (a toy numeric sketch of this trade-off follows below).
You seem to be saying that I therefore ought to prefer that strategy 2 be implemented, rather than strategy 1. Is that right?
You seem to be saying that you yourself prefer that strategy 2 be implemented, rather than strategy 1. Is that right?
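A toy sketch of the trade-off described in the second point above. The numbers and the zero-sum "fixed pool of good stuff" framing are purely illustrative assumptions made here, not anything asserted in the exchange itself.

```python
# Toy illustration of the strategy 1 vs. strategy 2 trade-off, assuming a
# fixed, zero-sum pool of "good stuff" to allocate (an assumption made for
# illustration only).

TOTAL_GOOD = 100.0   # hypothetical fixed amount of good stuff to allocate
N_CARED_FOR = 10     # people I care about
N_OTHERS = 90        # everyone else

def strategy_1():
    """Optimize only for the people I care about: they get the whole pool."""
    return {"cared_for": TOTAL_GOOD / N_CARED_FOR, "others": 0.0}

def strategy_2():
    """Optimize for everyone: the pool is shared across all people."""
    per_person = TOTAL_GOOD / (N_CARED_FOR + N_OTHERS)
    return {"cared_for": per_person, "others": per_person}

print(strategy_1())  # {'cared_for': 10.0, 'others': 0.0} -- others get nothing
print(strategy_2())  # {'cared_for': 1.0, 'others': 1.0}  -- my in-group gets less
```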
Fair enough. I will read the wiki.
Yes
Not saying anything about your preferences.
Nope, I’m saying strategy 2 is better for humanity. Of course, personally I’d prefer strategy 1, but I’m honest enough with myself to know that certain individuals would find their utility functions severely degraded if I had an all-powerful AI working for me. And if I don’t trust myself to be in charge, then I don’t trust any other human unless it’s someone like Gandhi.