Using “good” to refer only to what is actually good is, however, vastly better as far as precision goes. What I am taking issue with here is the careless equivocation between maximising pleasure and good intentions. A correct description of the “nanny AI” scenario would read something like this:
[The AI] has bad intentions (it was programmed to maximise human pleasure), and indeed by using its superior intelligence it successfully achieves that goal and does in fact maximise human pleasure—by connecting all human brains up to dopamine drips.
Of course it is true that an AI programmed to do what is good would most likely increase happiness (and even pleasure) to some extent, but to conclude from this that the two are interchangeable is pure folly.