Well, I do take issue with even people at FLI describing UFAI as having “good intentions”. It disguises a challengeable inductive inference. It certainly sounds less absurd to claim that an AI with a pleasure-maximisation goal is likely to connect brains to dopamine drips than that one with “good” intentions would do so — even if you then assert that you were only using “good” in a colloquial sense, and that you actually meant “bad” all along.