Actually, the issue is technical terms vs. normal usage.
When using technical terms, it is important to stick to the convention.
In normal usage, however, we rely on context to supply disambiguating information.
The word “intention” is not a technical term. And in the context in which I used it, the meaning was clear to most people on LW who commented.
For clarity: the phrase was meant to rule out the type of AI whose goals say something like “Kill my enemies and make your creator rich” or “Destroy all living things”. Those would not be AIs with “good intentions,” because they would have been deliberately set up to do bad things.
Most people who write about these scenarios use one or another choice of words to indicate that the issue being considered is whether an AI that was programmed with “prima facie good intentions” might nevertheless carry out those intentions in such a way as to actually do something that we humans consider horrible. Different commentators have chosen different ways to get that idea across: some said “good intentions,” none to my knowledge said “prima facie good intentions,” and many used some other very similar form of words. In all of the essays, news reports, and papers I have seen, there is some attempt to convey the idea that we are not addressing an overtly evil AI.
As I said, in almost all cases, commenters have picked that usage up straight away.