The words “maximise paperclips” cover many different possibilities. But any particular AI with a goal that can be described by those words will have whatever specific goal it actually has. It has no decisions to make about that, and no philosophical problems to solve, any more than an autopilot needs to wonder what an “optimal flight path” is, e.g. whether to minimise fuel consumption or flight time. It simply carries out whatever task is built into it.
What is in the mind of someone looking at the AI and what is in the AI are different things. The issues you bring up stem from the former, and amount to nothing more than the fuzziness of all language, but the latter just is whatever it is.
And if the AI is powerful enough and devotes everything to “making paperclips”, it won’t matter to us what precisely it is trying to do; it will still blithely take the atoms of our bodies for its own purposes.