Isn’t this just a case of the values the Roomba was designed to maximize being different from the values it actually maximizes? Consider the following:
We could consider simple optimizers like Roombas as falling into the category of “mind without a clear belief/values distinction”: they certainly do a lot of signal processing, feature extraction, and control theory, but they don’t really have values. By analogy, a human will happily have sex with a condom, executing the adaptation as though it were maximizing fitness when it plainly isn’t.
i.e. Roombas are program executers, not cleanliness maximizers.
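To make the program-executer/maximizer contrast concrete, here is a minimal sketch (Python, with a made-up one-dimensional world; neither agent reflects real Roomba firmware): the first just executes a fixed suck-and-wander reflex with no represented goal anywhere in it, while the second keeps its beliefs (a dirt map) separate from its values (more clean cells) and chooses actions to serve them.

```python
# A toy sketch of the distinction, not real Roomba firmware: one agent
# blindly executes a reflex program, the other explicitly represents a
# cleanliness objective. The 1-D world and both policies are invented
# for illustration.
import random

DIRT, CLEAN = 1, 0

def reflex_roomba(world, pos, steps):
    """Program executer: suck, then wander. 'Cleanliness' appears
    nowhere as a represented goal; the behavior just is the program."""
    for _ in range(steps):
        world[pos] = CLEAN                                  # reflex: suck
        pos = max(0, min(len(world) - 1,                    # reflex: random
                         pos + random.choice([-1, 1])))     # bump-and-turn
    return world

def cleanliness_maximizer(world, pos, steps):
    """Maximizer: beliefs (a dirt map) are distinct from values
    (number of clean cells), and actions are chosen to serve the value."""
    for _ in range(steps):
        world[pos] = CLEAN
        dirty = [i for i, cell in enumerate(world) if cell == DIRT]
        if not dirty:
            break                     # can recognize the goal is achieved
        nearest = min(dirty, key=lambda i: abs(i - pos))
        pos += 1 if nearest > pos else -1                   # move toward dirt
    return world

if __name__ == "__main__":
    random.seed(0)
    start = [DIRT, CLEAN, DIRT, CLEAN, CLEAN, DIRT, CLEAN, DIRT]
    print("reflex:   ", reflex_roomba(list(start), 0, 8))
    print("maximizer:", cleanliness_maximizer(list(start), 0, 8))
```

The contrast shows up under perturbation: move the dirt around and the reflex agent’s behavior doesn’t change, because there is no value it is tracking, while the maximizer’s behavior does.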
I suppose the counter is that humans don’t have a clear belief/values distinction either, and yet we would still say humans have values.