I notice that I’m a bit confused, especially when reading, “programming a machine superintelligence to maximize pleasure.” What would this mean?
It also seems like some arguments are going on in the comments about the definitions of “like”, “pleasure”, “desire”, etc. I’m tempted to ask everyone to pull out the taboo game on these words here.
A helpful direction I see this article pointing toward, though, is how we personally evaluate an AI’s behavior. Of course, by no means does an AI have to mimic human internal workings 100%, so taking the way we DO work, how can we use that knowledge to construct an AI that interacts with us in a good way?
I don’t know what “good way” means here, though. Still, that’s an excellent question/point I got from the article.
You might be interested in the Allais paradox, an example of humans demonstrating preferences that don’t maximize any utility function. If you’re aware of the von Neumann–Morgenstern characterization of utility functions, this becomes clearer than it would be from just knowing what a utility function is.
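To make that concrete, here’s a small sketch (the lottery payoffs and probabilities are the standard Allais setup; the brute-force search over utility assignments is just my illustration, not anything canonical) showing that the commonly observed choice pattern can’t be rationalized by any assignment of utilities to the three outcomes:

```python
import random

def eu(lottery, u):
    """Expected utility of a lottery: list of (probability, outcome) pairs."""
    return sum(p * u[x] for p, x in lottery)

# The classic Allais lotteries (outcomes in millions of dollars).
A1 = [(1.00, 1)]                          # $1M for sure
B1 = [(0.89, 1), (0.10, 5), (0.01, 0)]    # 89% $1M, 10% $5M, 1% nothing
A2 = [(0.11, 1), (0.89, 0)]               # 11% $1M, 89% nothing
B2 = [(0.10, 5), (0.90, 0)]               # 10% $5M, 90% nothing

# Most people pick A1 over B1, and B2 over A2. Search random utility
# assignments (with u(0) < u(1M) < u(5M)) for one consistent with both.
random.seed(0)
found = False
for _ in range(100_000):
    lo, mid, hi = sorted(random.random() for _ in range(3))
    u = {0: lo, 1: mid, 5: hi}
    if eu(A1, u) > eu(B1, u) and eu(B2, u) > eu(A2, u):
        found = True
        break

print(found)  # False: no utility function rationalizes both choices
```

The search always fails because algebraically EU(A1) − EU(B1) = 0.11·u(1M) − 0.10·u(5M) − 0.01·u(0), which is exactly the negative of EU(B2) − EU(A2), so both strict preferences can never hold at once under expected utility — which is the whole point of the paradox.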