I believe that it is practically impossible to systematically and consistently assign utility to world states. I believe that utility cannot even be grounded, and therefore cannot be defined. I don’t think that anything like “human preferences”, and therefore human utility functions, exists, apart from purely theoretical, highly complex, and therefore computationally intractable approximations. I don’t think that there is anything like a “self” that can be used to define what constitutes a human being, not practically anyway. And I don’t believe that it is practically possible to decide what is morally right and wrong in the long term, not even for a superintelligence.
Strange stuff.
Surely “right” and “wrong” make the most sense in the context of a specified moral system.
If you are using those terms outside such a context, it usually implies some kind of moral realism—in which case, one wonders what sort of moral realism you have in mind.