Thank you: that paper is extremely relevant, and I appreciate the link.
To reiterate, mostly for my own benefit:
As Tegmark says, whether we're talking about a foundation for ethics, or a "final goal," or we simply want to not be confused about what's worth wanting, we need to figure out what makes one brain-state innately preferable to another, and ultimately this boils down to arrangements of particles. But what makes one arrangement of particles superior to another? (This is not to give credence to moral relativism; I do believe this has a crisp answer.)
These are great questions. I’m not sure they have answers. But they seem extremely pertinent to making a good AGI.
Tegmark's paper (http://arxiv.org/pdf/1409.0813.pdf) seems to be poking in the same direction.
Neglecting these questions is, IMO, tantamount to moral relativism or nihilism.