This could solve several long-standing, thorny problems in consequentialism: wireheading, the fiendish difficulty of defining happiness/utility, and the way even the tiniest mistake in that definition can prove catastrophic under precise optimisation.
“optimise human preference” (with appropriate humility about what that means, implied) solves this better.
While I do think optionality is more definable than utility, it’s still not trivial. I have ideas on how to calculate it, but not full clarity yet; a rough sketch is below. I’m reaching out to find more people who have thoughts in this direction: do any of you already believe that the greatest good might come from giving the greatest number of (meaningfully different) choices to agents?
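To make that concrete, here is a minimal toy sketch of one direction I’ve been playing with (emphatically a sketch, not a settled proposal): treat optionality as the number of meaningfully distinct states an agent can reach within some horizon. Every name here is a placeholder I’m assuming for illustration, especially the `distinct` function that decides which outcomes count as “meaningfully different”.

```python
from collections import deque

def optionality(start, neighbors, horizon, distinct):
    """Crude optionality proxy: count meaningfully distinct states
    reachable from `start` within `horizon` steps.

    neighbors: state -> iterable of successor states (toy, deterministic)
    distinct:  collapses a set of states into equivalence classes of
               'meaningfully different' outcomes (the hard part!)
    """
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        state, depth = frontier.popleft()
        if depth == horizon:
            continue  # don't expand past the planning horizon
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return len(distinct(seen))

# Toy example: states are integers; actions add or subtract 1 or 3.
# 'Meaningfully different' is (hypothetically) bucketing by tens,
# standing in for whatever preference-laden notion is really needed.
if __name__ == "__main__":
    moves = lambda s: [s + 1, s - 1, s + 3, s - 3]
    buckets = lambda states: {s // 10 for s in states}
    print(optionality(0, moves, horizon=4, distinct=buckets))
```

The `distinct` function is doing all the real work: counting raw states would reward meaningless variety, so some notion of which differences matter has to be smuggled in there.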
I’m fairly sure you’re going to need to assume a notion of human preference to tell you which choices are meaningful to humans.