To answer your second question: an obscurantist might want to act as if it did not know certain propositions, but CEV extrapolates desires on the basis of knowledge that might include those same propositions, the ignorance of which constitutes a core part of the obscurantist’s identity.
What definition of CEV do you use that gets around the “were more the people we wished we were (...) extrapolated as we wish that extrapolated, interpreted as we wish that interpreted” part of CEV (page 6 here), which would block such an extrapolation against the obscurantist’s desires?
CEV against the obscurantist’s desires is a contradictio in terminis.
What definition of CEV do you use that gets around the “were more the people we wished we were (...) extrapolated as we wish that extrapolated, interpreted as we wish that interpreted” part of CEV (page 6 here), which would block such an extrapolation against the obscurantist’s desires?
None, as I simply don’t get around that part of CEV.
CEV against the obscurantist’s desires is a contradictio in terminis.
Indeed it is, but so could be CEV of the obscurantist’s desires in the first place; that’s one of the issues I’m raising, to which I genuinely don’t know the answer.
To see how that could happen, consider the following analogy. Let q ::= “all literals in this conjunction are true” in the unsatisfiable conjunction ‘p ∧ ¬p ∧ q’; here ‘p’ stands for “if we knew more”—a statement taken from the same paragraph you quoted—while ‘¬p’ and ‘q’ stand for consequences of the remaining requisites of CEV.
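The unsatisfiability claim in the analogy can be checked mechanically. A minimal sketch, treating q as an ordinary propositional variable rather than a self-referential statement (an assumption for illustration; the self-reference doesn't change the result, since p ∧ ¬p already poisons the conjunction):

```python
from itertools import product

def conjunction(p: bool, q: bool) -> bool:
    # The analogy's formula: p AND (NOT p) AND q.
    # 'q' abbreviates "all literals in this conjunction are true";
    # here it is modelled as a plain variable.
    return p and (not p) and q

# Brute-force check over all four truth assignments.
satisfying = [(p, q) for p, q in product([False, True], repeat=2)
              if conjunction(p, q)]
print(satisfying)  # [] — no assignment satisfies it; the conjunction is unsatisfiable
```

Because p ∧ ¬p is false under every assignment, adding q (or any further requisite) cannot rescue the conjunction, which is the structural point of the analogy.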
I’m not sure how this chimes with “Do this force us to renounce to the idea of personal CEV [emphasis mine]? Hardly so.”
There are infinitely many possible ways of extrapolating desires. But if you don’t get around the “more the people we wished we were” part (etc.), then let’s not call your musings on extrapolation “CEV”, because they don’t fit the major criteria.
If an obscurantist (or anyone else for that matter) does not wish for his desires to change in any way, there is no personal CEV for him. Simple as that.
There may be other sensible ways of extrapolating / streamlining a utility function. That’s an open question, and one that’s much bigger than just CEV; the CEV part (as it’s defined) is often answered easily enough.
If there’s no personal CEV for certain obscurantists, then we are left with a theory that’s supposed to tell us how to make people happy—i.e. CEV—and the example of an agent who cannot be made happy through their personal CEV—i.e. an obscurantist. Since the whole point of CEV is desire-satisfaction, if that fails to occur then the proposal isn’t exactly fulfilling its role. You’re correct that my musings aren’t only about CEV, as they relate to the bigger question of what a correct desire-satisfaction theory of well-being is, which in turn might require figuring out how to extrapolate utility functions.