Really? As far as I can tell, the consensus in favor of Bayesian updating and expected utility maximization among professional philosophers is near-total. Most of them haven’t heard of UDT yet, but on Less Wrong and at SIAI there also seems to be a consensus that UDT is, if not quite right, at least on the right track.
From my (anecdotal but varied) experience talking to professional philosophers about these topics, I’d estimate (off the cuff) that 80% are not familiar with expected utility maximization (in the sense of multiplying each outcome’s probability by its utility) or with Bayesian updating. Of the rest, a significant portion think that the Bayesian approach to probability is wrong or nonsensical, or that “expected utility maximization” is obviously wrongheaded because it sounds like Utilitarianism.
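For readers in that 80%, here is a minimal sketch of the two ideas named above. The numbers are made up purely for illustration; this is not anyone's formal treatment, just the arithmetic the terms refer to.

```python
def expected_utility(outcomes):
    """Expected utility: sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * u for p, u in outcomes)

def bayes_update(prior, likelihood, marginal):
    """Bayesian updating: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# A hypothetical gamble: 30% chance of utility 10, 70% chance of utility -1.
eu = expected_utility([(0.3, 10.0), (0.7, -1.0)])  # 0.3*10 + 0.7*(-1) = 2.3

# Updating a 50% prior on evidence twice as likely under the hypothesis:
# P(E) = P(E|H)P(H) + P(E|~H)P(~H) = 0.8*0.5 + 0.4*0.5 = 0.6
posterior = bayes_update(prior=0.5, likelihood=0.8, marginal=0.6)  # 2/3
```

Note that this is distinct from Utilitarianism as a moral theory: the utilities here are whatever the agent happens to value, not aggregate welfare.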
“Utilitarianism” is a term for a specific concept hogging a perfectly good name that could be used for something more general: utility-based decision making.
That matches my anecdotal and varied experience, and as we know, the singular of anecdote is ‘update’ and the plural is ‘update more’.
Should I quote you for this one, or was it someone else originally?