Err, Bayesian probability doesn’t have anything special for morality either. People on LW tend to be moral non-realists, i.e., people who deny that there is objective moral knowledge, if that’s what you’re talking about (not sure, sorry!), but that’s completely orthogonal to this discussion: there’s nothing in Bayesianism that leads inevitably to non-realism. (Also, I’m not convinced that moral realism is right, so saying “Bayesianism leads to moral non-realism” isn’t a very effective argument against it.)
Bayesian epistemology doesn’t create moral knowledge because it only functions when fed observation data (or assumptions). I get a lot of conflicting statements here, but some people tell me that they only care about prediction, that they are instrumentalists, that this is what the Bayes stuff is for, and that they don’t regard it as a bad thing that it doesn’t address morality at all.
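The point that Bayesian updating is only a pipeline from assumptions to conclusions can be made concrete. Here is a minimal sketch (all names are illustrative, not from any particular library): the posterior is entirely determined by the prior and the observed data, so nothing comes out that wasn’t fed in as an assumption or an observation.

```python
from math import comb

def bayes_update(prior, likelihood, data):
    """Return the posterior P(H | data) for each hypothesis H.

    prior: dict mapping hypothesis -> P(H)  (an assumption, fed in)
    likelihood: function (hypothesis, data) -> P(data | H)
    """
    unnormalized = {h: p * likelihood(h, data) for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# Toy example: is a coin fair, or biased 90% toward heads?
prior = {"fair": 0.5, "biased": 0.5}  # starting assumption, not derived

def likelihood(h, heads):
    p = 0.5 if h == "fair" else 0.9
    # probability of `heads` heads in 3 independent flips
    return comb(3, heads) * p**heads * (1 - p)**(3 - heads)

posterior = bayes_update(prior, likelihood, 3)  # observed 3 heads out of 3
# The machinery ranks hypotheses, but only relative to the prior we chose;
# swap the prior for a different one and the posterior changes with it.
```

Notice that nothing in this machinery tells you which hypotheses or which prior to use; it only transforms the assumptions and observations you supply, which is exactly why it has nothing to say about morality on its own.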
Now what you have in mind, I think, is that if you make a ton of assumptions, you could then talk about morality using Bayes. Popperism doesn’t require a bunch of arbitrary starting assumptions to create moral knowledge; it can just deal with morality directly.
If I’m wrong, explain how you would figure out, e.g., which moral values are good ones to hold (without assuming a utility function or something).
As I tried to say (and probably explained really poorly, sorry!), the LW consensus is that morality is not objective. Therefore, the project of figuring out what good moral values would be is, according to moral non-realism, impossible: any judgment about what a good moral value is must rely on your pre-existing values, if there is no objective morality out there to be discovered. Using this as a criticism of Bayesianism is sorta like criticizing thermodynamics because it says it’s impossible to exactly specify the position and velocity of every particle: not only is the criticism unrelated to the subject matter, but satisfying it would require the theory to do something that is, to the best of our knowledge, incorrect.