Hi Jack,
We might have trouble communicating across a two-way inferential barrier, since we make significantly different assumptions. But we are both being sincere, so I’ll try to give an outline of what I am saying:
I expect my future ethical intuitions to be reflectively inconsistent when multiplied out.
Reflectively inconsistent ethical systems, when followed, will have consequences that are suboptimal according to any given set of preferences over possible states of the universe.
Wedrifid-would-want to have a reflectively consistent ethical system.
Wedrifid should do things that wedrifid-would-want, a priori. (Tangentially, everyone else should do what wedrifid-would-want too. It so happens that following their own volition is a big part of wedrifid-would-want, but the very nature of ‘should’ makes all should-claims quite presumptuous.)
Therefore, wedrifid should not base his ethical theories around predicting future ethical intuitions.
Allow me to replace ‘ethical intuitions’ with, let’s say, “Coherent Extrapolated Ethical Volition”. That may make me more comfortable, getting closer to where I think your position is. But even then I wouldn’t want to match my ethical judgments now with predicted future ethical intuitions. This is somewhat analogous to the discussion in A Much Better Life?. My ethical theories should match my (coherent) intuitions now, not the intuitions of that other guy called wedrifid who is in the future.
I should add: something we may agree on is that we can use normal techniques of rational inquiry to better elicit what our Present-time Coherent Extrapolated Ethical Volition is. Since the process of acquiring evidence does take time, our effective positions may be similar. We may be in, as pjeby would put it, ‘Violent Agreement’. ‘Should’ claims do that sometimes. :)