I mean, if it’s about looking for post-hoc rationalizations, what’s even the point of pretending there’s a consistent ethical system?
Hmm, I would not describe it as rationalization in the motivated reasoning sense.
My model of this process is that my ethical intuitions are mostly a black box and often contradictory, but they still, in the end, contain a lot more information about what I deem good than any of the explicit reasoning I am capable of. If, however, I find an explicit model which manages to explain my intuitions sufficiently well, I am willing to update or override my intuitions.
I would, in the end, accept an argument that goes against some of my intuitions if it is strong enough. But I will also strive to find a theory which manages to combine all the intuitions into a functioning whole.
In this case, I have an intuition towards negative utilitarianism, which really dislikes utility monsters, but I have also noticed a tendency to land closer to symmetric utilitarianism when I use explicit reasoning. Because of this, the likely options are that after further reflection I:
1. would be convinced that utility monsters are fine, actually;
2. would come to believe that there are strong utilitarian arguments for a policy against utility monsters, such that in practice they would almost always be bad; or
3. would shift in some other direction;
and my intuition for negative utilitarianism would prefer cases 2 or 3.
So the above description is what was going on in my mind, and that, combined with the always-present possibility that I am bullshitting myself, led to the formulation I used :)