Yeah, but most people don’t come up with a moral system that arrives at undesirable consequences in typical circumstances. Ditto for going against human intuitions/culture.
They’re different questions.
Now I’m curious. Is your answer to them different? Could you please answer both of those hypotheticals?
ETA: If your answer is different, then isn’t your morality in fact based solely on the consequences and not some innate thing that comes along with personhood?
does Alicorn’s nearest counterpart who grew up in such a world share her opinions?
Almost certainly, she does not. Otherworldly-Alicorn-Counterpart (OAC) has a very different causal history from me. I would not be surprised to find any two opinions differ between me and OAC, including ethical opinions. She probably doesn’t even like chocolate chip cookie dough ice cream.
if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context?
No. However: after an adjustment period in which I became accustomed to the new world, my epistemic state about the likely consequences of various actions would change, and that epistemic state has moral force in my system as it stands. The system doesn’t have to change at all for a change in circumstance, and its accompanying new consequential regularities, to motivate changes in my behavior, as long as I have my eyes open. This doesn’t make my morality “based on consequences”; it just means that my intentions are informed by my expectations, which are in turn shaped by inductive reasoning from the past.
I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?
A ridiculous example: If an orphanage exploded every time someone did nothing in a moral dilemma, wouldn’t OAC be likely to invent a moral system saying inaction is worse than action? Wouldn’t OAC also likely believe that inaction is inherently bad? I doubt OAC would say, “I privilege the null action, but since orphanages explode every time we do nothing, we have to weigh those consequences against that (lack of) action.”
Your right not to be killed has a list of exceptions. To me this indicates a layer of simpler rules underneath. Your preference for inaction has exceptions for suitably bad consequences. To me this seems like you’re peeking at consequentialism whenever the consequences of your deontology are bad enough to go against your intuitions.
I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?
It seems likely indeed that someone would do that.
If an orphanage exploded every time someone did nothing in a moral dilemma
I think that in this case, one ought to go about getting the orphans into foster homes as quickly as possible.
One complication I didn’t mention, and which isn’t fully fleshed out, is that in certain cases one might be obliged to waive one’s own rights, such that failing to do so is a contextually relevant wrong act and forfeits the rights anyway. It seems plausible that this could apply to cases where failing to waive some right will lead to an orphanage exploding.