I think you are making a lot of assumptions about what I think and believe. I also think you’re coming dangerously close to being perceived as a troll, at least by me.
U-ism and D-ology are rival answers to the question “what is the right way to resolve conflicts of interest?”
Oh! So that’s what they’re supposed to be? Good, then clearly neither—rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices.
The real question, of course, is how to put meaningful numbers into the game theory formula, how to calculate the utility of the agents, how to determine the correct utility functions for each agent.
My answer to this is that there is already a set of utility functions implemented in each human’s brain, and this set can itself be considered a separate sub-game. If you find solutions to all the problems in this sub-game, you end up with a reflectively coherent, CEV-like (“ideal” from now on) utility function for this one human, and that is then the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
So now what we need is better insights and more research into what these sets of utility functions look like, how close to completion they are, and how similar they are across different humans.
Note that I’ve never even heard of a single human capable of knowing or always acting on their “ideal utility function”. All sample humans I’ve ever seen also have other mechanisms interfering or taking over, so that they don’t always act even according to their current utility set, let alone their ideal one.
I don’t know why you would want to say you have an explanation of morality when you are an error theorist. (...) I also don’t know why you are an error theorist.
I don’t know what being an “error theorist” entails, but you claiming that I am one seems valid evidence that I am one so far, so sure. Whatever labels float your boat, as long as you aren’t trying to sneak in connotations about me or committing the noncentral fallacy. (notice that I accidentally snuck in the connotation that, if you are committing this fallacy, you may be using “worst argument in the world”)
And I can’t say that f, m and a mean something in f=ma? When you apply maths, the variables mean something. That’s what application is. In U-ism, the input is happiness, or life-years, or something, and the output is a decision that is put into practice.
Sure. Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?
The formulation for f=ma is that the force applied to an object is equal to the product of the object’s mass and its acceleration, for appropriate units of measurement. You can verify this experimentally by literally pushing objects. If we ran a well-designed, controlled experiment and more massive objects suddenly started accelerating more than less massive objects under the same force, or more generally the physical behavior didn’t correspond to that equation, the equation would be false.
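As a toy illustration of that falsifiability criterion, here is a minimal sketch, with made-up measurement values, of checking recorded data against f=ma:

```python
# Toy falsification check for f = m*a: flag any (force, mass, acceleration)
# measurement that deviates from the equation by more than the tolerance.
# The data values below are invented for illustration only.

def violates_newton(force, mass, accel, tol=0.05):
    """Return True if the measurement is inconsistent with f = m*a."""
    predicted = mass * accel
    return abs(force - predicted) > tol * max(abs(force), 1e-9)

measurements = [
    (10.0, 2.0, 5.0),   # consistent: 2.0 kg * 5.0 m/s^2 = 10.0 N
    (10.0, 2.0, 9.0),   # inconsistent: this would falsify the equation
]

results = [violates_newton(f, m, a) for f, m, a in measurements]
```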
Oh! So that’s what they’re supposed to be? Good, then clearly neither—rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices
Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and assuming that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don’t.
My answer to this is that there is already a set of utility functions implemented in each human’s brain, and this set can itself be considered a separate sub-game. If you find solutions to all the problems in this sub-game, you end up with a reflectively coherent, CEV-like (“ideal” from now on) utility function for this one human, and that is then the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
No. You can’t leap from “a reflectively coherent CEV-like [..] utility function for this one human” to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them, or trading them off.
I don’t know what being an “error theorist” entails,
Strictly speaking, you are a metaethical error theorist. You think there is no meaning to the truth or falsehood of metaethical claims.
Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?
Any two theories which have differing logical structure can have truth values, since they can be judged by coherence, etc., and any two theories which make different object-level predictions can likewise have truth values. U and D pass both criteria with flying colours.
And if CEV is not a meaningful metaethical theory, why bother with it? If you can’t say that the output of a grand CEV number-crunch is what someone should actually do, what is the point?
The formulation for f=ma is that the force applied to an object is equal to the product of the object’s mass and its acceleration, for appropriate units of measurement. You can verify this experimentally by literally pushing objects.
I know. And you determine the truth values of other theories (e.g. maths) non-empirically. Or you can use a mixture. How were you proposing to test CEV?
Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and assuming that anyone can take any amount of loss for the greater good... i.e. assuming all the stuff that utilitarians assume and that their opponents don’t.
(...)
No. You can’t leap from “a reflectively coherent CEV-like [..] utility function for this one human” to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them, or trading them off.
Two individual interests: making paperclips and saving human lives. A Prisoner’s Dilemma between the two. Is there any sort of theory of morality that will “solve” the problem or do better than number-crunching for Pareto optimality?
Even things that cannot be quantified can be quantified. I can quantify non-quantifiable things with “1” and “0”. Then I can count them. Then I can compare them: I’d rather have Unquantifiable-A than Unquantifiable-B, unless there’s also Unquantifiable-C, so B < A < B+C. I can add any number of unquantifiables and/or unbreakable rules, and devise a numerical system that encodes all my comparative preferences, in which higher numbers are better. Then I can use this to find numbers to put on my Prisoner’s Dilemma matrix or any other game-theoretic system and situation.
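For instance, the comparative scheme described above can be sketched with assumed scores (the concrete numbers mean nothing beyond satisfying the stated preferences):

```python
# A minimal sketch of the encoding trick described above: assign arbitrary
# positive scores to "unquantifiable" items so that the stated comparative
# preferences all hold, then check them numerically. The score values are
# assumptions made for illustration, not derived from anything.

score = {"B": 1, "A": 2, "C": 3}  # any values with B < A < B + C work

assert score["B"] < score["A"] < score["B"] + score["C"]

# An unbreakable rule can be encoded as a score larger than the sum of all
# other items, so no combination of lesser items can ever outweigh it.
rule_score = sum(score.values()) + 1
assert rule_score > sum(score.values())
```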
Relevant claim from an earlier comment of mine, reworded: There does not exist any “objective”, human-independent method of comparing and trading the values within human morality functions.
Game theory is the science of figuring out what to do when you have different agents with incompatible utility functions. It provides solutions and formalisms both when comparisons between agents’ payoffs are impossible and when they are possible. Isn’t this exactly what you’re looking for? All that’s left is the applied part: figuring out what exactly each individual cares about, which things all humans care about so that we can simplify some calculations, and so on. That’s obviously the most time-consuming, research-intensive part, too.
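As a sketch of what “number-crunching for Pareto optimality” looks like, here is a minimal pass over a two-agent payoff table (the payoffs are the textbook Prisoner’s Dilemma values, chosen here purely for illustration):

```python
# Hypothetical two-agent game: each cell maps an action pair to
# (payoff_row, payoff_col). A cell is Pareto-optimal if no other cell makes
# one agent better off without making the other worse off.

payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # classic PD: mutual defection is NOT Pareto-optimal
}

def pareto_optimal(cells):
    def dominates(q, p):
        # q dominates p: at least as good for both agents, strictly better for one
        return q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
    return {a for a, p in cells.items()
            if not any(dominates(q, p) for q in cells.values())}

optimal = pareto_optimal(payoffs)
```

Note that Pareto optimality alone leaves three of the four cells standing; picking among them is exactly the part that needs the agents’ actual utility numbers.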
any two theories which make different object-level predictions can likewise have truth values.
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true? This is another extension of the original question posed, which you’ve been dodging.
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true?
Deontology says you should push the fat man under the trolley, and various other examples that are well known in the literature.
This is another extension of the original question posed, which you’ve been dodging.
I have not been “dodging” it. The question seemed to frame the issue as one of being able to predict events that are observed passively. That misses the point on several levels. For one thing, it is not the case that empirical proof is the only kind of proof. For another, no moral theory “does” anything unless you act on it. And that includes CEV.
Deontology says you should push the fat man under the trolley, and various other examples that are well known in the literature.
This would still be the case even if Deontology were false. This leads me to strongly suspect that whether or not it is true is a meaningless question. There is no test I can think of which would determine its veracity.
Actually, deontology says you should NOT push the fat man. Consequentialism says you should.
This would still be the case even if Deontology were false
It is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
There is no test I can think of which would determine its veracity.
Once again I will state: moral theories are tested by their ability to match moral intuition, and by their internal consistency, etc.
Once again, I will ask: how would you test CEV?

Compute the CEV. Then actually learn and become the better person that was modeled in order to compute the CEV. See whether you then prefer the CEV over any other possible utility function.
Asymptotic estimates could also be made, if and only if utility-function spaces are continuous and can be mapped by similarity: if, as you learn more true things from a random sample and ordering of all possible true things you could learn, gain more brain computing power, and gain a better capacity for self-reflection, your preferences tend towards the CEV-predicted preferences, then CEV is almost certainly true.
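The proposed asymptotic test can be caricatured numerically. The following toy model is purely illustrative: it assumes preferences are a single number and that each newly learned truth moves them a fixed fraction of the way toward a hypothetical CEV-predicted value, then checks for convergence:

```python
# Toy model of the asymptotic test sketched above. Everything here (the
# update rule, the target, the step size) is an assumption made purely to
# illustrate what "tends towards CEV-predicted preferences" could mean
# operationally. It is not anyone's actual proposal for computing CEV.

CEV_TARGET = 0.8          # hypothetical CEV-predicted preference
preference = 0.1          # starting preference
for _ in range(100):      # "learn" 100 true things
    preference += 0.1 * (CEV_TARGET - preference)

converged = abs(preference - CEV_TARGET) < 1e-3
```

If the observed trajectory did not converge toward the predicted value, that would count as evidence against the CEV prediction under this reading of the test.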
If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
D(x) and U(y) make opposite recommendations. x and y are different intuitions from different people, and these intuitions may or may not match the actual morality functions inside the brains of their proponents.
I can find no measure of which recommendation is “correct” other than inside my own brain somewhere. This directly implies that it is “correct for Frank’s Brain”, not “correct universally” or “correct across all humans”.
Based on this reasoning, if I use my moral intuition to reason about the fat-man trolley problem using D() and find the conclusion correct within context, then D is correct for me, and the same goes for U(). So let’s try it!
My primary deontological rule: When there exist counterfactual possible futures where the expected number of deaths is lower than all other possible futures, always take the course of action which leads to this less-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing).
A train is going to hit five people. There is a fat man whom I can push down to save the five people with 90% probability. (Let’s just assume I’m really good at quickly estimating this kind of physics within this thought experiment.)
If I don’t push the fat man, 5 people die with 99% probability (shit happens). If I push the fat man, 1 person dies with 99% probability (shit happens), and the 5 others still die with 10% probability.
Expected deaths of not-pushing: 4.95.
Expected deaths of pushing: 1.49.
I apply the deontological rule. That fat man is doomed.
Now let’s try the utilitarian vers—Oh wait. That’s already what we did. We created a deontological rule that says to pick the highest expected utility action, and that’s also what utilitarianism tells me to do.
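The arithmetic above can be checked mechanically; this sketch (using the same made-up probabilities as the thought experiment) runs both framings and confirms they select the same action:

```python
# The worked example above, checked numerically: both the "Do Not Kill" rule
# (pick the future with fewest expected deaths) and a plain expected-utility
# calculation select the same action. Action names are illustrative.

def expected_deaths(action):
    if action == "push":
        return 1 * 0.99 + 5 * 0.10   # fat man dies; 10% chance the 5 die anyway
    return 5 * 0.99                  # don't push: the 5 almost surely die

actions = ["push", "dont_push"]

# Deontological rule: always take the course with the fewest expected deaths.
rule_choice = min(actions, key=expected_deaths)

# Utilitarian framing: maximize utility, with utility = -(expected deaths).
util_choice = max(actions, key=lambda a: -expected_deaths(a))
```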
See what I mean when I say there is no meaningful distinction? If you calibrate your rules consistently, all “moral theories” I see philosophers arguing about produce the same output. Equal output, in fact.
So to return to the earlier point: D(trolley, Frank’s Rule) is correct, where trolley is the problem and Frank’s Rule is the rule I find most moral. U(trolley, Frank’s Utility Function) is also correct. D(trolley, ARBITRARY RULE PICKED AT RANDOM FROM ALL POSSIBLE RULES) is incorrect for me. Likewise, U(trolley, ARBITRARY UTILITY FUNCTION PICKED AT RANDOM FROM ALL POSSIBLE UTILITY FUNCTIONS) is incorrect for me.
This means that U(trolley) and D(trolley) alone cannot be “correct” or “incorrect”, because in typical Functional Programming fashion, U(trolley) and D(trolley) return curried functions; that is, they return a function which takes a rule (for D) or a utility function (for U) and returns a recommendation for the trolley problem based on it.
To reiterate some previous claims of mine, in which I am fairly confident, in the above jargon: There does not exist any single-parameter U(x) or D(x) function that returns a single truth-valuable recommendation without any rules or utility functions as input. All deontological systems rely on the rules supplied to them, and all utilitarian systems rely on the utility functions supplied to them. There exists a utility function equivalent to each possible rule, and a rule equivalent to each possible utility function.
The rules or the utility functions are inside human brains. And either can be expressed in terms of the other interchangeably—which we use is merely a matter of convenience as one will correspond to the brain’s algorithm more easily than the other.
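The curried reading above can be sketched directly in code. This is only an illustration of the shape of the argument; the problem encoding, the rule, and the utility function are all made up:

```python
# A sketch of the curried reading described above: D(problem) and U(problem)
# are not verdicts, they are functions still waiting for a rule or a utility
# function. All names and numbers here are illustrative.

def D(problem):
    def with_rule(rule):
        return rule(problem)          # a rule maps a problem to an action
    return with_rule

def U(problem):
    def with_utility(utility):
        # pick the option with the highest utility for this problem
        return max(problem["options"], key=lambda o: utility(problem, o))
    return with_utility

trolley = {"options": ["push", "dont_push"],
           "deaths": {"push": 1.49, "dont_push": 4.95}}

franks_rule = lambda p: min(p["options"], key=lambda o: p["deaths"][o])
franks_utility = lambda p, o: -p["deaths"][o]

d_verdict = D(trolley)(franks_rule)      # "push"
u_verdict = U(trolley)(franks_utility)   # "push"
```

Neither D(trolley) nor U(trolley) yields a recommendation until the second argument arrives, which is the point: the verdict lives in the rule or utility function, not in the framework.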
My primary deontological rule: When there exist counterfactual possible futures where the expected number of deaths is lower than all other possible futures, always take the course of action which leads to this less-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing).
I suspect that defining deontology as obeying the single rule “maximize utility” would be a non-central redefinition of the term, something most deontologists would find unacceptable.
The simplified “Do Not Kill” formulation sounds very much like most deontological rules I’ve heard of (AFAIK, “Do not kill.” is a bread-and-butter standard deontological rule). It also happens to be a rule which I explicitly attempt to implement in my everyday life in exactly the form I’ve described—it’s not just a toy example; this is actually my primary “deontological” rule as far as I can tell.
And to me there is no difference between “Pull the trigger” or “Remain immobile” when both are extremely likely to lead to the death of someone. To me, both are “Kill”. So if not pulling the trigger leads to one death, and pulling the trigger leads to three deaths, both options are horrible, but I still really prefer not pulling the trigger.
So if for some inexplicable reason it’s really certain that the fat man will save the workers and that there is no better solution (this is an extremely unlikely proposition, and by default I would not trust myself to have searched the whole space of possible options), then I would prefer pushing the fat man.
If I did not consider standing by and watching people die through my inaction to be “Kill”, then I would enforce that rule, and my utility function would also be different. And then I wouldn’t push the fat man either way, whether I calculate it with utility functions or follow the rule “Do Not Kill”.
I agree that it’s non-central, but IME most “central” rules I’ve heard of are really simple wordings that obfuscate the complexity and black boxes that are really going on in the human brain. At the base level, “do not kill” and “do not steal” are extremely complex. I trust that this part isn’t controversial except in naive philosophical journals of armchair philosophizing.
to me there is no difference between “Pull the trigger” or “Remain immobile” when both are extremely likely to lead to the death of someone. To me, both are “Kill”.
I believe that this is where many deontologists would label you a consequentialist.
most “central” rules I’ve heard of are really simple wordings that obfuscate the complexity and black boxes that are really going on in the human brain. At the base level, “do not kill” and “do not steal” are extremely complex. I trust that this part isn’t controversial except in naive philosophical journals of armchair philosophizing.
There are certainly the complex edge cases, like minimum necessary self-defense and such, but in most scenarios the application of the rules is pretty simple. Moreover, “inaction = negative action” is quite non-standard. In fact, even if I believe that in your example pushing the fat man would be the “right thing to do”, I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
I believe that this is where many deontologists would label you a consequentialist.
With all due respect to all parties involved, if that’s how it works, I would label the respective hypothetical individuals who would label me that “a bunch of hypocrites”. They’re no less consequentialist, in my view; they merely hide behind words the fact that they must assume that pulling a trigger leads to a bullet coming out of it, which leads to the complex consequence of someone’s life ending.
I wish I could be more clear and specific, but it is difficult to discuss and argue all the concepts I have in mind as they are not all completely clear to me, and the level of emotional involvement I have in the whole topic of morality (as, I expect, do most people) along with the sheer amount of fun I’m having in here are certainly not helping mental clarity and debiasing. (yes, I find discussions, arguments, debates etc. of this type quite fun, most of the time)
In fact, even if I believe that in your example pushing the fat man would be the “right thing to do”, I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
I’m not sure it’s just a question of not alieving it. There are many good reasons not to believe that this will work, even more good reasons to believe there is probably a better option, and many reasons why it could be extremely detrimental to you in the long term to push a fat man onto train tracks; if push came to shove, not pushing might end up being the more rational action in a real-life situation similar to the thought experiment.
Actually, deontology says you should NOT push the fat man. Consequentialism says you should.
I’m quite aware of that.
It is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case.
At this point, I simply must tap out. I’m at a loss at how else to explain what you seem to be consistently missing in my questions, but DaFranker is doing a very good job of it, so I’ll just stop trying.
moral theories are tested by their ability to match moral intuition,
Really? This is news to me. I guess Moore was right all along...
TL;DR: You should push the fat man if and only if X. You should not push the fat man if and only if ¬X.
X can be turned into a rule X’ to use with D(X’) to compute whether you should push or not. X can also be turned into a utility function X’ to use with U(X’). The answer in either case doesn’t depend on U or D; it depends on your derivation of X’, which itself depends on X.
This is shown by the assumption that for all reasonable a, there exists a g(a) where U(a) = D(g(a)). Since, by their ambiguity and vague definitions, both U() and D() seem to cover an infinite domain and are effectively Turing-complete, this assumption seems very natural.
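Under the stated assumption, g can be sketched concretely: given a utility function a, let g(a) be the rule “take the action with the highest a-value”. A minimal illustration (all names and numbers are illustrative, not anyone’s actual proposal):

```python
# Sketch of the claimed correspondence: for any utility function a, define
# g(a) as the rule "take the action with the highest a-value". Then the
# utilitarian verdict U with a equals the deontological verdict D with g(a)
# on the same problem. Purely illustrative.

def U(utility, options):
    # utilitarian framework: maximize the supplied utility function
    return max(options, key=utility)

def g(utility):
    # the rule induced by a utility function
    return lambda options: max(options, key=utility)

def D(rule, options):
    # deontological framework: follow the supplied rule
    return rule(options)

options = ["push", "dont_push"]
a = lambda o: {"push": -1.49, "dont_push": -4.95}[o]  # utility = -expected deaths

same = U(a, options) == D(g(a), options)
```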
I think you are making a lot of assumptions about what I think and believe. I also think you’re coming dangerously close to being perceived as a troll, at least by me.
Oh! So that’s what they’re supposed to be? Good, then clearly neither—rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices.
The real question, of course, is how to put meaningful numbers into the game theory formula, how to calculate the utility of the agents, how to determine the correct utility functions for each agent.
My answer to this is that there is already a set of utility functions implemented in each humans’ brains, and this set of utility functions can itself be considered a separate sub-game, and if you find solutions to all the problems in this subgame you’ll end up with a reflectively coherent CEV-like (“ideal” from now on) utility function for this one human, and then that’s the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest.
So now what we need is better insights and more research into what these sets of utility functions look like, how close to completion they are, and how similar they are across different humans.
Note that I’ve never even heard of a single human capable of knowing or always acting on their “ideal utility function”. All sample humans I’ve ever seen also have other mechanisms interfering or taking over which makes it so that they don’t always act even according to their current utility set, let alone their ideal one.
I don’t know what being an “error theorist” entails, but you claiming that I am one seems valid evidence that I am one so far, so sure. Whatever labels float your boat, as long as you aren’t trying to sneak in connotations about me or committing the noncentral fallacy. (notice that I accidentally snuck in the connotation that, if you are committing this fallacy, you may be using “worst argument in the world”)
Sure. Now to re-state my earlier question: which formulations of U and D can have truth values, and what pieces of evidence would falsify each?
The formulation for f=ma is that the force applied to an object is equal to the product of the object’s mass and its acceleration, for certain appropriate units of measurements. You can experimentally verify this by pushing objects, literally. If for some reason we ran a well-designed, controlled experiment and suddenly more massive objects started accelerating more than less massive objects with the same amount of force, or more generally the physical behavior didn’t correspond to that equation, the equation would be false.
Assuming that everything of interest can be quantified,that the quantities can be aggregated and compated, and assuming that anyone can take any amount of loss for the greater good...ie assuming all the stuff that utiliatarins assume and that their opponents don’t.
No. You cant leap from “a reflectively coherent CEV-like [..] utility function for this one human” to a solution of conflicts of interest between agents. All you have is a set of exquisite model of individual interests, and no way of combining them, or trading them off.
Strictly speaking, you are a metaethical error theorist. You think there is no meaning to the truth of falsehood of metathical claims.
Any two theoris which have differing logical structure can have truth values, since they can be judged by coherence, etc, and any two theories which make differnt objectlevle predictions can likelwise have truth values. U and D pass both criteria with flying colours.
And if CEV is not a meaningul metaethical theory, why bother with it? If you can’t say that the output of a grand CEV number crunch is what someone should actually do, what is the point?
I know. And you detemine the truth factors of other theories (eg maths) non-empirically. Or you can use a mixture. How were you porposing to test CEV?
That is simply false.
Two individual interests: Making paperclips and saving human lives. Prisoners’ dilemma between the two. Is there any sort of theory of morality that will “solve” the problem or do better than number-crunching for Pareto optimality?
Even things that cannot be quantified can be quantified. I can quantify non-quantifiable things with “1” and “0″. Then I can count them. Then I can compare them: I’d rather have Unquantifiable-A than Unquantifiable-B, unless there’s also Unquantifiable-C, so B < A < B+C. I can add any number of unquantifiables and/or unbreakable rules, and devise a numerical system that encodes all my comparative preferences in which higher numbers are better. Then I can use this to find numbers to put on my Prisoners Dilemma matrix or any other game-theoretic system and situation.
Relevant claim from an earlier comment of mine, reworded: There does not exist any “objective”, human-independent method of comparing and trading the values within human morality functions.
Game Theory is the science of figuring out what to do in case you have different agents with incompatible utility functions. It provides solutions and formalisms both when comparisons between agents’ payoffs are impossible and when they are possible. Isn’t this exactly what you’re looking for? All that’s left is applied stuff—figuring out what exactly each individual cares about, which things all humans care about so that we can simplify some calculations, and so on. That’s obviously the most time-consuming, research-intensive part, too.
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true? This is another extension of the original question posed, which you’ve been dodging.
Deontology says you should push the fat man under the trolley, and various other examples that are well known in the literature.
I have not been “dodging” it. The question seemed to frame the issue as one of being able to predict events that are observed passively. That misses the point on several levels. For one thing, it is not the case that empirical proof is the only kind of proof. For another, no moral theory “does” anything unless you act on it. And that includes CEV.
This would still be the case, even if Deonotology was false. This leads me to strongly suspect that whether or not it is true is a meaningless question. There is no test I can think of which would determine its veracity.
Actually you deontology says you should NOT push the fat man . Consequentialism says you should.
it is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indiffernt with regard to your rather firm intuition about this case.
Once again I will state: moral theories are tested by their ability to match moral intuition, and by their internal consistency, etc.
Once again, I will ask: how would you test CEV?
Compute CEV. Then actually do learn and become this better person that was modeled to compute the CEV. See if you prefer the CEV or any other possible utility function.
Asymptotic estimations could also be made IFF utility function spaces are continuous and can be mapped by similarity: If as you learn more true things from a random sample and ordering of all possible true things you could learn, gain more brain computing power, and gain a better capacity for self-reflection, your preferences tend towards CEV-predicted preferences, then CEV is almost certainly true.
D(x) and U(y) make opposite recommendations. x and y are different intuitions from different people, and these intuitions may or may not match the actual morality functions inside the brains of their proponents.
I can find no measure of which recommendation is “correct” other than inside my own brain somewhere. This directly implies that it is “correct for Frank’s Brain”, not “correct universally” or “correct across all humans”.
Based on this reasoning, if I use my moral intuition to reason about the the fat man trolley problem problem using D() and find the conclusion correct within context, then D is correct for me, and the same goes for U(). So let’s try it!
My primary deontological rule: When there exist counterfactual possible futures where the expected number of deaths is lower than all other possible futures, always take the course of action which leads to this less-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing).
A train is going to hit five people. There is a fat man which I can push down to save the five people with 90% probability. (let’s just assume I’m really good at quickly estimating this kind of physics within this thought experiment)
If I don’t push the fat man, 5 people die with 99% probability (shit happens). If I push the fat man, 1 person dies with 99% probabilty (shit happens), and the 5 others still die with 10% probability.
Expected deaths of not-pushing: 4.95.
Expected deaths of pushing: 1.49.
I apply the deontological rule. That fat man is doomed.
Now let’s try the utilitarian vers—Oh wait. That’s already what we did. We created a deontological rule that says to pick the highest expected utility action, and that’s also what utilitarianism tells me to do.
See what I mean when I say there is no meaningful distinction? If you calibrate your rules consistently, all “moral theories” I see philosophers arguing about produce the same output. Equal output, in fact.
So to return to the earlier point: D(trolley, Frank’s Rule) is correct where trolley is the problem and Frank’s is the rules I find most moral. U(trolley, Frank’s Utility Function) is also correct. D(trolley, ARBITRARY RULE PICKED AT RANDOM FROM ALL POSSIBLE RULES) is incorrect for me. Likewise, U(trolley, ARBITRARY UTILITY FUNCTION PICKED AT RANDOM FROM ALL POSSIBLE UTILITY FUNCTION) is incorrect for me.
This means that U(trolley) and D(trolley) cannot be “correct” or “incorrect, because in typical Functional Programming fashion, U(trolley) and D(trolley) return curried functions, that is, they return a function of a certain type which takes a rule (for D) or a utility function (for U) and returns a recommendation based on this for the trolley problem.
To reiterate some previous claims of mine, in which I am fairly confident, in the above jargon: There does not exist any single-parameter U(x) or D(x) functions that return a single truth-valuable recommendation without any rules or utility functions as input. All deontological systems rely on the rules supplied to them, and all utilitarian systems rely on the utility functions supplied to them. There exists a utility function equivalent to each possible rule, and there exists a rule equivalent to each possible utility function.
The rules or the utility functions are inside human brains. And either can be expressed in terms of the other interchangeably—which we use is merely a matter of convenience as one will correspond to the brain’s algorithm more easily than the other.
I suspect that defining deontology as obeying the single rule “maximize utility” would be a non-central redefinition of the term. something most deontologists would find unacceptable.
The simplified “Do Not Kill” formulation sounds very much like most deontological rules I’ve heard of (AFAIK, “Do not kill.” is a bread-and-butter standard deontological rule). It also happens to be a rule which I explicitly attempt to implement in my everyday life in exactly the format I’ve exposed—it’s not just a toy example, this is actually my primary “deontological” rule as far as I can tell.
And to me there is no difference between “Pull the trigger” or “Remain immobile” when both are extremely likely to lead to the death of someone. To me, both are “Kill”. So if not pulling the trigger leads to one death, and pulling the trigger leads to three deaths, both options are horrible, but I still really prefer not pulling the trigger.
So if for some inexplicable reason it’s really certain that the fat man will save the workers and that there is no better solution (this is an extremely unlikely proposition, and by default I would not trust myself to have searched the whole space of possible options), then I would prefer pushing the fat man.
If I considered standing by and watching people die because I did nothing to be “Kill”, then I would enforce that rule, and my utility function would also be different. And then I wouldn’t push the fat man either way, whether I calculate it with utility functions or whether I follow the rule “Do Not Kill”.
I agree that it’s non-central, but IME most “central” rules I’ve heard of are really simple wordings that obfuscate the complexity and black-box processing actually going on in the human brain. At the base level, “do not kill” and “do not steal” are extremely complex. I trust that this part isn’t controversial outside of naive armchair philosophizing.
I believe that this is where many deontologists would label you a consequentialist.
There are certainly the complex edge cases, like minimum necessary self-defense and such, but in most scenarios the application of the rules is pretty simple. Moreover, “inaction = negative action” is quite non-standard. In fact, even if I believe that in your example pushing the fat man would be the “right thing to do”, I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
With all due respect to all parties involved, if that’s how it works, I would label the respective hypothetical individuals who would label me that “a bunch of hypocrites”. They’re no less consequentialist, in my view, since they hide behind words the fact that they still have to assume that pulling a trigger leads to a bullet coming out of it, which in turn leads to the complex consequence of someone’s life ending.
I wish I could be more clear and specific, but it is difficult to discuss and argue all the concepts I have in mind as they are not all completely clear to me, and the level of emotional involvement I have in the whole topic of morality (as, I expect, do most people) along with the sheer amount of fun I’m having in here are certainly not helping mental clarity and debiasing. (yes, I find discussions, arguments, debates etc. of this type quite fun, most of the time)
I’m not sure it’s just a question of not alieving it. There are many good reasons not to believe evidence that this will work, even more good reasons to believe there is probably a better option, and many reasons why pushing a fat man down onto train tracks could be extremely detrimental to you in the long term—so if push comes to shove, not pushing might end up being the more rational action in a real-life situation similar to the thought experiment.
I’m quite aware of that.
At this point, I simply must tap out. I’m at a loss at how else to explain what you seem to be consistently missing in my questions, but DaFranker is doing a very good job of it, so I’ll just stop trying.
Really? This is news to me. I guess Moore was right all along...
You have proof that you should push the fat man?
Lengthy breakdown of my response.
TL;DR: You should push the fat man if and only if X. You should not push the fat man if and only if ¬X.
X can be derived into a rule X′ and used with D(X′) to compute whether you should push or not. X can likewise be derived into a utility function X′ and used with U(X′) to compute whether you should push or not. The answer in either case doesn’t depend on U or D; it depends on your derivation of X′, which itself depends on X.
This follows from the assumption that for all reasonable a, there exists a g(a) such that U(a) = D(g(a)). Since, by their ambiguity and vague definitions, both U() and D() seem to cover an infinite domain and are effectively Turing-complete, this assumption seems very natural.
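The U(a) = D(g(a)) assumption can be illustrated in miniature: let g wrap a utility function into the rule “permitted iff utility-maximizing,” so the two evaluators agree by construction. Everything here (the action set, `lives_saved`, the evaluators themselves) is a hypothetical toy, not a formalization anyone in the thread has committed to:

```python
# Toy illustration of the assumption that for every utility function a
# there is a rule g(a) with U(a) = D(g(a)).

ACTIONS = ["push", "don't push"]

def U(utility):
    # Utilitarian verdict: the utility-maximizing action.
    return max(ACTIONS, key=utility)

def D(rule):
    # Deontological verdict: the (here, unique) permitted action.
    return next(a for a in ACTIONS if rule(a))

def g(utility):
    # Convert a utility function into the rule
    # "permitted iff utility-maximizing".
    best = max(utility(a) for a in ACTIONS)
    return lambda a: utility(a) == best

lives_saved = lambda a: 5 if a == "push" else 1  # hypothetical stakes
assert U(lives_saved) == D(g(lives_saved))
```

In this toy model the equality holds trivially, which is the point: once g is allowed to do the translation work, the U-vs-D choice carries no extra information beyond the choice of a.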