I remember reading about an experiment performed by behavioral economists in which person A divides some cash and person B either accepts the division, in which case each party gets their allocated share, or rejects it, in which case neither party gets anything. You could say the consequentialist solution is to always accept the division of money, which most folks don’t do, so this could make a good trial exercise. On the other hand, if person A is someone person B is going to have repeated interactions with, one could argue that the social capital gained by training person A to divide things fairly might be worth forgoing the cash… So maybe it wouldn’t work in a scenario where the class meets again and again? (Unless things were anonymized somehow...)
There is also the Newcomb’s Problem aspect to this, where having taught the class about consequentialism will make it appear as though you have made everyone who is Person B worse off.
Reading up on experiments behavioral economists have done in general seems like it could be a good source of ideas.
http://en.wikipedia.org/wiki/Ultimatum_game
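The payoff rule of the game described above is simple enough to sketch. This is a minimal illustration of the structure, not code from any of the studies mentioned; the function name and pot size are hypothetical:

```python
def ultimatum_payoffs(pot, offer_to_b, b_accepts):
    """Payoffs in a one-shot Ultimatum game.

    Person A proposes a split of `pot`, offering `offer_to_b` to B.
    If B accepts, each party gets their share; if B rejects, both get nothing.
    """
    if b_accepts:
        return pot - offer_to_b, offer_to_b
    return 0, 0

# A narrowly "consequentialist" responder accepts any positive offer:
print(ultimatum_payoffs(100, 1, True))   # (99, 1)
# Most real responders reject manifestly unfair splits:
print(ultimatum_payoffs(100, 1, False))  # (0, 0)
```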
I predict that if a stranger tried a one-shot Ultimatum game against Eliezer with a 99-1 split in the stranger’s favor, EY would refuse it on TDT grounds. Thus any person who knows Eliezer subscribes to TDT wouldn’t offer him a manifestly unfair split in the first place.
You could structure the game so that the person making the offer and the person receiving it were paired randomly after the offer was specified.
Right, this article appears on the surface to endorse causal decision theory, which we know Eliezer doesn’t in fact endorse. Mostly that’s fine, but there are occasions where CDT will make the wrong call, such as the examples you point out.
I can’t help but think that the best way to actually get people to be consequentialist is similar to the way to actually get people to be atheists: convince them that all the cool kids are consequentialist. This probably contributed to my becoming more consequentialist, in the form of reading about behavioral economics studies where people did silly and irrational things and wanting not to be one of the silly and irrational ones.
I would strongly recommend against going in this direction. Consequentialism is about methodology, not particular results. As soon as you say “the consequentialist always accepts,” the clever students will get a funny look on their faces as they try to cost out the immediate gain against the long-term loss.
Consider Kohlberg’s stages of moral development, which classify moral reasoning not by the conclusion drawn but by the stated justification for that conclusion.