Took the survey, and finally registered after lurking for 6 months.
I liked the defect/cooperate question. I defected because it was the rational way to try to ‘win’ the contest. However, if one had a different goal such as “make Less Wrong look cooperative” rather than “win this contest”, then cooperating would be the rational choice. I suppose that if I win, I’ll use the money to make my first donation to CFAR and/or MIRI.
Now that I have finished it, I wish I had taken more time on a couple of the questions. I answered the Newcomb’s Box problem the opposite of my intent, because I mixed up what 2-box and 1-box mean in the problem (been years since I thought about that problem). I would 1-box, but I answered 2-box in the survey because I misremembered how the problem worked.
Heh. I also didn’t care about the $60, and realised that taking the time to work out an optimal strategy would cost more of my time than the expected value of doing so.
So I fell back on a character-ethics heuristic and cooperated. Bounded rationality at work. Whoever wins can thank me later for my sloth.
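For concreteness, here’s a back-of-the-envelope version of that expected-value argument. The respondent count, the value placed on one’s time, and the prize mechanics (one winner drawn uniformly at random) are illustrative assumptions on my part, not the survey’s actual rules:

```python
# Rough sketch of the "not worth strategizing" point above.
# The respondent count and prize mechanics are assumptions for
# illustration, not the survey's actual rules.

PRIZE = 60.0          # dollars at stake (from the thread)
RESPONDENTS = 1500    # assumed number of survey takers
HOURLY_VALUE = 20.0   # assumed value of one's time, dollars/hour
STRATEGY_TIME = 0.5   # assumed hours spent devising an "optimal" strategy

# With one winner drawn uniformly at random, the expected prize per entrant:
ev_prize = PRIZE / RESPONDENTS
# Cost of the time spent strategizing:
cost = HOURLY_VALUE * STRATEGY_TIME

print(f"Expected prize per entrant: ${ev_prize:.2f}")  # ~$0.04
print(f"Cost of strategizing:       ${cost:.2f}")      # $10.00
print("Strategizing pays off?", ev_prize > cost)       # False
```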
Same, that’s pretty much why I chose to cooperate.
Lol, I cooperated because $60 was not a large enough sum of money for me to really care about trying to win it, and in the calibration I assumed most people would feel similarly. Reading your reasoning here, however, it is possible I should have accounted more strongly for people who like to win just for the sake of winning, a group that may be larger here than in the general population :p.
Edit: actually that’s not really what I mean. I mean people who want to make the rational choice that maximizes the probability of winning for its own sake, even if they don’t actually care about the prize. I’d prefer that someone gets $60 and is pleasantly surprised to have won, rather than that I get $1. I predict that overall happiness is increased more this way, at negligible cost to myself. Even if the person who wins defected.
Agreed, I think that the rational action in this scenario depends on one’s goal, and there are different things you could choose as your goal here.
I also think I should’ve set a higher value for my 90% confidence estimate of the number of people who would cooperate, because it’s quite possible that a lot more people than I expected chose goals for this other than ‘winning’.
So if a group using your decision-making process all took this survey, “rationally” trying to win the contest, they would end up winning $0. :)
Correct, just like people trying to ‘win’ a single-iteration prisoner’s dilemma would defect.
I’m not claiming it’s the morally correct option or anything, just that it’s the correct strategy if your goal is to win.
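A minimal sketch of why “win the one-shot game” reasoning lands on defect: with standard prisoner’s-dilemma payoffs (the values below follow the textbook T > R > P > S ordering, and are not anything specified by the survey), defecting is the dominant strategy regardless of what the other player does:

```python
# payoff[(my_move, their_move)] = my payoff, with textbook PD values
# (these numbers are illustrative, not from the survey)
payoff = {
    ("C", "C"): 3,  # reward for mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # punishment for mutual defection (P)
}

for their_move in ("C", "D"):
    best = max(("C", "D"), key=lambda mine: payoff[(mine, their_move)])
    print(f"If they play {their_move}, my best response is {best}")
# Both lines print D: defect dominates, even though (C, C) beats (D, D).
```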
I don’t think we’re using the same definition of ‘win’. This is the same thinking that leads to two-boxing.
If you had to play Newcomb’s problem against the Less Wrong community as Omega, would you one-box or two-box? The community would vote on whether to put the money in the second box; whichever choice got more votes would determine whether the money was there. Each player from the community would be rewarded individually if they guessed your choice correctly.
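A toy model of that voting variant may help make the payoff structure concrete. The dollar amounts below are the standard Newcomb figures ($1,000 visible, $1,000,000 predicted), which the comment doesn’t specify; they’re assumptions for illustration. A majority vote to fill box two corresponds to the community predicting you’ll one-box:

```python
# Toy payoffs for the reversed-Newcomb question above.
# Dollar amounts are the conventional Newcomb figures, assumed here;
# the comment itself doesn't specify them.

VISIBLE = 1_000       # box A, always present
HIDDEN = 1_000_000    # box B, filled only if the majority predicts one-boxing

def payoff(my_choice: str, majority_prediction: str) -> int:
    """My winnings, given my choice and the community's majority prediction."""
    box_b = HIDDEN if majority_prediction == "one-box" else 0
    return box_b if my_choice == "one-box" else box_b + VISIBLE

for prediction in ("one-box", "two-box"):
    for choice in ("one-box", "two-box"):
        print(f"majority predicts {prediction}, I {choice}: "
              f"${payoff(choice, prediction):,}")
# If the vote tracks your actual choice, one-boxing wins; if the vote is
# fixed independently of your choice, two-boxing dominates.
```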