Let’s look at the ultimate extreme version. Assume she’s woken up once (or arbitrarily many non-zero times) for tails, and not at all for heads. Now the fact that she’s been woken up implies tails with certainty. So if the answer remains 1⁄2 in the extreme versions, then there must be a discontinuous jump, rather than convergence, when the ratio of the number of awakenings for heads vs. tails tends towards zero.
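To make the convergence claim concrete: under the per-awakening (thirder) accounting with a fair coin, n_h awakenings for heads, and n_t for tails, the credence in tails at any given awakening is n_t/(n_h + n_t), which tends smoothly to 1 as n_h/n_t goes to zero. A minimal sketch in Python; the per-awakening counting convention is the one assumption here:

```python
# Per-awakening (thirder) credence in tails for a fair coin, with n_h
# awakenings on heads and n_t on tails. Since both branches carry prior
# 1/2, the fraction of expected awakenings occurring after tails is just
# n_t / (n_h + n_t).

def tails_credence(n_h: int, n_t: int) -> float:
    """P(tails | this is an awakening), under per-awakening counting."""
    return n_t / (n_h + n_t)

for n_h, n_t in [(1, 1), (1, 2), (1, 10), (1, 100), (1, 10**6), (0, 1)]:
    print(f"n_h={n_h}, n_t={n_t:>7}: {tails_credence(n_h, n_t):.6f}")
```

The values converge smoothly to 1, with no jump at n_h = 0; only an answer pinned at 1⁄2 would require the discontinuity.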
How about the following scenario? Say instead of Omega, it’s just a company running a weird promotional scheme. They announce that they’ll secretly flip a coin in their headquarters, and if it’s tails, they’ll hand out prizes to a million random people from the phone directory tomorrow, whereas if it’s heads, they’ll award the same prize to only one lucky winner. The next day, you receive a phone call from them. Would you apply analogous reasoning in this case (and how, or why not)?
The exact equivalent of the original problem would be as follows. You announce that:
(1) You’ll flip a coin at some secret time during the next few days, and the result will be posted publicly in (say) a week.
(2) Before the flip, you’ll approach a random person in the street and ask them about their expectation of the result that’s about to be posted. After the flip, if and only if it lands tails, you’ll do the same with one additional person before the result is announced publicly. The two people are unaware of each other and have no way to determine whether they’re being asked before or after the actual toss.
So, does anyone see relevant differences between this problem and the original one?
Well, yes, I should also specify that you’ll actually act on the announcement.
But in any case, would anyone find anything strange or counterintuitive about this less exotic formulation, which could readily be tried in the real world? Once the somewhat vague “expectation about the result” is stated precisely, the answer should be clear. In particular, if we ignore risk aversion and discounting, each interviewee should be willing to pay, on the spot, up to $66.67 (i.e., 2⁄3 of $100) for an instrument sold by a (so far completely ignorant) third party that pays off $100 if the announced result is tails.
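For what it’s worth, the 2⁄3 figure is easy to check by simulation. Here’s a minimal Monte Carlo sketch of the street-interview setup (the code and its names are mine; the setup is as announced above):

```python
import random

def simulate(trials: int = 10**6) -> float:
    """Fraction of all interviews that take place on a tails flip."""
    tails_interviews = 0
    total_interviews = 0
    for _ in range(trials):
        tails = random.random() < 0.5    # fair coin
        interviews = 2 if tails else 1   # tails: one additional person is asked
        total_interviews += interviews
        if tails:
            tails_interviews += interviews
    return tails_interviews / total_interviews

p = simulate()
print(f"P(tails | interviewed) ~ {p:.4f}")          # ~ 0.6667
print(f"break-even ticket price ~ ${100 * p:.2f}")  # ~ $66.67
```

A randomly chosen interviewee is twice as likely to be part of a tails run as a heads run, hence the 2⁄3.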
Yes, you’re right (as are the other replies making similar points). I tried hard once more to construct an accurate analogue of the above problem that would be realizable in the real world, but it seems impossible to come up with anything that doesn’t involve implanting false memories.
After giving this some more thought, it seems to me that the problem with the copying scenario is that once we eliminate the assumption that each agent has a unique continuous existence, all human intuitions completely break down, and we can compute only mathematically precise problems formulated within strictly defined probability spaces. Trouble is, since we’re breaking one of the fundamental human common-sense assumptions, the results may or may not make any intuitive sense, and as soon as we step outside formal, rigorous math, we can only latch onto subjectively preferable intuitions, which may differ between people.
OK, I think I have a definite reductio ad absurdum of your point. Suppose you wake up in a room, and the last thing you remember is Omega telling you: “I’m going to toss a coin now. Whatever comes up, I’ll put you in the room. However, if it’s tails, I’ll also put a million other people each in an identical room and manipulate their neural tissue so as to implant in them a false memory of having been told all this before the toss. So, when you find yourself in the room, you won’t know whether we’ve actually had this conversation or the memory of it was implanted after the toss.”
After you find yourself in the room under this scenario, you have the memory of these exact words spoken to you by Omega a few seconds ago. Then he shows up and asks you about the expected value of the coin toss. I’m curious whether your 1⁄2 intuition still holds in this situation. (I’m definitely unable to summon any such intuition at all—your brain states representing this memory are obviously more likely to have originated from their mass production in the case of tails, just as finding a rare widget on the floor would be evidence for tails if Omega had pledged to mass-manufacture them should tails come up.)
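Put as an explicit calculation, treating the memory like one of those mass-produced widgets (note that weighting each branch by its count of memory-bearers is my assumption; the scenario itself doesn’t dictate a sampling rule):

```python
from fractions import Fraction

# Posterior on tails given "I hold this memory": weight each branch of
# the fair coin by the number of memory-bearers it produces, as with
# counting mass-produced widgets. (The uniform weighting over
# memory-bearers is an assumption, not part of the scenario's text.)

prior = Fraction(1, 2)       # fair coin
bearers_heads = 1            # heads: only the person Omega actually addressed
bearers_tails = 1_000_001    # tails: that person plus a million implants

posterior_tails = (prior * bearers_tails) / (
    prior * bearers_heads + prior * bearers_tails
)
print(posterior_tails)         # 1000001/1000002
print(float(posterior_tails))  # ~ 0.999999
```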
But if you wouldn’t say 1⁄2, then you’ve just reached an awful paradox. Instead of just implanting the memories, Omega can also choose to change these other million people in some other small way to make them slightly more similar to you. Or a bit more, or even more—and in the limit, he’d just use these people as the raw material for manufacturing copies of you, getting us back to your copying scenario. At which step does the 1⁄2 intuition emerge?
(Of course, as I wrote in my other comment, all of this is just philosophizing that goes past the domain of validity of human intuitions, and these questions make sense only if tackled using rigorous math with more precisely defined assumptions and questions. But I do find it an interesting exploration of where our intuitions (mis)lead us.)
Hmm, let’s try pushing it a bit further.
Suppose you’re a member of a large exploratory team on an alien planet colonized by humans. As a part of the standard equipment, each team member has an intelligent reconnaissance drone that can be released to roam around and explore. You get separated from the rest of your team and find yourself alone in the wilderness. You send out your drone to explore the area, and after a few hours it comes back. When you examine its records, you find the following.
Apparently, a local super-smart creature with a weird sense of humor—let’s call it Omega—has captured several drones and released (some of?) them back after playing with them a bit. Examining your drone’s records, you find that Omega has played something similar to the false-memory game described above with them. You play the drone’s audio record, and you hear Omega saying: “I’ll toss a coin now. Afterwards, I’ll release your drone back in any case. If it comes up heads, I’ll destroy the other ten drones I have captured. If it’s tails, I’ll release them all back to their respective owners, but I’ll also insert this message into their audio records.” Assume that you’ve already heard a lot about Omega, since he’s done many such strange experiments on the local folks—and from what’s known about his behavior, it’s overwhelmingly likely that the message can be taken at face value.
What would you say about the expected coin toss result now? Would you take the fact that you got your drone back as evidence in favor of tails, or does your 1⁄2 intuition still hold? If not, what’s the difference relative to the false-memory case above? (Unless I’m missing something, the combined memories of yourself and the drone should be exactly equivalent to the false-memory scenario.)
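Under one natural reading (you as a random one of the eleven owners whose drones were captured; that framing is mine, not something the story states outright), the update is straightforward:

```python
from fractions import Fraction

# Eleven drones captured (yours plus "the other ten"). Under heads, only
# one owner gets a message-bearing drone back; under tails, all eleven
# do. Treating yourself as a random one of the eleven owners (my
# assumption), apply Bayes to "my drone came back carrying the message":

p_heads = p_tails = Fraction(1, 2)
p_msg_heads = Fraction(1, 11)   # one of eleven owners gets the message
p_msg_tails = Fraction(1, 1)    # every owner gets it

p_tails_given_msg = (p_tails * p_msg_tails) / (
    p_heads * p_msg_heads + p_tails * p_msg_tails
)
print(p_tails_given_msg)  # 11/12
```

On this framing, getting the drone back with this message is 11:1 evidence in favor of tails, exactly parallel to the false-memory case.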
I’m not sure I understand your “really extreme” formulation fully. Is the amnesia supposed to make the wins in chocolate bars non-cumulative?
SarahC:
A key thing to consider is the role of the “mainstream.” When a claim is out of the mainstream, are you justified in moving it closer to the bunk file?
An important point here is that the intellectual standards of the academic mainstream differ greatly between various fields. Thus, depending on the area we’re talking about, the fact that a view is out of the mainstream may imply that it’s bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.
From my own observations of research literature in various fields and the way academia operates, I have concluded that healthy areas where the mainstream employs very high intellectual standards of rigor, honesty, and judicious open-mindedness are normally characterized by two conditions:
(1) There is lots of low-hanging fruit available, in the sense of research goals that are both interesting and doable, so that there are clear paths to quality work, which makes it unnecessary to invent bullshit instead.
(2) There are no incentives to invent bullshit for political or ideological reasons.
As soon as either of these conditions fails in an academic area, the mainstream will become infested with worthless bullshit work to at least some degree. For example, condition (2) holds for theoretical physics, but in many of its subfields, condition (1) no longer does. Thus we get things like the Bogdanoff affair and the string theory wars—regardless of who (if anyone) is right in these controversies, it’s obvious that some bullshit work has infiltrated the mainstream. Nevertheless, the scenario where condition (1) doesn’t hold but (2) does is relatively benign, and such areas typically remain basically sound despite the partial infestation.
The real trouble starts when condition (2) doesn’t hold. Even if (1) still holds, the field will be in a hopeless confusion where it’s hardly possible to separate bullshit from quality work. For example, in the fields that involve human sociobiology and behavioral genetics, particularly those that touch on the IQ controversies, there are tons of interesting studies waiting to be done. Yet, because of the ideological pressures and prejudices—both individual and institutional—bullshit work multiplies without end. (Again, regardless of whom you support in these controversies, it’s logically impossible that neither side is bullshitting.) Thus, on the whole, condition (2) is even more critical than (1).
When neither (1) nor (2) holds in some academic field, it tends to become almost pure bullshit. Macroeconomics is the prime example.
SarahC:
There are three camps I have in mind, who are outside the academic mainstream, but not obviously (to me) dismissed as bunk: global warming skeptics, Austrian economists, and singularitarians.
So, to apply my above criteria to these cases:
Climate science is politicized to an extreme degree and plagued by vast methodological difficulties. (Just think about the difficulty of measuring the global annual average temperature to within 0.1°C even in the present, let alone reconstructing it far into the past.) Thus, I’d expect a very high level of bullshit infestation in its mainstream, so critics scorned by the mainstream should definitely not be dismissed out of hand.
Ditto for mainstream vs. Austrian macroeconomics; in fact, even more so. If you look at the blogs of prominent macroeconomists, you’ll see lots of ideologically motivated mutual scorn and abuse even within the respectable mainstream. Austrians basically call bullshit on the entire mainstream, saying that the whole idea of trying to study economic aggregates by aping physics is a fundamentally unsound cargo-cult approach, so they’re hated by everyone. While Austrians have their own dubious (and sometimes obviously bunk) ideas, their criticism of the mainstream should definitely be taken into account, considering the mainstream’s extreme level of politicization and its lack of any clearly sound methodology.
As for singularitarians, they don’t really face opposition from some concrete mainstream academic group. The problem is that their claims run afoul of the human weirdness heuristic, so it’s hard to get people to consider their arguments seriously. (The attempts at sensationalist punditry by some authors associated with the idea don’t help either.) But my impression is that many prominent academics in the relevant fields who have taken the time to listen to the singularity arguments take them respectfully and seriously, certainly with nothing like the scorn heaped on dissenters and outsiders in heavily politicized fields.
Jack:
I think we ought to distinguish somehow between crackpots (believers in bunk) and incorrect contrarians. The former are obviously part of the latter but are they the same?
You ignore the possibility of crackpots who are not contrarians, but instead well established or even dominant in the mainstream. You have a very rosy view of academia if you believe that this phenomenon is entirely nonexistent nowadays!
That said, I’d say the main defining criterion of crackpots—as opposed to ordinary mistaken folks—is that their emotions have got the better of them, rendering them incapable of further rational argument. A true crackpot views the prospect of changing his mind as treachery to his cause, similar to a soldier scorning the possibility of surrender after suffering years of pain, hardship, and danger in a war. Trouble is, protracted intellectual battles in which contrarians are exposed to hostility and ridicule often push them over the edge into crackpottery at some point. It’s a pity, because smart contrarians, even when mistaken about their main point, can often reveal serious weaknesses in the mainstream view. But then, this is often why they are met with such hostility in the first place, especially in fields with political/ideological implications.
But I took our working definition of crackpot and bunk to exclude such people. We’re asking about a particular kind of being wrong: being wrong and unpopular.
Fair enough, if we define “crackpot” as necessarily unpopular. However, what primarily comes to my mind when I hear this word is the warlike emotional state that renders one incapable of changing one’s mind, which I described in the above comment. If people like that manage to grab positions of power in academia and don the cloak of respectability, I still think they share more relevant similarity with various scorned crackpot contrarians than with people whose mainstream respectability is well earned.
I think a good test for distinguishing a crackpot from an ordinary mistaken contrarian would be how the individual would behave if the power relations were suddenly reversed, and the mainstream and contrarian views changed places. A crackpot would not hesitate to use his power to extirpate the views he dislikes by all means available, whereas a non-crackpot contrarian would show at least some respect for his (now contrarian) opponents.
Are you sure “flummoxed” is the right word? I don’t think “neurotypicals” are confused by the mathematics involved. They just dispute that the utilitarian math represents an accurate theory of ethics. Would you use the word “flummoxed” for a physicist who understands the mathematics of a theory but disputes that it says anything relevant about the real world, even if he has no alternative theory to offer?
For full disclosure, I am not convinced by utilitarian arguments at all, both in these problems you mention and in most other widely disputed ones. I understand them with perfect clarity; I just dispute that they have any relevance beyond the entertainment value of the logical exercise, and possibly propaganda value for some parties in some situations. I certainly wouldn’t describe my situation as “flummoxed.”
On the other hand, don’t forget that talk is cheap, and actions speak louder than words. I doubt that many utilitarians would be willing to follow their conclusions in practice in situations such as the fat man/trolley problem. To stress that point even further, imagine if you had to cut the fat man’s throat instead of just pushing him (and feel free to increase the cost of the alternative if you think this changes the equation significantly relative to pushing). I’d bet dollars to donuts that a large majority of the contemporary genteel utilitarians couldn’t bring themselves to do it, no matter how clear the calculus that—according to them—mandates this course of action.
This suggests to me that this “dumbfoundedness” might in fact be a consequence of clearer and more far-reaching insight, not confusion. Biting moral bullets is easy in armchair discussions; what you’d actually be able to bring yourself to do is another question altogether. Therefore, when I see people who coolly affirm the logical conclusions of their favored formal ethical theories even when they run afoul of common folks’ intuition, I have to ask whether they are really guided by logic to an exceptional degree in their lives—or whether they simply fail to see, out of sheer mental short-sightedness, how remote their armchair theorizing is from what they’d be willing and able to do if they, God forbid, actually found themselves in some such situation.
(This is not the reason why I don’t see any validity in utilitarianism; that would be a topic for another discussion altogether. The point here is that logical consistency in ethical armchair discussions could in fact be a consequence of myopia, not logical clear-sightedness.)
You’re allowed to say “X is the action I would want to take, but I wouldn’t be able to”
I don’t think this statement is logically consistent. Unless you’re restrained by some outside force, if you don’t do something, that means you didn’t want to do it. You might hypothesize that you would have wanted it within some counterfactual scenario, but given the actual circumstances, you didn’t want it.
The only way out of this is if we dispense with the concept of humans as individual agents altogether, and analyze various modules, circuits, and states in each single human brain as distinct entities that might be struggling against each other. This might make sense, but it breaks down the models of pretty much all standard ethical theories, utilitarian and otherwise, which invariably treat humans as unified individuals.
But regardless of that, do you accept the possibility that at least in some cases, bullet-biting on moral questions might be the consequence of a failure of imagination, not exceptional logical insight?
Yes, why should we assume that these difficult ethical conundrums have some sort of “right answer” at all? Why would asking about the “right choice” in trolley and similar problems necessarily have to have any more sense than asking about the “correct value” of 0^0?
It’s more complicated than that. Most people would say that there are imaginable situations where a certain course of action is right, but they’d be strongly tempted to act differently out of base motives. For example, if you ask a typical person whether it would be right to gain a large amount of money by some sort of cheating, assuming you know for sure there won’t be any negative consequences, they’ll immediately understand that the question is about what’s normatively right, not how they’d be tempted to act. Some very sincere people would probably admit that they might yield to the temptation, even though they consider it wrong.
Now, imagine you’re introduced to someone who had the opportunity to cheat a business partner for a million dollars with zero risk of repercussions, but flat-out refused to do so out of sheer moral fiber. You’ll immediately perceive this person as trustworthy and desirable to deal with—a man who acts according to high principles, not base passion and instinct. In contrast, you’d shun and despise him if you heard he’d acted otherwise.
However, let’s now compare that with the extreme fat man problem (where you’d have to cut the fat man’s throat to avert some greater loss of life). Imagine you’re introduced to someone who was faced with it and who slit the fat man’s throat without blinking. Would you feel warm and fuzzy about this person? Would any of the bullet-biting utilitarians fail to be profoundly creeped out just by the knowledge that they are standing next to someone who actually acted like that—even though they’d all defend (nay, prescribe!) his course of action relentlessly when philosophizing? Moreover, I would again bet dollars to donuts that our genteel utilitarians would be much less creeped out by someone who couldn’t bring himself to butcher the fat man.
When I think about this, I honestly can’t help but detect severe short-sightedness in moral bullet-biters.
You are mostly right, except that I disagree that such simplifications are limited to 20th-century economics. I had in mind the formal ethical theories discussed in modern analytical philosophy, especially utilitarianism. I honestly don’t see how utilitarianism can make sense unless humans are modeled as unified agents, each with a single utility function. From what I’ve seen, other popular formal consequentialist approaches make analogous assumptions, and I don’t see how those could be reconciled with dissolving the concept of humans as unified agents.
But yes, considering the vast philosophical tradition you mention, my above statement definitely doesn’t hold in general. However, to get back to the issue that started this discussion, I don’t think that Aspergery logical consistency—which, according to Roko, apparently makes for a good consequentialist ethicist—would be a good guide through the works of the authors you mention!
Autism in general affects four times as many men as women in the general population;
Does this statistic refer only to severe cases of autism that are likely to be noticed and diagnosed whenever they occur, or also to the milder, high-functioning autism spectrum disorders? Because if the latter, I would expect mildly autistic men to be much more likely than women to be noticed as weird and dysfunctional, so this might account for at least part of the discrepancy in the rate of diagnosis.
The explanation for the greater public prominence (and presumably social acumen) of female autistics is probably similar. In most situations, it’s probably harder for autistic men than women to avoid coming off as creepy or ridiculous.
I have a question for those more familiar with the discussions surrounding this problem: is there anything really relevant about the sleeping/waking/amnesia story here? What if instead the experimenter just went out and asked the next random passerby on the street each time?
It seems to me that the problem could be formulated less confusingly that way. Am I missing something?