Sam Harris’s argument style reminds me very much of the man who trained me, and the example of fire smoke negatively affecting health is a great zero point to contest. Sam has slipped in a zero point: that physical health is the only form of health that matters, or at least the highest. One would have to argue against that zero point, holding that there are other values measurable in terms of health greater than the mere physical health associated with fire; psychological, familial, and social health immediately come to mind. Further, in Sam’s case, famed as he is for his epistemological intransigence, one would likely also have to argue against his zero point of what constitutes rationality itself in order to advance the position that physical health is very often a secondary value, since that sort of argument follows more a conversational arrangement of complex interdependent factors than the strict, rigorous logic Sam seemingly prefers.
A lot of what’s going on here is primarily frame control—setting the relevant scale on which a particular zero is then made salient. And that is not being done in the nice explicit friendly way.
Sam Harris here is not casting a sneaky version of Tare Detrimens, but he’s maybe (intentionally or not, benevolently or malevolently) casting a sneaky version of Fenestra Imperium.
Huh—it suddenly struck me that Peter Singer is doing the exact same thing in the drowning child thought experiment, by the way, as Tyler Alterman points out beautifully in Effective altruism in the garden of ends. He takes for granted that the frame of “moral obligation” is relevant to why someone might save the child, then uses our intuitions towards saving the child to suggest that we agree with him about this obligation being present and relevant, then he uses logic to argue that this obligation applies elsewhere too. All of that is totally explicit and rational within that frame, but he chose the frame.
In both cases, everyone agrees about what actually happens (a child dies, or doesn’t; you contribute, or you don’t).
In both cases, everyone agrees because within the frame that has been presented there is no difference! Meanwhile there is a difference in many other useful frames! And this choice of frame is NOT, as far as I can recall, explicit. Rather than recall, let me actually just go check… watching this video, he doesn’t use the phrase “moral obligation”, but asks “[if I walked past,] would I have done something wrong?”. This interactive version offers a forced choice “do you have a moral obligation to rescue the child?”
In both cases, the question assumes the frame and is not explicit about the arbitrariness of doing so. So yes, he is explicit about setting the zero point, but focusing on that part of the move obscures the larger implicit move he makes beforehand.
I like and agree with your argument here in general.
I don’t think it’s true that, in the specific LW case, “...you didn’t know how each other felt about it or what it meant to each other, and that if you’d more thoroughly seen the world through each others’ eyes, it wouldn’t seem like ‘zero point’ is the relevant frame here.”
Or at least, none of what you said about the LW admin perspective was new to me; it had all been taken into account by me at the time. (I suspect at all times I was capable of passing their ITT with at least a C+ grade; I am less sure they were capable of passing mine.)
But that individual case seems separate from your overall point, which does seem correct. So I’m not sure where disagreement lies.