I absolutely agree. McDonald’s and the other demons of the Western Diet cause much more harm, both in absolute terms and per capita. That was really my point; within the class of ‘health misinformation and disinformation that causes harm’, furphies about vegan nutrition are a comparatively minor problem.
You don’t just have a level of access, you have a type of access. Your access to your own mind isn’t like looking at a brain scan.
From my Camp 1 perspective, this just seems like a restatement of what I wrote. My direct access to my own mind isn’t like my indirect access to other people’s minds; to understand another person’s mind, I can at best gather scraps of sensory data like ‘what that person is saying’ and try to piece them together into a model. My direct access to my own mind isn’t like looking at a brain scan of my own mind; to understand a brain scan, I need to gather sensory data like ‘what the monitor attached to the brain scanner shows’ and try to piece them together into a model. This seems to be completely explained by the fact that my brain can only gather data about the external world through a handful of imperfect sensory channels, while it can gather data about its own internal processes through direct introspection. To make things worse, my brain is woefully underpowered for the task of modelling complex things like brains, so it’s almost inevitable that any model I construct will be imperfect. Even a scan of my own brain would give me far less insight into my mind than direct introspection, because brains are hideously complicated and I’m not well-equipped to model them.
Whether you call that a ‘level’ or ‘type’ of access, I’m still no closer to understanding how Nagel relates the (to me mundane) fact that these types of access exist to the ‘conceptual mystery’ of qualia or consciousness.
The Mary’s Room thought experiment brings it out. Mary has complete access to someone else’s mental state, from the outside, but still doesn’t experience it from the inside.
Imagine a one-in-a-million genetic mutation that causes a human brain to develop a Simulation Centre. The Simulation Centre might be thought of as a massively overdeveloped form of whatever circuitry gives people mental imagery. It is able to simulate real-world physics with the fidelity of state-of-the-art computer physics simulations, video game 3D engines, etc. The Simulation Centre has direct neural connections to the brain’s visual pathways that, under voluntary control, can override the sensory stream from the eyes. So, while a person with strong mental imagery might be able to fuzzily visualise something like a red square, a person with the Simulation Centre mutation could examine sufficiently detailed blueprints for a building and have a vivid photorealistic visual experience of looking at it, indistinguishable from reality.
Poor Mary, locked in her black-and-white room, doesn’t have a Simulation Centre. No matter how much information she is given about what wavelengths correspond to the colour blue, she will never have the visual experience of looking at something blue. Lucky Sue, Mary’s sister, was born with the Simulation Centre mutation. Even locked in a neighbouring black-and-white room, when she learns about the existence of materials that don’t reflect all wavelengths of light but only some wavelengths, Sue decides to model such a material in her Simulation Centre, and so is able to experience looking at the colour blue.
In other words: the Mary’s Room thought experiment seems to me (again, from a Camp 1 perspective) to illustrate that our brains lack the machinery to turn a conceptual understanding of a complex physical system into subjective experience.[1] This seems like a mundane fact about our brains (‘we don’t have Simulation Centres’) rather than pointing to any fundamental conceptual mystery.
- ^
This might just be a matter of degree. Some people apparently can do things like visualise a red square, and it seems reasonable that a person who had seen shapes of almost every colour before but had never happened to see a red square could nevertheless visualise one if given the concept.
Apologies for the repetition, but I’m going to start by restating a slightly updated model of what I think is going on, because it provides the context for the rest of my comment. Basically I still think there are two elements to our disagreement:
The Camp 1 vs Camp 2 disagreement. Camp 1 thinks that a description of the physical system would completely and satisfactorily explain the nature of consciousness and subjective experience; Camp 2 thinks that there is a conceptual element of subjective experience that we don’t currently know how to explain in physical terms, even in principle. Camp 2 thinks there is a capital-H Hard Problem of consciousness, the ‘conceptual mystery’ in Rafael’s post; Camp 1 does not. I am in Camp 1, and as best I can tell you are in Camp 2.
You think that all(?) ‘mental states’ pose this conceptual Hard Problem, including intentional phenomena like thoughts and beliefs as well as more ‘purely subjective’ phenomena like experiences. My impression is that this is a mildly unorthodox position within Camp 2, although as I mentioned in my original comment I’ve never really understood e.g. what Nagel was trying to say about the relationship between mental phenomena being only directly accessible to a single mind and them being Hard to explain, so I might be entirely wrong about this. In any case, because I don’t believe that there is a conceptual mystery in the first place, the question of (e.g.) whether the explanandum is an utterance vs a belief means something very different to me than it does to you. When I talk about locating the explanandum at utterances vs beliefs, I’m talking about the scope of the physical system to be explained. When you talk about it, you’re talking about the location(s) of the conceptual mystery.
What you call “model” here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn’t provide a difference between the two. Explaining the neural correlate is of course just as “easy” as explaining an utterance. The hard problem is to explain actual mental states with their correlates. So the case doesn’t explain the belief/experience in question in terms of this correlate.
As a Camp 1 person, I don’t think that there is any (non-semantic) difference between the observable neurological correlates of a belief or any other mental phenomenon and the phenomenon itself. Once we have a complete physical description of the system, we Camp 1-ites might bicker over exactly which bits of it correspond to ‘experience’ and ‘consciousness’, or perhaps claim that we have reductively dissolved such questions entirely; but we would agree that these are just arguments over definitions rather than pointing to anything actually left unexplained. I don’t think there is a Hard Problem.
It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person.
I take Dennett’s view on p-zombies, i.e. they are not conceivable.
So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn’t explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.
In the Camp 1 view, once you’ve explained the neural correlates, there is nothing left to explain; whether or not you have ‘explained the belief’ becomes an argument over definitions.
I would not count “psychotic” here, since one is not necessarily directly acquainted with it (one doesn’t necessarily know one has it).
Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’?
I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences
I don’t think there is any connection between whether a thought/belief/experience is about something and whether it is explainable. I’m not sure about ‘easier to explain’, but it doesn’t seem like the degree of easiness is a key issue here. I hold the vanilla Camp 1 view that everything the brain is doing is ultimately and completely explainable in physical terms.
or that they are at least more similar to utterances than to experiences
I do think beliefs are more similar to utterances than experiences. If we were to draw an ontology of ‘things brains do’, utterances would probably be a closer sibling to thoughts than to beliefs, and perhaps a distant cousin to experiences. A thought can be propositional (‘the sky is blue’) or non-propositional (‘oh no!’), as can an utterance, but a belief is only propositional, while an experience is never propositional. I think an utterance could be reasonably characterised as a thought that is not content to stay swimming around in the brain but for whatever reason escapes out through the mouth. To be clear though, I don’t think any of this maps on to the question of whether these phenomena are explicable in terms of the physical implementation details of the brain.
So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you more in Camp #2 rather than #1.
I think there is an in-principle difference between Camp 1 ‘accepting beliefs [or utterances] as explanandum’ and Camp 2 ‘accepting experiences as explanandum’. When you ask ‘What counts as a simple explanandum such that we would not run into hard explanatory problems?’, I think the disagreement between Camp 1 and Camp 2 in answering this question is not over ‘where the explanandum is’ so much as ‘what it would mean to explain it’.
It might help here to unpack the phrase ‘accepting beliefs as explanandum’ from the Camp 1 viewpoint. In a way this is a shorthand for ‘requiring a complete explanation of how the brain as a physical system goes from some starting state to the state of having the belief’. The belief or utterance as explanandum works as a shorthand for this for the reasons I mentioned above, i.e. that any explanation that does not account for how the brain ended up having this belief or generating this utterance is not a complete and satisfactory explanation. This doesn’t privilege either beliefs or utterances as special categories of things to be explained; they just happen to be end states that capture everything we think is worth explaining about something like ‘having a headache’ in particular circumstances like ‘forming a belief that I have a headache’ or ‘uttering the sentence “I have a headache”’.
By analogy, suppose that I was an air safety investigator investigating an incident in which the rudder of a passenger jet went into a sudden hardover. The most appropriate explanandum in this case is ‘the rudder going into a sudden hardover’, because any explanation that doesn’t end with ‘...and this causes the rudder to go into a sudden hardover’ is clearly unsatisfactory for my purposes. Suppose I then conduct a test flight in which the aircraft’s autopilot is disconnected from the rudder, and discover a set of conditions that reliably causes the autopilot to form an incorrect model of the state of the aircraft, such that if the autopilot was connected to the rudder it would command a sudden hardover to ‘correct’ the situation. It seems quite reasonable in this case for the explanandum to be ‘the autopilot forming an incorrect model of the state of the aircraft’. There is no conceptual difference in the type of explanation required in the two cases. They can both in principle be explained in terms of a physical chain of events, which in both cases would almost certainly include some sequence of computations inside the autopilot. The fact that the explanandum in the second case is a propositional representation internal to the autopilot rather than a physical movement of a rudder doesn’t pose any new conceptual mysteries. We’re just using the explanandum to define the scope of what we’re interested in explaining.
This is distinct from the Camp 2 view, in which even if you had a complete description of the physical steps involved in forming the belief or utterance ‘I have a headache’, there would still be something left to explain, namely the subjective character of the experience of having a headache. When the Camp 2 view says that the experience itself is the explanandum, it does privilege subjective experience as a special category of things to be explained. This view asserts that experience has a property of subjectiveness that in our current understanding cannot be explained in terms of the physical steps, and it is this property of subjectiveness itself that demands a satisfactory explanation. When Camp 2 point to experience as explanandum, they’re not saying ‘it would be useful and satisfying to have an explanation of the physical sequence of events that lead up to this state’; they’re saying ‘there is something going on here that we don’t even know how to explain in terms of a physical sequence of events’. Quoting the original post, in this view ‘even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding.’
Mental states do not need to be “about” something, but it is pretty clear they can be.
I’m still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’.
I agree that mental states do not need to be about something, but I think beliefs do need to be about something and thoughts can be about something (propositional in the way you describe). I don’t think an experience can be propositional. I don’t understand how this relates to whether these particular mental states are able to be explained.
It is clear that utterances are caused by mouth movements which are caused by neurons firing, but it is not clear how neurons could “cause” a belief, or how to otherwise (e.g. reductively) explain a belief.
My best account for what is going on here is that we have two interacting intuitive disagreements:
The ‘ordinary’ Camp 1 vs 2 disagreement, as outlined in Rafael’s post, where we disagree about where the explanandum lies in the case of subjective experience.
A disagreement over whether whatever special properties subjective experience has also extend to other mental phenomena like beliefs, such that in the Camp 2 view there would be a Hard Problem of why and how we have beliefs analogous to or identical with the Hard Problem of why and how we have subjective experience.
Does this account seem accurate to you?
Huh, this is interesting. I wouldn’t have suspected this to be the crux. I’m not sure how well this maps to the Camp 1 vs 2 difference as opposed to idiosyncratic differences in our own views.
In your view however, if I’m not misunderstanding you, beliefs are more similar to utterances than to experiences. So while I think beliefs are equally hard to explain as experiences, in your view beliefs are about as easy to explain as utterances. Is this a fair characterization?
This is a fair characterisation, though I don’t think ease of explanation is a crucial point. I would certainly say that beliefs are more similar to utterances than to experiences. To illustrate this, sitting here now on the surface of Earth I think it’s possible for me to produce an utterance that is about conditions at the centre of Jupiter, and I think it’s possible for me to have a belief or a thought that is about conditions at the centre of Jupiter, and all of these could stand in a truth relation to what conditions are actually like at the centre of Jupiter. I don’t think I can have an experience that is about conditions at the centre of Jupiter. Strictly, I don’t think I can have an experience that is ‘about’ anything. I don’t think experiences are models of the world, in the way that utterances, beliefs, and thoughts can be. This is why I would agree that it is not possible to be mistaken about an experience, though in everyday language we often conflate experiences with claims about the world that do have truth values (‘it looks red’ almost always means ‘I believe it is actually red’, not ‘when I look at it I experience seeing red but maybe that’s just a hallucination’).
I find the following obvious: thoughts or beliefs are on the same subjective level as experiences,
What do you see as the important difference between ‘subjective’ and ‘objective’? Is subjectivity about who has access to a phenomenon, or is it a quality of the phenomenon itself?
The reason I think utterances are “easy” to explain is that they are physical events and therefore obviously allow for a mechanistic third-person explanation. The explanation would not in principle be different from explaining a simple spinal reflex. Nerve inputs somehow cause nerve outputs, except that for an utterance there are orders of magnitude more neurons involved, which makes the explanation much harder in practice. But the principle is the same.
I agree with this.
It is unclear how to even grasp subjective beliefs and experiences in a mechanical language of cause and effect.
If for the sake of argument we strike out ‘beliefs’ here and make it just about experiences, this seems to be a restatement of the Camp 1 vs 2 distinction. As a Camp 1 person, a mechanical explanation of whatever chain of events leads me to think or say that I have a headache would fully dissolve the question. I wouldn’t feel that there is anything left to explain. From what I understand of Camp 2, even given such an explanation they would still feel there is something left to explain, namely how these objective facts come together to produce subjective experience.
Okay, so you are saying that in the first-person case, the evidence for having a headache is not itself the experience of having a headache, but the belief that you have the experience of having a headache.
Not quite. I would say that in the first-person case, the explanandum – the thing that needs to be explained – is the belief (or thought, or utterance) that you have the experience of having a headache. Once you have explained how some particular set of inputs to the brain led to that particular output, you have explained everything that is going on, in the Camp #1 view. Quoting the original post, in the Camp #1 view ‘if we can explain exactly why you, as a physical system, uttered the words “I experienced X”, then there’s nothing else to explain.’
So according to you, one could be wrong about currently having a headache, namely when the aforementioned belief is false, when you have the belief but not the experience. Is this right?
I would actually agree that ‘you can’t be mistaken about your own current experiences’, but I think the problem Rafael’s post points out is that Camp #1 and Camp #2 would understand that to mean different things.
Intuitively it doesn’t seem possible to be wrong about one’s own current mental states.
I’m a bit confused about what you mean by ‘mental states’. It’s certainly possible to be wrong about one’s own current mental state, as I understand the term; people experiencing psychosis usually firmly believe they are not psychotic. I don’t think the two Camps would disagree on this.
The three examples you mention, of having a headache, being depressed (by which I assume you mean feeling down rather than the psychiatric condition specifically), and feeling pain, all seem like examples of subjective experiences. Insofar as this paragraph is saying ‘it’s not possible to be wrong about your own subjective experience’, I would agree, with the caveat as above that what I think this means might be different to what a Camp #2 person thinks this means.
So the explanandum, the evidence, would in both cases be something mental. But it seems you require the explanandum to be something “objective”, like an utterance.
I don’t require the explanandum to be an utterance, and I don’t think there’s any important sense in which an utterance is more objective than a thought or belief. My original comment was intended only to point out that in the first-person case you have privileged access to certain data, namely the contents of your own mind, that you don’t have in the third-person case. The reasons for this are completely mundane and conditional on the current state of affairs, namely that we currently have no practical way of accessing the semantic content inside each other’s skulls other than via speech. It’s possible to imagine technology that might change this state of affairs, like a highly accurate thought-reading device for example.
I do think the explanandum is required to be an output, because being able to explain or predict the output is the test of your model of what is going on. If you predict ‘this person is going to say they don’t have a headache’, and the person says ‘I have a headache’, then there’s something wrong with your model.
You’re absolutely right that this is the more interesting case. I intentionally chose the past tense to make it easier to focus on the details of the example rather than the Camp #1/Camp #2 distinction per se. For completeness, I’ll try to recapitulate my understanding of Rafael’s account for the present-tense case ‘I have a headache right now’.
From my Camp #1 perspective, any mechanistic description of the brain that explained why it generated the thought/belief/utterance ‘I have a headache right now’ instead of ‘I don’t have a headache right now’ in response to a given set of inputs would be a fully satisfying explanation. Perhaps it really is impossible for a human brain to generate the output ‘I have a headache right now’ without meeting some objective definition of a headache (some collection of facts about sensory inputs and brain state that distinguishes a headache from e.g. a stubbed toe), but there doesn’t seem to be any reason why this impossibility could not be a mundane fact conditional on the physical details of human brains. The brain is taking some combination of inputs, which might include external sensory data as well as introspective data about its own state, and generating a thought/belief/utterance output. It doesn’t seem impossible in principle that, by tweaking certain connections or using TMS or whatever, the mapping between these inputs and outputs could be altered such that the brain reliably generates the output ‘I don’t have a headache right now’ in situations where the chosen objective definition of ‘having a headache’ holds true. So, for Camp #1 the explanandum really is the output ‘I have a headache right now’. (The purpose of my comment was to expand the definition of ‘output’ to explicitly include thoughts and beliefs as well as utterances, and to acknowledge that the inputs in the case ‘I have a headache’ really are different to those in the case ‘John says he has a headache’.)
Camp #2 would say that it is impossible even in principle to be mistaken about the experience of having a headache. They might say it is impossible to meaningfully define ‘having a headache’ only in terms of sensory and/or introspective inputs to the brain. In their view, there is a sort of hard, irreducible kernel of experiencing-a-headache-subjective-qualia-stuff which is closely entangled with the objective inputs and outputs (they would agree that you are more likely to experience a headache if you were hit on the head with a hammer, and more likely to say ‘I have a headache’ if you were experiencing a headache), but nevertheless exists independently of and in addition to these objective facts and is not reducible to an account of only the inputs, outputs, and mapping between them. The explanandum, in their view, is the subjective-qualia-stuff. Camp #2 would fully admit that it’s really difficult to pin down the nature of the subjective-qualia-stuff; that’s why it’s a Hard Problem.
I’ve done my best here to represent Camp #2 accurately, but it’s difficult because their perspective is very alien to me. Apologies in advance to any Camp #2 people and happy to hear your corrections.
This is a clear and convincing account of the intuitions that lead to people either accepting or denying the existence of the Hard Problem. I’m squarely in Camp #1, and while I think the broad strokes are correct there are two places where I think this account gets Camp #1 a little wrong on the details.
According to Camp #1, the correct explanandum is still “I claim to have experienced X” (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words “I experienced X”, then there’s nothing else to explain. […] In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they’re epistemic bedrock, whereas for Camp #1, they’re model outputs of your brain, and like all model outputs of your brain, they can be wrong.
I think this is conflating two different senses of ‘claim’. The first sense is the interpersonal or speech sense: John makes a claim to you about his internal experience, in the form of speech. In this sense, ‘John claims to have a headache’ is the correct explanandum, in the Camp #1 view, of John telling you he has a headache, because it’s the closest thing to John’s actual experience that you have access to.
However, there is something different going on in the case where you yourself seem to have had an experience. You can believe you have had a certain experience without telling anybody about it, or without even uttering the words ‘I experienced X’ into an empty room, so the interpersonal or speech sense of ‘claim’ doesn’t really seem to apply. This only leaves us with the sense of ‘making a claim to yourself’, which might more precisely be called ‘thinking’ or ‘believing’.
Even in the Camp #1 view, there really is something different about a claim you make to yourself. You have privileged access to the contents of your own mind that you don’t have to the contents of other people’s minds, by virtue of the mundane physical fact that the neurones in your brain are connected to the other neurones in your brain but not to the neurones in other people’s brains. Even if you don’t utter the words ‘I experienced X’, there is still something to be explained that lies between ‘actually experiencing X’ and ‘claiming in speech to have experienced X’: why did you have the thought or belief ‘I experienced X’, instead of ‘I didn’t experience X but it would be useful for me to lie about it’? The explanandum in the case of your own experience is located a little deeper than it is in the case of the experiences of others. You can still be wrong about the underlying reality of your experiences – perhaps the memory of having a headache was falsely implanted with nefarious technology – but you have access to a type of evidence about it that John does not.
(I’ve never been able to figure out if Thomas Nagel, in ‘What is it like to be a bat?’, believes that the mere existence of this sort of privileged evidence about one’s own experiences tells us something about the nature of qualia/subjectivity. He says ‘The point of view in question is not one accessible only to a single individual. Rather it is a type.’ But, from my Camp #1 perspective, he never seems to explain what the difference is.)
So consciousness will be a densely connected part of this network – no more, no less – and it will have fuzzy boundaries because there is, ultimately, no ground truth as to what does or doesn’t constitute consciousness.
Perhaps this is overly nit-picky, but I don’t believe Camp #1 intuitions imply that consciousness is or arises from a particular ‘part’ of the brain, in the sense that you could say ‘it comes from the neurones in this region’ or ‘it comes from the subset of neurones lighting up on this fMRI’, even allowing fuzzy boundaries. There’s no reason to expect the physical substrate of the brain, or even the network topology of its connections, to always map straightforwardly to some feature or property of the mind, and particularly not for more abstract and higher-level properties. Sometimes there is such an obvious mapping (e.g. visual pathways), but there’s no more reason to expect that there is a ‘consciousness part of the brain’ than a ‘reasoning part of the brain’ or an ‘optimising part of the brain’; it might just be a thing that the whole brain is or does. By analogy, you might be able to point to a particular bit of circuitry in a computer that processes raw data from a camera sensor, but you can’t point to any one part and say ‘this is where the operating system comes from’.
The upshot is the same: Camp #1 will view consciousness as an ‘inherently fuzzy phenomenon’. We might just find it to be even fuzzier than you suggest here.
Surely you don’t think that’s the right moral category for ethical veganism?
I don’t really understand what you’re asking here. How would you describe the moral category you’re referring to, and why do you think it doesn’t or shouldn’t apply to veganism?
Thoughts on why the post gave (me) the impression it did, in no particular order:
‘Trade-offs’ is broad and vague, and the post didn’t make a lot of detailed claims about vegan nutrition. This makes sense in the context of you trying to communicate the detailed facts previously, but coming to the post without that context made it hard to tell if you were just making an unobjectionable claim or trying to imply something broader.
Some statements struck me as technically true but hyperbolic. Examples:
You can get a bit of all known nutrients from plants and fortified products, and you can find a vegan food that’s at least pretty good for every nutrient, but getting enough of all of them is a serious logic puzzle unless you have good genes.
[...]
Some people are already struggling to feed themselves on an omnivore diet, and have nothing to replace meat if you take it away.
[...]
If vegans are equally healthy but are spending twice as much time and money on food, that’s important to know.
[...]
If there are three vegan sources and you’re allergic to all of them, you need animal products.
I was very confused by what you were hoping to learn from an RCT or ‘good study’, and my impression of your in-office nutritional testing was that you were trying to gather new primary data about vegan nutrition. Because the basic facts about the risk of nutrient deficiencies in veganism seemed uncontroversial, my interpretation of that was that you thought there were other and potentially more significant ‘trade-offs’ that might exist at the level-of-evidence gap between an RCT or in-office testing and e.g. the introduction to the Wikipedia article on veganism.
The implied model for the relationship between diet and health felt...off, or at least different from my own. I tend to think of diet as an input to a set of homeostatic processes, which are generally robust but can be slowly pushed off-balance by sustained problems and usually fixed with gentle correction. This post seemed to model diet as an all-or-nothing ‘logic puzzle’ to be either solved or, more likely, failed.
...what you think could be done to convey the important, true points with as little animosity-due-to-misreading as possible
Beyond the points above, I genuinely don’t know. I’m probably not the intended audience anyway, since that seems to be vegans or potential vegans who don’t already know about the risks of nutrient deficiencies. The only thing I can think to contribute here is: if you’ve tried presenting the basic facts of the matter, and experienced pushback for it, does that necessarily mean that just presenting the facts is the wrong strategy? It’s a charged topic with a polluted local memespace, so some level of malicious or confused pushback is guaranteed; it’s what you would expect to see even when you are communicating well.
That is a frustrating situation. As you note in the introduction this is a charged topic that tends to lead to poor discussions, so you deserve credit for wading in anyway.
Given the context I laid out, is there anything I could have done to create a more productive discussion with you, personally?
I’m not sure. The discussion in this comment thread (and others) has been productive in the sense that I now have a much better understanding of your position and the context. In terms of the original post, I don’t know if a one-sentence summary would have changed much; given my impression of the post it might have looked like an attempt to motte-and-bailey. I could try to break down why the post gave me the impression it did, if you think that would be useful.
Thanks, this and your comment here helped a lot to clarify your position and intentions. My initial impression was similar to Natália’s, i.e. that you believed something more like point 3.
Re. point 2, by “widely recognised” (and similarly for “widespread” in point 3) I meant something like “widely recognised in relevant academic literature/textbooks/among experts” rather than “among people who have ever tried a vegan diet”. My impression is that on this definition you wouldn’t endorse point 2 either.
We may still disagree on the “importance” of point 1, although to be clear I completely support any effort to inform vegans or potential vegans about the risk of nutrient deficiencies. It’s probably not possible or worth the effort to resolve this disagreement, but it does make me wonder about:
and:
Could at least some of these encounters be explained by a similar “disagreement about importance”, as opposed to disagreement about the basic facts? That might explain why these exchanges seemed obfuscated or un-cooperative; you thought they were evading obvious facts, while they thought you were making mountains out of (what they saw as) molehills.
I don’t doubt the Faunalytics data. If anything the number seems surprisingly low, considering it comes from self-reporting among people who went on to quit veganism.
I’m not sure how to weigh ‘importance’ other than subjectively, but I’ll attempt to at least put bounds on it. As a floor, some number of people experience health issues that are important enough to them that they are motivated to quit veganism. As a ceiling, the health risks of veganism are less important than those of other harms related to diet – for example, dyslipidaemia or diabetes – that increase mortality, given that veganism doesn’t seem to increase mortality and may reduce it.
My stance at the moment is still more ‘generally confused about what you’re trying to communicate/achieve’ than ‘disagreeing with a particular claim you’re making’. I’d like to close the inferential distance if possible, but feel free to ignore this comment if you don’t think it’s leading anywhere useful.
I still don’t understand which of the following (if any) you would endorse:
Many vegans don’t know about the risk of nutrient deficiencies and would benefit from this knowledge (in which case I’m still confused why you wrote this post instead of just presenting this information)
The health harms of nutrient deficiencies in veganism are more serious than is widely recognised
There are serious health harms of veganism other than the risk of nutrient deficiencies that are known but this knowledge is not widespread, or that we don’t know about yet but have good reason to suspect exist
I agree with this, though I would expand
there are plenty of points on this axis where people will seek help for heart-attacks but will be pessimistic about getting help with “vaguely feeling tired lately”
to note that heart attacks are on average a much more serious problem than vague fatigue, so the fact that people are more likely to see a doctor about the former is a good thing. People will generally self-select whether to seek medical help by the severity of the problem, and to the extent that they don’t, veganism is probably the least of their worries.
I’m confused – the issues you mention seem both important and, in most cases, extremely easy to fix. If there’s a large population that is going vegan without the steps you mention (and my informal survey says there is), it seems high value to alert them to the necessity.
I suspect we have different intuitions as a matter of degree for ‘important’, ‘high value’, and ‘necessity’ here. Despite that, I think we would probably agree on a statement like ‘vegans who are not aware that their diet increases the risk of nutrient deficiencies would benefit from learning about this’. If you had posted something like ‘PSA: if you are vegan, you might not know you are at increased risk of certain nutrient deficiencies; read (this link) to find out more and see your doctor if you have (list of symptoms) or want to get tested’, I would have thought this was a good idea. What confuses me is why you wrote the post you did instead, which seems to be gesturing at a larger problem. As a specific example, in ‘Evidence I’m looking for’, you wrote:
The ideal study is a longitudinal RCT […] I’ve spent several hours looking for good studies on vegan nutrition.
While more high-quality evidence on veganism would be valuable in general, I’m confused by what you expect to learn from such an RCT or study. Do you think there is reason to believe there are significant health harms of veganism that we don’t know about yet? If so, why?
I’ve been trying to figure out why I feel like I disagree with this post, despite broadly agreeing with your cruxes. I think it’s because in the act of writing and posting this there is an implicit claim along the lines of:
Subtle nutritional issues that are specific to veganism can cause significant health harms, to a degree that it is worth spending time and energy thinking about this as ‘a problem’.
For context, I am both vegan and a doctor. Nutrient deficiencies are common and can cause anything ranging from no symptoms to vague symptoms to life-threatening diseases. (For simplicity, I’m going to focus on deficiencies only, although of course there are other ways diet can affect health.) They are generally well-understood, can be detected with cheap laboratory tests, and have cheap and effective treatments.
Veganism is a known risk factor for some nutrient deficiencies, particularly B12 and iron. Many vegans, including myself, will routinely get blood tests to monitor for these deficiencies. If detected, they can be treated with diet changes, fortified foods, oral supplementation, or intramuscular/intravenous supplementation.
Some vegans don’t know about this, and they might end up with a nutrient deficiency. They might be asymptomatic, or they might develop symptoms, and if they go to a doctor with those symptoms the doctor will (hopefully) figure out the problem and recommend a solution. If they don’t go to a doctor, it could be either because the problem is minor enough that they can’t be bothered, or because they generally don’t seek medical help when they are seriously unwell, in which case the risk from something like B12 deficiency is negligible compared to e.g. the risk of an untreated heart attack. Many people don’t have good access to medical care, but this is a problem orthogonal to veganism, and for these people veganism is unlikely to be their most important health concern.
Beyond these well-known issues, is there any reason to expect veganism in particular to cause any health harms worth spending time worrying about? People have vague symptoms all the time, and perhaps some of these are related to veganism. They might also be related to microplastics, or pathogens, or antibiotics in the meat supply, or who knows what. As far as I’m aware, there is no mysterious syndrome or increased mortality rate among vegans that is currently going unexplained.
Let’s suppose that people do the exact opposite of what you recommend: proselytise for veganism without mentioning the risk of nutrient deficiencies; fail to suggest dietary issues when discussing health problems; make false claims about vegan nutrition. If you take out the references to veganism, this is just the current state of the world. People advertise their fast food restaurants and feed their children sugary breakfast cereals without caveats about the risk of heart disease or diabetes. People give each other folk medical advice and swap half-baked ideas about supplements and fad diets. Much of the work of public health and medicine is preventing, screening for, and fixing the problems caused by this. ‘People should have better health knowledge’ is broadly laudable and agreeable, but it’s not a claim about veganism.
In summary:
The potential health harms associated with veganism are well understood, easily detected, and easily treated
To the extent that vegans don’t know about them, this is not a problem specific to veganism nor is it one where veganism in particular is likely to be causing significant health harms
There is no reason to suspect veganism in particular is causing as yet undiscovered harms
Have you considered melatonin? Quoting gwern:
Melatonin allows us a different way of raising the cost, a physiological & self-enforcing way. Half an hour before we plan to go to sleep, we take a pill. The procrastinating effect will not work—half an hour is so far away that our decision-making process & willpower are undistorted and can make the right decision (viz. following the schedule). When the half-hour is up, the melatonin has begun to make us sleepy. Staying awake ceases to be free, to be the default option; now it is costly to fight the melatonin and remain awake.
I use it for exactly this reason and it works brilliantly.
This is like saying “if evolution wants a frog to appear poisonous, the most efficient way to accomplish that is to actually make it poisonous”. Evolution has a long history of faking signals when it can get away with it. If evolution “wants” you to signal that you care about the truth, it will do so by causing you to actually care about the truth if and only if causing you to actually care about the truth has a lower fitness cost than the array of other potential dishonest signals on offer.
I wouldn’t say either of these things. A quick and easy treatment like B12 replacement is not mutually exclusive with a long-term and difficult treatment like diet modification. (This is not an abstract question for me; prescribing a statin and counselling on lifestyle changes are both things I do several times a week, and of the two, the script is orders of magnitude easier for both me and the patient, but we’ll usually do both in parallel when treating dyslipidaemia.)
As I said earlier in the thread, I’m all in favour of you or anybody else spending time on making people aware of the risk of nutrient deficiencies associated with veganism and what to do about them. (Again, this is not an abstract issue to me; I routinely discuss, screen for, and treat nutrient deficiencies with vegan and vegetarian patients.) I do recognise that you’ve had some bad experiences doing this, which is unfair.
I’m not sure if you chose this example intentionally, but for what it’s worth: Oreos are vegan.