But that’s just a starting point, and he then moves in a direction that’s very far from any kind of LW consensus.
If he says:
“In this essay I’ll argue strongly for a different perspective: that we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question [scanning of minds possible?] is yes, and other such worlds where the answer is no.”
and he’s right, then LW consensus is religion (in other words, you made up your mind too early).
I’m not quite sure what you mean here. Do you mean that if he’s right, then LW consensus is wrong, and that makes LW consensus a religion?
That seems both wrong and rather mean to both LW consensus and religion.
LW consensus is not necessarily wrong, even if Scott is right. However, making up your mind on unsettled empirical questions (which is what LW has done, if Scott is right) is a dangerous practice.
I found the phrasing “he then moves in a direction that’s very far from any kind of LW consensus” broadly similar to “he’s not accepting the Nicene Creed, good points though he may make.” Is there even a non-iffy reason to say this about an academic paper?
I was trying to position the paper in terms of LW opinions, because my target audience was LW readers. (That’s also the reason I mentioned the tangential Eliezer reference.) It’s beneath my dignity to list all the different philosophical questions where my opinion differs from LW consensus, so let’s just say that I used the term as a convenient reference point rather than a creed.
Why?
Presumably, he wanted some relatively quick way to tell people why he was posting it to lesswrong, and what they should expect from it.
However, making up your mind on unsettled empirical questions
Either this is self-contradictory, or it means ‘never be wrong’. If you’re always right, you’re making too few claims and therefore being less effective than you could be. Being wrong doesn’t mean you’re doing it wrong.
As for iffiness, I read that phrase more as “Interesting argument ahead!”
Either this is self-contradictory, or it means ‘never be wrong’.
I think if you are making up your mind on unsettled empirical questions, you are a bad Bayesian. You can certainly make decisions under uncertainty, but you shouldn’t make up your mind. And anyways, I am not even sure how to assign priors for the upload fidelity questions.
In that case, you’re the one who made the jump from ‘goes against consensus’ to ‘this was assigned 0 probability’. If we all agreed that some proposition was 0.0001% likely, then claiming that this proposition is true would seem to me to be going against consensus.
Ok, what exactly is your posterior belief that uploads are possible? What would you say is the average LW posterior belief on the same question? Where did this number come from? How much ‘cognitive effort’ is spent at LW thinking about the future where uploads are possible vs. the future where uploads are not possible?
To answer the last question first—not a heck of a lot, but some. It was buried in an ‘impossible possible world’, but lack of uploading was not what made it the impossible possible world, so that doesn’t mean that it’s considered impossible.
To answer your questions:
-- Somewhere around 99.5% that it’s possible for me. The reasons for it to be possible are pretty convincing.
-- I would guess that the median estimate of likelihood among active posters who even have an estimate would be above 95%, but that’s a pretty wild guess. Taking the average would probably amount to a bit less than the fraction of people who think it’ll work, so that’s not very meaningful. My estimate of that is rough—I checked the survey, but the most applicable question was cryonics, and of course cryonics can be a bad idea even if uploading is possible (if you think that you’ll end up being thawed instead of uploaded). And of course if you somehow think you could be healed instead of uploaded, it could go the other way. 60% were on the fence or in favor of getting cryonically preserved, which means they think that the total product of the cryo Drake equation is noticeable. Most cryo discussions I’ve seen here treat organization as the main problem, which suggests that a majority consider recovery a much less severe problem. Being pessimistic for a lower bound on that gives me 95%.
-- The part of uploading most likely to fail is the scanning. Existing scanning technology can take care of anything as large as a dendrite (though in an unreasonably large amount of time). So, for uploading to be impossible, it would have to require either dynamical features, or features which would necessarily be destroyed by any fixing process, with no other viable mechanism available.
The former seems tremendously unlikely, because personality can recover from some pretty severe shocks to the system like electrocution, anaerobic metabolic stasis, and inebriation (or other neurotoxins). I’d say that there being some relevant dynamical process that contains crucial non-deducible information is maybe 1 in 100,000, ballpark. Small enough that it’s not significant.
The latter seems fairly unlikely as well—suppose plastination or freezing erases some dendritic state, and that state encodes personality information. It seems very unlikely indeed that there’s literally no way around this at all—that no choice of fixing method possible within the laws of physics would work. Maybe one in 20 that we can’t recover that state… and maybe one in 20 that it was vital to determining long-term psychological features (for the reasons outlined above, though weakened since we’re allowing that this is not transient, just fragile). These are order-of-magnitude figures.
Certainly, our brains are far larger than they need to be, and so it seems like you’re not going to run into the limits of physics. Heisenberg is irrelevant, and the observer effect won’t come and bite you at full strength because you have probes much less energetic than the cells in question. If nothing else, you should be able to insinuate something into the brain and measure it that way.
But of course I could have screwed up my reasoning, which accounts for the rest of the 0.5%. Maybe the brain is sufficiently fragile that you’re going to lose a lot when you poke it hard enough to get the information out. I doubt it to the tune of 199:1. As a check, I would feel comfortable taking a 20:1 bet on the subject, and not comfortable with a 2000:1 bet on it.
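To make the arithmetic explicit, here is a rough back-of-the-envelope tally in Python (a minimal sketch using only the ballpark figures above; the residual term is simply whatever of the 0.5% is left over for the “my reasoning is wrong” case):

```python
# Rough tally of the failure modes sketched above (illustrative only, not a rigorous model).
p_dynamical = 1e-5                 # crucial info lives only in a transient dynamical process
p_unfixable = (1 / 20) * (1 / 20)  # state erased by any fixing process AND vital to identity
p_total_doubt = 0.005              # overall allowance for uploading being impossible (~0.5%)
p_my_error = p_total_doubt - p_dynamical - p_unfixable  # residual "I reasoned wrong" mass

print(f"P(upload impossible) ~ {p_total_doubt:.3%}")      # ~0.500%
print(f"P(upload possible)   ~ {1 - p_total_doubt:.3%}")  # ~99.500%
print(f"breakdown: dynamical {p_dynamical:.4%}, unfixable {p_unfixable:.3%}, "
      f"reasoning error {p_my_error:.3%}")
```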
~~~~
Of course, the real reason that we don’t talk too much about what happens if uploading isn’t possible is that that would just make the future that much more like the present. We know how to deal with living in meat bodies already. If it works out that that’s the way we’re stuck, then, well, I guess we don’t need to worry about em catastrophes, and any FAI will really want to work on its biotech.
Ok—thanks for a detailed response. To be honest, I think you are quibbling. If your posterior is 99.5% (or 95% if being pessimistic), you made up your mind essentially as far as a mind can be made up in practice. If the answer to the upload question depends on an empirical test that has not yet been done (because of lack of tech), then you made up your mind too soon.
Of course, the real reason that we don’t talk too much about what happens if uploading isn’t possible is that that would just make the future that much more like the present.
I think a cynic would say you talk about the upload future more because it’s much nicer (e.g. you can conquer death!)
If your posterior is 99.5% (or 95% if being pessimistic), you made up your mind essentially as far as a mind can be made up in practice. If the answer to the upload question depends on an empirical test that has not yet been done (because of lack of tech), then you made up your mind too soon.
These two statements clash very strongly. VERY strongly.
They don’t. 99.5% is far too much.
If you can predict the outcome of the empirical test with that degree of confidence or a higher one, then they’re perfectly compatible. We’re talking about what’s physically possible with any plan of action and physically possible capabilities, not merely what can be done with today’s tech. The negative you’re pushing is actually a very, very strong nonexistence statement.
I would guess that he thinks that the probability of this hypothetical—worlds in which brain scanning isn’t possible—is pretty low (based on having discussed it briefly with him). I’m sure everyone around here thinks it is possible as well; it’s just a question of how likely it is. It may be worth fleshing out the perspective even if it is relatively improbable.
In particular, the probability that you can’t get a functional human out of a brain scan seems extremely low (indeed, basically 0 if you interpret “brain scan” liberally), and this is the part that’s relevant to most futurism.
Whether there can be important aspects of your identity or continuity of experience that are locked up in uncopyable quantum state is more up for grabs, and I would be much more hesitant to bet against that at 100:1 odds. Again, I would guess that Scott takes a similar view.
Hi Paul. I completely agree that I see no reason why you couldn’t “get a functional human out of a brain scan”—though even there, I probably wouldn’t convert my failure to see such a reason into a bet at more than 100:1 odds that there’s no such reason. (Building a scalable quantum computer feels one or two orders of magnitude easier to me, and I “merely” staked $100,000 on that being possible—not my life or everything I own! :-) )
Now, regarding “whether there can be important aspects of your identity or continuity of experience that are locked up in uncopyable quantum state”: well, I regard myself as sufficiently confused about what we even mean by that idea, and how we could decide its truth or falsehood in a publicly-verifiable way, that I’d be hesitant to accept almost ANY bet about it, regardless of the odds! If you like, I’m in a state of Knightian uncertainty, to whatever extent I even understand the question. So, I wrote the essay mostly just as a way of trying to sort out my thoughts.
we can easily imagine worlds consistent with quantum mechanics (and all other known physics and biology) where the answer to the question [scanning of minds possible?] is yes, and other such worlds where the answer is no
If it is so easy, could someone please explain the main idea to me in less than 85 pages?
(Let’s suppose that the scanned mind does not have to be an absolutely perfect copy; that differences as big as the difference between me now and me 1 second later are acceptable.)
Absolutely, here’s the relevant quote:
“The question also has an “empirical core” that could turn out one way or another, depending on details of the brain’s physical organization that are not yet known. In particular, does the brain possess what one could call a clean digital abstraction layer: that is, a set of macroscopic degrees of freedom that
(1) encode everything relevant to memory and cognition,
(2) can be accurately modeled as performing a classical digital computation, and
(3) “notice” the microscopic, quantum-mechanical degrees of freedom at most as pure random-number sources, generating noise according to prescribed probability distributions?”
You could do worse things with your time than read the whole thing, in my opinion.
Thank you for the quote! (I tried to read the article, but after a few pages it seemed to me the author makes too many digressions, and I didn’t want to know his opinions on everything, only on the technical problems with scanning brains.)
Do I understand it correctly that the question is, essentially, whether there exists a more efficient way of modelling the brain than modelling all particles of the brain?
Because if there is no such efficient way, we can probably forget about running the uploaded brains in real time.
Then, even assuming we could successfully scan the brains, we could get some kind of immortality, but we could not get greater speed, or make life cheaper… which is necessary for the predicted economic consequences of “ems”.
Some smaller economic impacts could still be possible, for example if a person were so miraculously productive that even running them at 100× slower speed and 1000× higher cost would still be worthwhile. (Not easy to imagine, but technically not impossible.) Or perhaps if the quality of life increases globally, the costs of real humans could grow faster than the costs of emulated humans, so at some point emulation could become economically meaningful.
Still, my guess is that there probably is a way to emulate the brain more efficiently, because it is a biological mechanism made by evolution, so it has a lot of backwards compatibility and chemistry (all those neurons have metabolism).
Do I understand it correctly that the question is, essentially, whether there exists a more efficient way of modelling the brain than modelling all particles of the brain?
I don’t presume to speak for Scott, but my interpretation is that it’s not a question of efficiency but of fidelity (that is, it may well happen that classical sims of brains are closely related to the brain/person scanned but aren’t the same person, or indeed aren’t a person of any sort at all; quantum sims are impossible due to no-cloning).
For more detailed questions I am afraid you will have to read the paper.
No, his thesis is that it is possible that even a maximal upload wouldn’t be human in the same way. His main argument goes like this:
a) There is no way to find out the universe’s initial state, thanks to no-cloning, the requirement of low entropy, and there being only one copy.
b) So we have to talk about uncertainty about wavefunctions—something he calls Knightian uncertainty (roughly, a probability distribution over probability distributions; see the toy sketch after this list).
c) It is conceivable that particles in which the Knightian uncertainties linger (i.e., they have never interacted with anything macroscopic enough for decoherence to happen) mess around with us, and it is likely that our brain, and only our brain, is sensitive enough to a single photon for that to change how it would otherwise interact (he proposes Na-ion pathways).
d) We define “non-free” as something that can be predicted by a superintelligence without destroying the system (i.e., you can mess around with everything else if you want, though within reasonable bounds whose interior we can see extensively).
e) Because of Knightian uncertainty it is impossible to predict people, if such an account is true.
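(As a toy illustration of that term, and not anything from the paper: under ordinary uncertainty you hold one trusted distribution over outcomes, while under Knightian uncertainty you are uncertain about the distribution itself, so the best you can report is a range rather than a number. A minimal Python sketch with made-up numbers:)

```python
# Toy contrast between ordinary and Knightian uncertainty (made-up numbers, illustrative only).
ordinary = {"spike": 0.7, "no_spike": 0.3}   # one trusted distribution -> a point prediction

# Knightian case: several candidate distributions, with no trusted weighting over them.
candidates = [
    {"spike": 0.2, "no_spike": 0.8},
    {"spike": 0.5, "no_spike": 0.5},
    {"spike": 0.9, "no_spike": 0.1},
]

print("ordinary prediction:", ordinary["spike"])                       # 0.7
print("Knightian prediction:", (min(d["spike"] for d in candidates),
                                max(d["spike"] for d in candidates)))  # (0.2, 0.9): only a range
```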
My disagreements (well, not quite—more, why I’m still compatibilist after reading this):
a) Predictability is different from determinism—his argument never contradicts determinism (modulo prob dists, but we never gave a shit about that anyway) unless we consider Knightian uncertainties ontological rather than epistemic (and I should warn you that physics has a history of things making the jump from one to the other rather suddenly). And if it’s not deterministic, according to my interpretation of the word, we wouldn’t have free will any more.
b) This freedom is still basically random. It has more to do with your identification of personality than anything Penrose ever said, because these freebits only hit you rarely and only at one place in your brain—but when they do affect it, they affect it randomly among the considered possibilities.
I’d say I rather benefited from reading it, because it is a stellar example of steelmanning a seemingly (and really, I can say now that I’m done) incoherent position (well, or being the steel man of said position). Here’s a bit of his conclusion that seems relevant here:
To any “mystical” readers, who want human beings to be as free as possible from the mechanistic chains of cause and effect, I say: this picture represents the absolute maximum that I can see how to offer you, if I confine myself to speculations that I can imagine making contact with our current scientific understanding of the world. Perhaps it’s less than you want; on the other hand, it does seem like more than the usual compatibilist account offers! To any “rationalist” readers, who cheer when consciousness, free will, or similarly woolly notions get steamrolled by the advance of science, I say: you can feel vindicated, if you like, that despite searching (almost literally) to the ends of the universe, I wasn’t able to offer the “mystics” anything more than I was! And even what I do offer might be ruled out by future discoveries.
For less than 85 pages, his main argument is in sections 3 and 4, ~20 pages.
Easily imagining worlds doesn’t mean they are possible or even consistent, as per the p-zombie world.
This is not an argument against Aaronson’s paper in general (although I think it’s far from correct), but against your deduction.
Plus, I think there exist multiple reasonable and independent arguments that favor the LW consensus, and this is evidential weight against Aaronson’s paper, not the opposite.
I think he proposes an empirical question the answer to which influences whether e.g. uploading is possible. Do you think his question has already been answered? Do you have links explaining this, if so?
I have yet to read the full paper, so a full reply will have to wait. But I’ve already commented that he hand-waves away a sensible argument against his thesis, so this is not promising.
What an offensive analogy! Please don’t tar a vast, nuanced thing like religion with hasty analogies to something as trifling and insignificant as LW consensus. After all, denotation may win arguments, technically, but connotation changes minds—so I beg thee, be careful.