Thank you for pointing this out. I’m embarrassed for not noticing this in advance of writing the above.
Two minor grammatical corrections:
A space is missing between “itself” and “is” in “The marble itselfis a small simple”, and between “experimental” and “results” in “only reality gets to determine my experimentalresults”.
Cognitive biases abound on both ends of the political spectrum. A recent test of UK MPs showed they can’t do basic probability, let alone handle the kinds of biases we discuss here on LessWrong. In the US, compare denial of global climate change on one end of the spectrum with GM scares on the other. At first glance, neither group seems more susceptible to bias than the other.
For reference, I vote mostly with the Green Party in the US, despite their idiotic views on homeopathy, pseudoscience, nuclear power, and several other talking points. There is no such thing as a Technical Rationality Party, and even if there were, I’m unsure what positions it would take on issues that turn on greatly differing ethical assumptions (and hence Bayesian priors).
For example, I’m sure most of you eat meat because you value the feelings of nonhuman animals much less than I do. As a vegan, my ethical assumption is that there is nothing special about humans that makes their preferences matter more, so I weigh the benefit of good taste against the suffering involved in factory farms and conclude that it is not ethical for me to eat meat. Yet I completely understand and accept that several LessWrong members will think there is nothing wrong with eating meat, and will not be suffering from bias in coming to that conclusion, merely because they enter the Bayesian calculation with a completely different prior: their preference for humans is qualitative, whereas mine is only quantitative.
Different ethical assumptions result in different political positions, even when no bias is present. Since ethics is not an independent part of the world but rather something we impart into it, there is no basis on which any of us can conclusively convince another to change their initial ethical assumptions, except by showing that their current view is inconsistent or otherwise flawed. Yet there is a huge gulf between being able to say “your ethical view is inconsistent with logic” and “my ethical view is the preferred one”. Just because they’re wrong doesn’t make you right.
So what’s the status on this? Are the votes sufficient or insufficient to try an implementation of RES for LessWrong?
I was under the impression that there are no commonly accepted historical facts about Jesus. But I have never bothered to look deeply into the topic, so I might be incorrect.
Ciphergoth’s point was to show that you did not really believe the statement “I value a life of suffering more than no life at all.”
Now that the point is made, your justification of extra production falls apart. Saying that extra production means more lives which means more good is not a good argument. If you honestly felt this way, then you’d accept ciphergoth’s deal—and you’d also be morally obligated to forcibly impregnate as many women as possible to boot.
I disagree.
Under the assumption that I am a recluse with zero capacity to influence anyone else’s dietary choices, my ability to affect animal welfare through purchasing decisions is strongly quantized. Buying a burger at a busy restaurant in a large city will not affect how many burgers the restaurant orders from its distributor. Assuming it buys by the case (what restaurant wouldn’t?), affecting how much it orders would require either eating there extremely often or being part of a large group of people who eat there, all of whom cease buying burgers.
However, despite disagreeing with the specifics of what you posted here, I do agree with the spirit. As a compassionate person who has the capacity to influence others, it is important that I be vigilant about my veganism, if for no other reason than that appearing hypocritical would make me less persuasive. Even if buying the occasional burger causes no additional harm in the world by itself, it would lessen my credibility and harm my ability to influence others into making more ethical choices.
I’m so sorry. On rereading I see that you said average; I guess I was reading too quickly when I posted this reply.
I will use this as an opportunity to remind myself to reread, slowly and at least once, any comment I plan to reply to. It was sloppy of me to reply after a single read-through, especially when missing that one word made me misunderstand the key point I disagreed with.
Forgive me if I’m misunderstanding, but doesn’t the fallacy you bring up apply only to continuous functions? For stepwise functions, an arbitrarily small change in input need not correspond to a small change in output.
Assuming the unit is a 100-burger box (100BB), my purchase of a burger only affects the restaurant’s ordering if it brings total burger sales over some threshold. I’m guesstimating, but I’d put that threshold around 1⁄3 of a box, or 33 burgers in a 100BB. So if I’m the 33rd additional customer, my purchase might tip the decision to buy an extra box; but if I’m one of the first 32, it probably won’t. This puts a very high probability on my action having no effect.
Is my reasoning here flawed? I’ve gone over it again in my head as I wrote this comment, and it still seems sound to me, even after reading your above comment. But perhaps I’m missing something?
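To make my guesswork concrete, here is a minimal sketch in Python of the threshold model I have in mind. The box size, the 1⁄3-of-a-box reorder threshold, and the assumption that my position among the marginal customers is uniformly random are all my guesses, not data.

```python
import random

BOX_SIZE = 100              # burgers per box ("100BB")
THRESHOLD = BOX_SIZE // 3   # guess: ~1/3 of a box of extra sales triggers a reorder

def triggers_extra_box(my_position: int) -> bool:
    """True only if my burger is the one that pushes sales past the threshold."""
    return my_position == THRESHOLD

# If I am equally likely to be any of the first THRESHOLD marginal customers,
# the chance that my single purchase changes the restaurant's order is ~1/33:
trials = 100_000
hits = sum(triggers_extra_box(random.randint(1, THRESHOLD)) for _ in range(trials))
print(f"P(my burger triggers an extra box) ~ {hits / trials:.3f}")  # about 0.03
```

On this model the probability that any one purchase matters is small, though when it does matter, the effect is an entire box at once.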
My apologies for the male assumption. By sheer chance, when I first wrote the comment referencing ciphergoth, I noticed myself using the pronoun “he” and took steps to rephrase appropriately. Yet I did not do the same with you.
I really need to spend more time checking my assumptions before I post, but old habits are tough to break in a short period of time. Your post above will reinforce the need for me to check assumptions before hitting the “comment” button.
As for extra production, I can see that a stronger moral obligation would override it in circumstances like rape. But what about culture-wide influences? It isn’t obvious to me that a stronger moral obligation would override your desire to have a culture-wide policy of reproducing as often as possible. Wouldn’t a major goal of yours be to somehow help guide civilization toward some optimal human saturation in your light cone? I don’t mean paperclip-maximizer style, as obviously past a certain density overall good would be lessened, not increased. But surely an increase in human density up to some optimal saturation?
I know you say that “the desire to optimize life in others does not override most of my other desires”, but surely this applies mostly to principles like “don’t rape”, and not to principles like “don’t institute strong societal encouragement for procreation”.
edit: Added a missing “don’t institute” on the final line.
I really dislike this. It makes me feel like we all have the responsibility to upvote downvoted threads if we happen to notice discussion going on downstream. After all, if discussion is happening, then the thread should be above −4, and so we should upvote in circumstances where we otherwise would not have voted.
I like the option of not voting. I upvote when I see something I think we should have more of, leave alone the majority of stuff, and downvote only when I see something inappropriate. Our choices are NOT binary, but ternary. Yet this new system of hiding at −4 takes away my choice to not upvote. If I see worthwhile discussion downstream, I feel obligated to upvote.
Thank you for keeping up this list. (c:
When I first thought about this, I was fairly confident of my belief; after reading your first comment, I rethought my position but still felt reasonably confident; yet after reading this comment, you’ve completely changed my position on the issue. I had completely neglected to take into account the largeness of the effect.
You’re absolutely correct, and I retract my previous statements to the contrary. Thank you for pointing out my error. (c:
That sounds like “undecided” to me.
I’m torn. On the one hand, using the method to explain something the reader probably was not previously aware of is an awesome technique that I truly appreciate. Yet Vaniver’s point that controversial opinions should not be unnecessarily put into introductory sequence posts makes sense. It might turn off readers who would otherwise learn from the text, like nyan sandwich.
In my opinion, the best fix would be to steelman the argument as much as possible. Call it the physics diet, not the virtue-theory of metabolism. Add in an extra few sentences that really buff up the basics of the physics diet argument. And, at the end, include a note explaining why the physics diet doesn’t work (appetite increases as exercise increases).
The concept of inferential distance suggests to me that posts should try to make their pathways as short and straight as possible. Why write a double-length post that explains both causal models and metabolism, when you could write a single-length post that explains only causal models? (And if metabolism takes longer to discuss than causal models, the post will mostly be about the illustrative detour, not the concept itself!)
You’ve convinced me. I now agree that EY should go back and edit the post to use a different, more conventional example.
How one responds to this dilemma depends on how one values truth. I get the impression that while you value belief in truth, you can imagine that the maximum amount of long-term utility for belief in a falsehood is greater than the minimum amount of long-term utility for belief in a true fact. I would not be surprised to see that many others here feel the same way. After all, there’s nothing inherently wrong with thinking this is so.
However, my value system is such that the value of knowing the truth greatly outweighs any possible gains from honestly believing a falsehood. I would reject being hooked up to Nozick’s experience machine on utilitarian grounds: I honestly judge the disutility of believing a falsehood to be that bad*.
(I am wary of putting the word “any” in the above paragraph, as maybe I’m not correctly valuing very large numbers of utilons. I’m not really sure how to evaluate differences in utility when it comes to things I really value, like belief in true facts. The value is so high in these cases that it’s hard to see how anything could possibly exceed it, but maybe this is just because I have no understanding of how to properly value high value things.)
I did not get a chance to read this entry until four years after it was published, but it nonetheless corrected a long-held flawed view I had of the Many Worlds Interpretation. Thank you for opening my eyes to the idea that Occam’s razor applies to the rules of a system, not to the entities in it. You have no idea how embarrassed I feel for having so drastically misunderstood the concept before now.
Incidentally, I wrote a blog entry on how this article changed my mind which seems to have generated additional discussion on this issue.