I have taken the survey.
If the snitch is both the trigger and the epicenter of this spell in progress, then this would explain how the three wishes will be granted by “a single plot”. The game is played and watched mostly by Slytherin and Ravenclaw students, so mostly Slytherin and Ravenclaw students would die. I can see a school like Hogwarts then giving both these houses the House Cup as a way to help the surviving students deal with the trauma and to honor the lost children. So that’s all three wishes: both houses win the House Cup, and the snitch is removed from Quidditch, all using “a single plot”.
(from Iron_Nightingale on r/hpmor)
Thank you for pointing this out. I’m embarrassed for not noticing this in advance of writing the above.
I did not get a chance to read this entry until four years after it was published, but it nonetheless ended up correcting a long-held flawed view I had of the Many Worlds Interpretation. Thank you for opening my eyes to the idea that Occam’s razor applies to the rules of a system, not to the number of entities in it. You have no idea how embarrassed I feel for having so drastically misunderstood the concept before now.
Incidentally, I wrote a blog entry on how this article changed my mind which seems to have generated additional discussion on this issue.
I have taken the survey.
While I don’t agree with much of the linked post, the line portraying civil disobedience as an application of might-makes-right really hits home for me. I need to think on this more to see whether it justifies updating my current beliefs.
I really dislike this. It makes me feel like we all have a responsibility to upvote downvoted threads if we happen to notice discussion going on downstream. After all, if discussion is happening, then the thread should be above −4, and so we should upvote in circumstances where we otherwise would not have voted.
I like the option of not voting. I upvote when I see something I think we should have more of, leave the majority of stuff alone, and downvote only when I see something inappropriate. Our choices are NOT binary, but ternary. Yet this new system of hiding at −4 takes away my choice to not upvote: if I see worthwhile discussion downstream, I feel obligated to upvote.
I’m torn. On the one hand, using the method to explain something the reader probably was not previously aware of is an awesome technique that I truly appreciate. Yet Vaniver’s point that controversial opinions should not be unnecessarily put into introductory sequence posts makes sense. It might turn off readers who would otherwise learn from the text, like nyan sandwich.
In my opinion, the best fix would be to steelman the argument as much as possible. Call it the physics diet, not the virtue-theory of metabolism. Add in an extra few sentences that really buff up the basics of the physics diet argument. And, at the end, include a note explaining why the physics diet doesn’t work (appetite increases as exercise increases).
That speaks in GWWC’s favor, I think. It would be odd for them not to take into account research done by GiveWell.
Remember that they don’t agree on everything (e.g., cash transfers). When they do agree, I take it as evidence that GWWC has looked into GiveWell’s recommendation and found it to be a good analysis. I don’t really view it as parroting, which your comment might unintentionally imply.
I get the impression that they already have years worth of demand lined up, and so investing in supply improvements will have far higher returns on their end.
I’d hate for this to be the reason why CFAR decides not to pursue putting out an online course on rationality. Even if demand really is as high as you say, doing an online course would dramatically increase the number of people able to go through the curriculum at all, which I assume would be good progress toward CFAR’s mission. Even if CFAR couldn’t fully take advantage of the extra demand for camps that this would drive, I still think Konkvistador & Wrongnesslessness’ idea is worthwhile for the organization.
I agree with the spirit of this comment, but I think you are perhaps undervaluing the usefulness of helping with instrumental goals.
I am a huge fan of GiveWell/Giving What We Can, but one of the problems many outsiders have with them is that they seem to have already made subjective value judgments about which things are more important. Remember that not everyone is into consequentialist ethics, and some take issue with the very concept of using QALYs.
Such people, when they first decide to start comparing charities, will not look at GiveWell/GWWC. They will look at something atrocious, like Charity Navigator. They will actually prefer Charity Navigator, since CN doesn’t introduce subjective value judgments, but just ranks by unimportant yet objective stuff like overhead costs.
Though I’ve only just browsed their site, I view AidGrade as a potential way to reach those people: the people who want straight numbers. People who maybe aren’t utilitarians, but who recognize anyway that saving more is better than saving less, and so would use AidGrade to direct their funding to a better charity within whatever category they were going to donate to anyway. These people may not be swayed by traditional optimal-philanthropy groups’ arguments for mosquito nets over HIV drugs. But by listening to AidGrade, perhaps they will at least redirect their funding from bad charities to better ones within whatever category they choose.
My initial impression was that the volunteer completion rate would be higher among a group like LW members. But now I realize that was a naive assumption to make.
Maybe he’s counting the lack of an objective state as additional information?
As a (perhaps) trivial example, consider the pair of predictions:
“Intelligent roads are in use, primarily for long-distance travel.”
“Local roads, though, are still predominantly conventional.”
As one of the people who participated in this study, I marked the first as false and the second as true. Yet the second “true” prediction seems like it is only trivially true. (Or perhaps not; I might be suffering from hindsight bias here.)
Omega could tell you “Either I am simulating you to gauge your response, or this is reality and I predicted your response”—and the problem would be essentially the same.
This is essentially the same only if you care solely about reality. But if you care about outcomes in simulations too, then this is not “essentially the same” as the regular formulation of the problem.
If I care about my outcomes when I am “just a simulation” in a similar way to when I am “in reality”, then the phrasing you’ve used for Omega would not lead to the standard Newcomb problem. If I’m understanding this correctly, your reformulation of what Omega says will result in justified two-boxing with CDT.
Either I’m a simulation, or I’m not. Since my choice might be probabilistic (e.g., one-box 70% of the time, two-box otherwise), Omega must simulate me several times to pin down my response. This means I’m much more likely to be a simulation than to be the real instance. And if we’re in a simulation, Omega has not yet predicted our response, so two-boxing really is genuinely better than one-boxing.
In other words, while Newcomb’s problem is usually an illustration of why CDT fails (it tells us to two-box), under your reformulation CDT’s advice to two-box is actually correct. (Under the assumption that we value simulated utilons as we do “real” ones.)
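To make the arithmetic behind “much more likely to be a simulation” concrete, here is a minimal sketch. The number of simulation runs is my own made-up assumption; nothing in the problem statement fixes it.

```python
# Minimal sketch of the "probably a simulation" step.
# Assumption (mine, not from the problem): Omega runs N independent
# simulations of the agent for every one real run, and we weight being a
# simulated instance the same as being the real one.

N_SIMULATIONS = 10  # hypothetical number of simulation runs per real run

p_simulation = N_SIMULATIONS / (N_SIMULATIONS + 1)
p_reality = 1 / (N_SIMULATIONS + 1)

print(f"P(this instance is a simulation) = {p_simulation:.2f}")  # ~0.91
print(f"P(this instance is reality)      = {p_reality:.2f}")     # ~0.09
```

Even at a modest N, the simulated instances dominate the weighting, which is what drives the two-boxing conclusion above.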
After comparing my own answers to the clusters Bourget & Chalmers found, I don’t appear to fit well in any one of the seven categories.
However, I did find the correlations between philosophical views outlined in section 3.3 of the paper to be fairly predictive of my own views. Nearly everything in Table 4 that I agree with on the left side corresponds to an accurate prediction of what I’d think about the issue on the right side.
Interestingly, not all of these correlations seem like they have an underlying reason why they should logically go together. Does this mean that I’ve fallen prey to agreeing with the greens over the blues for something other than intellectual reasons?
I agree with the idea that EAA seems more likely to be more effective than 80k for the reasons you stated. However, I disagree that this is sufficient reason to encourage earmarking.
It’s true that I’d prefer to give to EAA directly, and the only way to do this currently is to write a check to the “Tides Foundation” and earmark it for EAA. But I think the far better way of doing this is for EAA to be separate not just from Tides, but also 80k (which has a confusingly distinct mission focused on careers and lifetime charitable donations, not animal welfare). Until they’re separate, I can see why earmarking is justified, but you said it should be encouraged, which is an entirely different thing. I would NOT encourage earmarking; I’d earmark regretfully, and only until they separate out the organizations so that I can donate toward the mission I consider to be genuinely more effective.
Actually, I think this is a technical problem they have, and should not be construed as a positive endorsement of earmarking. It looks like what they want are separate organizations (80k, GWWC), but the way their org is set up, they can only be tax deductible if you donate to the “Tides Foundation” instead.
Although technically this looks like earmarking, the intent seems to be that they wanted to have separate organizations with separate funding but have so far not actually separated them for the purposes of tax deductibility.
With which of these moral philosophies do you MOST identify?
There is no such thing as “morality”
Can you please rephrase this to “moral skepticism”? Or is there some benefit to saying it in the way you have?
Note that moral skepticism does not necessarily equate to nihilism: error theories, fictionalist accounts, and moral revisionism all talk about doing what others would call “the right thing”, even though they are all morally skeptical theories.
Also, don’t you think this section is a bit coarsely defined? I’d love to see a breakdown of moral skeptics categorized as revisionists, fictionalists, etc. You could always include a “general moral skeptic” option for those people who stop thinking about metaethics once they decide moral skepticism is correct. Similarly, I’d love to see more finely grained options under consequentialism and the other broad categories of this section.
I answered every question, and enjoyed doing so. Thank you for putting this together. (c: