Is the intended point simply that people have more confidence in their beliefs than would be optimal? That people should change their assumptions more often and see what happens?
Slytherin, the hat had almost put him in, and his similarity to Slytherin’s heir Riddle himself had commented on. But he was beginning to think this wasn’t because he had “un-Gryffindor” qualities that fit only in Slytherin, but because the two houses—normally pictured as opposites—were in some fundamental ways quite similar.
Ravenclaws in battle, he had no doubt, would coolly plan the sacrifice of distant strangers to achieve an important objective, though that cold logic could collapse in the face of sacrificing family instead. Hufflepuffs would sacrifice no one, though it meant sacrificing an objective in their place.
Only Gryffindors and Slytherins were good at sacrificing those they loved.
But with one friend who had lost weeks to the hospital wing and who could so easily have lost her life instead, with another mourning..., with himself going into battles he barely survived, and making decisions he should not have to make, he dreaded what they might be called upon to sacrifice next.
And he decided: he would do much, to see that it did not happen.
From Myst Shadow’s excellent fanfiction, Forging the Sword.
I don’t have the internal capacity to feel large numbers as deeply as I should, but I do have the capacity to feel that prioritizing my use of resources is important, which amounts to a similar thing. I don’t have an internal value assigned for one million birds or for ten thousand, but I do have a value that says maximization is worth pursuing.
Because of this, and because I’m basically an ethical egoist, I disagree with your view that effective altruism requires ignoring our care-o-meters. I think it requires only training and refining them, not disregarding them entirely. Saying that we should ignore our actual values and focus on “more rational” values we could counterfactually have is disquieting to me because it seems to involve an underlying nihilism of sorts. Values are orthogonal to rationality; I’m not sure why many people here accept that idea in some cases but ignore it in others. If we’re going to get rid of values for not being sufficiently rational or consistent, we might as well delete them all.
Gunnar Zarncke makes a good point as well, one I think complements my argument. There’s no standard with which to choose between helping all the birds and helping none, once you’ve thrown the care-o-meter away.
My own moral intuitions say that there is some optimal number X of human beings to live amongst (perhaps around Dunbar’s number, though maybe not if society or anonymity is important), and that we should try to balance utilizing as much of the universe’s energy as possible before heat death against maximizing the number of these ideal groups of size X. I think a universe totally filled with humans would not be very good; it seems somewhat redundant to me, since many of those humans would be extremely similar to each other yet use up precious energy. I also think that individuals might feel meaningless in such a large crowd, unable to make an impact or strive for eudaimonia when surrounded by so many others. We might avoid that outcome by modifying our values about originality or human purpose, but those are values of mine I strongly don’t want to have changed.
Yeah. The problem I see with that is that if humans grow too far apart, we will thwart each other’s values or not value each other. Difficult potential balance to maintain, though that doesn’t necessarily mean it should be rejected as an option.
I worry that such consistency isn’t possible. If you have a preference for chocolate over vanilla given exposure to one set of persuasion techniques, and a preference for vanilla over chocolate given other persuasion techniques, it seems like you have no consistent preference. If all our values are sensitive to aspects of context such as this, then trying to enforce consistency could just delete everything. Alternatively, it could mean that CEV will ultimately worship Moloch rather than humans, valuing whatever leads to amassing as much power as possible. If inefficiency or irrationality is somehow important or assumed in human values, I want the values to stay and the rationality to go. Given all the weird results from the behavioral economics literature, and the poor optimization of the evolutionary processes from which our values emerged, such inconsistency seems probable.
Have you read any of Paul Graham’s essays? I’m always very impressed by the quality of his writing.
Practically everyone is wary of false dichotomies. The trick is recognizing them. This quote doesn’t help much with that.
I don’t think the quote significantly increases the probability someone will have that thought. I think practically everyone here already has that habit of wariness. Maybe I’m wrong (typical mind fallacy), but identifying false dichotomies has always been rather automatic for me, and I assumed that was true for everyone (except when other biases are involved as well).
What do you think that “something” they feel is?
Why do you think such a meme would spread or originate, if not due to its truth value?
What predictions might you make about human behavior that someone who believed in altruism would not?
My impression is that when people say they believe altruism exists, they mean that they believe people derive pleasure from altruistic behavior. There are some people, like Kantians, who might be imagining something else, and I agree that that version of altruism is wrong. But I think that view of altruism is a minority one.
Let’s imagine a computer simulation that has various organisms. Some of these organisms are programmed to sacrifice their own lives for the lives of others in their area who have no genetic relationship at all. Is it accurate to describe the behavior of these organisms as altruistic?
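For concreteness, here is a toy sketch of the setup I have in mind (everything in it, names and numbers alike, is hypothetical):

```python
# Toy sketch of the simulation described above; all names and numbers
# are hypothetical, just to make the thought experiment concrete.
import random

class Organism:
    def __init__(self, genome):
        self.genome = genome   # randomly assigned: no genetic relationship
        self.alive = True

    def maybe_sacrifice(self, neighbors):
        # Programmed, unconditional self-sacrifice: die so that
        # genetically unrelated neighbors in the area survive.
        endangered = [n for n in neighbors if n.alive]
        if endangered and self.alive:
            self.alive = False
            return endangered  # the organisms saved
        return []

population = [Organism(genome=random.random()) for _ in range(10)]
saved = population[0].maybe_sacrifice(population[1:])
print(f"sacrificer dead: {not population[0].alive}; saved: {len(saved)}")
```

The question is whether “altruistic” is the right label for behavior like maybe_sacrifice, given that it was simply programmed in.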
Are you aware that group selection has come back into scientific acceptability since the ’80s? The original experiments assumed static populations, but when you allow populations to have varying growth rates, group selectionism does much, much better. http://en.wikipedia.org/wiki/Multi-level_selection
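A toy model of why the varying growth rates matter (my own illustration, not a reproduction of any particular experiment): within every group altruists reproduce slightly less than the selfish, yet altruist-heavy groups grow faster, so the global altruist fraction can still rise.

```python
# Within each group altruists pay an individual cost, but groups with
# more altruists have a higher growth rate. Parameter values are
# arbitrary illustrations.
def step(groups, cost=0.1, benefit=1.0):
    new_groups = []
    for altruists, selfish in groups:
        frac_a = altruists / (altruists + selfish)
        growth = 1.0 + benefit * frac_a          # group-level advantage
        new_a = altruists * growth * (1 - cost)  # individual-level cost
        new_s = selfish * growth
        new_groups.append((new_a, new_s))
    return new_groups

groups = [(90.0, 10.0), (10.0, 90.0)]  # altruist-heavy vs. selfish-heavy
for _ in range(20):
    groups = step(groups)
a = sum(g[0] for g in groups)
s = sum(g[1] for g in groups)
print(f"global altruist fraction: {a / (a + s):.2f}")  # rises above 0.50
```

Altruists lose ground inside each group but gain ground overall, which is exactly the kind of effect a static-population model misses.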
I think Occam’s razor is best used when you have Model A and Model B, where Model B is identical to Model A except it has one extra idea in it. Comparing different models or different types of models through one’s intuitions about simplicity alone is generally a bad idea.
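To make the nested-model case concrete, here is a sketch using a standard complexity penalty (AIC); the data, the two models, and all numbers are invented for illustration. On data that is truly linear, the extra quadratic term usually fails to offset its penalty:

```python
# Model B is Model A (a line) plus one extra term (a quadratic).
# AIC penalizes the extra parameter, so the extra term must earn its keep.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # data that is truly linear

def aic(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 1  # number of fitted parameters
    return x.size * np.log(rss / x.size) + 2 * k

print(f"Model A (line):      AIC = {aic(1):.1f}")
print(f"Model B (quadratic): AIC = {aic(2):.1f}")  # must beat A by more than its +2 penalty
```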
It is about a month, right? I don’t really see the importance of that knowledge though, unless you’re fighting werewolves. I agree people are dumb, but they’re dumb because they don’t understand useful ideas like math rather than because they don’t remember trivia about everyday phenomena.
I do not understand your sphere rotation example because I can’t visualize it in 3D. Any chance someone can help out?
I request that you add an intensity-of-importance question for each item in the politics section. I might think that abortion is terrible but not at all politically important, for example, and both of those seem like worthwhile pieces of information.
Calculating interest.
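For example (all numbers invented): $1,000 at 5% annual interest, compounded monthly for 10 years.

```python
# Compound interest: A = P * (1 + r/n) ** (n * t)
principal = 1000.0  # P, starting balance in dollars
rate = 0.05         # r, annual interest rate
n = 12              # compounding periods per year
years = 10          # t

amount = principal * (1 + rate / n) ** (n * years)
print(f"${amount:,.2f}")  # ≈ $1,647.01
```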
Predator-prey relationships.
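For instance, the classic Lotka-Volterra predator-prey model, stepped here with simple Euler integration (the parameter values are arbitrary illustrations):

```python
alpha, beta = 1.1, 0.4   # prey birth rate, predation rate
delta, gamma = 0.1, 0.4  # predator growth per kill, predator death rate

prey, predators = 10.0, 2.0
dt = 0.01
for i in range(5001):
    if i % 1000 == 0:
        print(f"t={i * dt:4.1f}  prey={prey:6.2f}  predators={predators:5.2f}")
    # dx/dt = alpha*x - beta*x*y ;  dy/dt = delta*x*y - gamma*y
    d_prey = (alpha * prey - beta * prey * predators) * dt
    d_pred = (delta * prey * predators - gamma * predators) * dt
    prey += d_prey
    predators += d_pred
```

The two populations chase each other in cycles: predators boom after the prey boom, then crash after the prey crash.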
I think I have some excellent advice for you this time.
I’ve noticed very recently that in my own writing I tend to optimize for the strength of individual sentences instead of for the strength of paragraphs or arguments as a whole. Because I write one sentence at a time, it’s tempting to have each sentence make its point as directly and powerfully as possible. But this is a little bit like playing each note of a song as loudly as possible in an attempt at maximum musical impact. A more skilled performer would play some notes softly and others louder, using that contrast to emphasize certain ideas over others within the work. I think writing is the same way: some sentences or paragraphs should be softer or louder than others. The main function of some sentences should be what they do for other sentences, rather than making their own arguments. Changing my writing habits in this way will be difficult, but I think it will eventually be highly rewarding.
I don’t know whether you have a similar problem or not. But I suspect it’s a common one, and hope someone will find this advice useful even if you don’t.
Stephen King argues that writer’s block is a myth. Is writing still hard if you’re willing to just set pen to paper without trying to filter for good ideas? I find this kind of free writing almost repulsive, but I think that’s just a weird bias that I, and a lot of people, have but never move past. I know that many of my favorite writers endorse reckless first drafts and brainstorming sessions.
Maybe writing’s difficulty is overestimated by the general public, but underestimated by amateur writers? That seems compatible with both our positions.
That historical example did a lot to persuade me. Do you have any others similar to it?
I used to share your position, but moved away from it. The main reason was studies such as the ones mentioned in this article:
http://online.wsj.com/articles/SB10001424052702304854804579234030532617704
How do you explain such results?
It would be slightly interesting to read a fic in which Naming was a mechanism of magic, and Voldemort chose that specific name for very good reasons, reasons which explained why people feared it. Maybe he stole the Grim Reaper’s power for his very own, somehow becoming Master of Death or Flight from Death or something similar, something involving an actual title with power invested in it. There are neat thoughts in this area, ripe for the picking. French is kind of a silly language for it, of course.