Well, it also seems like a no-brainer to me that Breatharianism is insane, but I know people certainly subscribe to it. What I meant by that was more that it seems well-established among LessWrong readers.
PaulG,
Do you mean the assignment of a value of life or the general principle of assignment of values to everything? In either case, both of those seem sorta like no-brainers (which, I imagine, is why no one is discussing them).
It seems to me that the most relevant thing in this post was the idea of a bias against recognizing underuse in general. It actually reminds me of when I was introduced to Robin’s idea of the danger of excess medical care: most people (myself included, at the time) had a bias against recognizing the harms done by extra treatment.
Are you saying that there is no incidence of the tragedy of the commons at all, or just that these things are not tragedies of the commons? If it’s the latter, I think it’s pointless to argue the specifics of any particular examples when the broader point still stands. When there is a tragedy of the commons, one possible solution is to create property rights so that incentives align with social optima, but the problem of the tragedy of the anticommons can arise if the property rights you create are too strong.
In practice, there will be cases where you don’t want to try to re-align incentives. If you have a situation where you are going to be naturally close to the social optimum (maybe the spectrum or the skyline are good examples of this—I’m not familiar with these cases intimately), then unless you have a well-calibrated government you are more likely than not to over-shoot the social optimum. If you have something that’s seriously misaligned—maybe people burning huge amounts of neurotoxin-containing wood and wearing a mask or something—you might overshoot or undershoot the social optimum, but even a poorly-calibrated government might be able to get you closer.
Is it possible for anyone here to actually suggest anything that’s truly meaningful in the context of raising children? We can say what we think is a good idea, but I think the first place to look for this information is in any population studies that have been done (adopted twin studies maybe?) about rationalist beliefs in people raised with different techniques. Then we’d still have the politically untenable task of randomly assigning the techniques we come up with to children and testing how rational they end up being. Maybe there’s some insight to be had here, but I doubt we would have responses much better than chance.
It sounds to me like you are ignoring the Tragedy of the Commons there, though. The purported reason for each of these government interventions is to enforce property rights where they don’t exist. I think the whole point of this post about the tragedy of the anticommons is to illustrate that you are finding an optimum, not a single limit.
The fact that all of the things mentioned here are created by government (and I am not sure that you’ve proven that tragedies of the anticommons can’t arise naturally) just reinforces the point that you can easily over-correct for a failure of natural incentives, which means that you should probably be putting some thought into designing feedback mechanisms that naturally find the optima you are looking for.
I don’t agree that by failing to put a value on life you necessarily also fail to discover the concept of underuse. Doesn’t it follow immediately from the fact that you can have a positive externality that you would necessarily also have underuse?
One problem with eminent domain is that it doesn’t ensure optimum use, because the only feedback mechanism is through democratic processes, so it just becomes a public choice problem. With eminent domain you will probably destroy underuse and just replace it with overuse.
Hm. Well, I was thinking in general that you can come to the same conclusion by more than one route, and it could be important to see how other people do it. For example, I still hold some libertarian-style beliefs that I held when I was a teenager, but the framework that those beliefs are in is completely different. “Free trade is good because (comparative advantage, economic reasoning)” is different from “Free trade is good because people shouldn’t be restricted in who they can sell their goods to!” by a wide margin.
In fact, there have been situations where I’ve switched to the other side of an issue after reading something whose conclusion I agreed with: I would see flaws in the author’s arguments, try to overlay my own arguments, and find that the same flaws existed in both, which led me to change my beliefs.
Maybe we agree, though, and what you mean by “conclusions” is what I mean by “conclusions and reasoning.”
I don’t think I agree with step 3 in the second script (step 4 in the third script). I think that would create a bias against understanding the intricacies of arguments that you agree with, which I’m not comfortable with. Maybe you could just restate it as “If you aren’t sure that you agree with the statement, continue reading” or something to that effect.
I have to second the idea that it takes time to realign your emotions. I have overcome a number of irrational fears in my life and they don’t usually go away as soon as I realize that they are irrational. For example, after I stopped believing in god, I still felt uncomfortable blaspheming. After I decided that it was OK to eat meat, it took me months before I actually was willing to eat any meat. And there are countless other situations where I decided, “This is a safe/acceptable activity”, and yet I would still have a visceral uneasiness about doing it while acclimating to the idea.
It seems to me like it shouldn’t matter how often you buy the $15 items, technically. Even if you always bought $125 items and never bought $15 items, your heuristic still wouldn’t be completely irrational. If you only buy $125 items, you’ll only be able to buy 4% more stuff with your income, as compared to 33% more stuff if you always buy $15 items.
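To make the arithmetic explicit, here is a minimal sketch; the $5-per-purchase saving is my assumption, inferred from the 4% and 33% figures rather than stated above.

```python
# A minimal sketch of the arithmetic above. The $5-per-purchase saving is an
# assumption inferred from the 4% and 33% figures, not stated in the comment.
saving = 5.0

for price in (125.0, 15.0):
    extra = saving / price  # fraction of extra stuff the saving buys you
    print(f"${price:.0f} items: roughly {extra:.0%} more stuff")

# $125 items: roughly 4% more stuff
# $15 items: roughly 33% more stuff
```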
I still think it’s the same basic framework. “Benefits” is a highly subjective term. I think you are still making the same essential decision—is it worth risk X for new experience Y. I agree with you in the sense that I think very few people would actually decide to take a concussion just to experience an altered brain, but that doesn’t mean it’s not the same type of decision.
And to be fair to your point, although I think his analogy is apt, it is rhetorically misleading in that the implication is that you wouldn’t want the concussions, so you shouldn’t want the drugs. In fact, I think that the asymmetry between his analogy and the drug case best demonstrates that the question isn’t black-and-white and that it would be hasty to jump to the conclusion that everyone interested in brain function should rationally take mushrooms.
I disagree here. I think that Annoyance’s analogy was apt in that it is the same sort of decision, but with a different cost/benefit analysis. Clearly in both cases (and in the India case) you “should” take the action (get a concussion, take some drugs) if you think that the cost of taking the action is less than the potential benefit.
I do agree with you, however, in the sense that I imagine that most people consider the net benefit of taking drugs at least once to be more in line with a trip to India than with a damaged brain.
I wonder if there’s some selection bias inherent in the studies presented here. Assuming that it has been established that older scientists are more willing to accept new controversial hypotheses than younger scientists, has it also been established that they differentially accept good new controversial hypotheses? What I see here is that they tended to embrace the big paradigm shifts relatively early, but it doesn’t say anything about older scientists’ tendencies to embrace controversial hypotheses that ended up later being discredited. Specifically, Linus Pauling’s obsession with Vitamin C megadosing later in life springs to mind.
The idea of super-votes sounds similar to the system they have at everything2, where users are awarded a certain number of “upvotes” and a certain number of “cools” every day, depending on their level. An upvote/downvote adds or subtracts one point from their equivalent of karma for the post, and a Cool gives the poster a certain number of points, displays “Cooled” on the post, and promotes it to the main page.
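For concreteness, a minimal sketch of those mechanics as I understand them from the description; the class names, field names, and point values are illustrative assumptions, not everything2’s actual implementation:

```python
class Post:
    def __init__(self, author):
        self.author = author
        self.score = 0            # their equivalent of karma for the post
        self.cooled = False       # displayed as "Cooled" on the post
        self.on_main_page = False

def upvote(post):
    post.score += 1               # ordinary vote: +/- 1 point on the post

def downvote(post):
    post.score -= 1

def cool(post, user_points, bonus=10):
    # A "Cool" rewards the post's author, flags the post as Cooled,
    # and promotes it to the main page. The bonus size is a placeholder.
    user_points[post.author] = user_points.get(post.author, 0) + bonus
    post.cooled = True
    post.on_main_page = True
```

The daily allotment of upvotes and cools by user level would be enforced separately, outside this sketch.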
(I reposted this as a reply because I was unfamiliar with the posting system when I first wrote it.)
I’ll agree that they are lower, but I am not sure that they are significantly lower. It seems to me that ANY positive externality would be evidence for underuse, and you can think of a large number of them without ever putting a value on life.
That said, I do think that it is obviously important to put a value on life so that you can do cost-benefit analyses.