Imagine A is the set of all positive integers and B is the set of all positive even integers. You would say B is smaller than A. Now multiply every number in A by two. Did you just make A become smaller without removing any elements from it?
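The trick becomes visible once you notice that doubling is a bijection: every positive integer pairs off with exactly one positive even integer, and every even integer gets hit, so nothing is lost. A minimal sketch over a finite window of each set:

```python
# The map n -> 2n pairs every positive integer with exactly one positive
# even integer, and every positive even integer gets hit: a bijection.
# So "doubling every element of A" doesn't shrink A; it only relabels
# its elements.
def double(n):
    return 2 * n

naturals = list(range(1, 11))            # a finite window onto the positive integers
evens = [double(n) for n in naturals]

# the map is one-to-one on this window...
assert len(set(evens)) == len(naturals)
# ...and lands exactly on the first 10 positive even integers
assert evens == list(range(2, 21, 2))
```

The same pairing works on the full infinite sets, which is why the two sets have the same cardinality even though one looks "half the size" of the other.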
Does your rational advice differ from the common folk wisdom/cargo culting on this topic? And if so, what was your research process?
It strikes me that, in addition to the face-value interpretations given by the researchers, the subjects of some of these experiments could also be seen as rationally responding to incentives not to reveal their desires. The face attractiveness subjects might be afraid of embarrassing an authority figure or “messing up” the experiment. The split-brain patient might (rightly) think a truthful “I don’t know” would be interpreted as evasive or hostile. The children might reason that being seen doing a rewarded activity “for free” would remove the basis for any future rewards.
The priming results don’t seem to fit this pattern, though.
For roughly the last 50 years, all but the lowest-end thermostats have been designed as “anticipators”: they shut off the heat before the requested temperature is reached, then approach it gradually at a lower duty cycle. More often than not the installer doesn’t bother to fine-tune this, in which case the room can take a long time to reach equilibrium. Setting the thermostat a few degrees warmer than you actually want isn’t a completely stupid idea.
(reference: http://en.wikipedia.org/wiki/Thermostat)
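A toy simulation makes the point. This is a sketch, not any real thermostat’s control law; the thermal constants and the 0.5° anticipation offset are made-up numbers chosen only to show the qualitative behavior of a mis-tuned anticipator:

```python
def simulate(setpoint=20.0, anticipation=0.5, outside=5.0, steps=2000):
    """Toy anticipator-thermostat model (all constants are invented)."""
    temp = outside
    heater_on = False
    for _ in range(steps):
        # The anticipator warms the thermostat's own sensor while the
        # heater runs, so the reading sits above the true room temperature
        # and the heat cuts out before the setpoint is actually reached.
        reading = temp + (anticipation if heater_on else 0.0)
        heater_on = reading < setpoint
        # crude thermal model: leak toward outside, fixed gain while heating
        temp += 0.01 * (outside - temp) + (0.2 if heater_on else 0.0)
    return temp

# With a mis-tuned anticipation the room settles noticeably below the dial...
assert simulate() < 20.0
# ...so turning the dial up gets you closer to the temperature you wanted:
assert simulate(setpoint=22.0) > simulate(setpoint=20.0)
```

In this toy model the room oscillates about half a degree below the dial setting, which is exactly why nudging the dial a few degrees warmer can be a rational workaround.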
I think it would be more informative to ask people to take one specific online test, now, and report their score. With everyone taking the same test, even if it’s miscalibrated, people could at least see how they compare to other LWers. Asking people to remember a score they were given years ago is just going to produce a ridiculous amount of bias.
Whenever you have a process that requires multiple steps to complete, you can’t go any faster than the slowest step. Unless Intel’s R&D department actually does nothing but press the “make a new CPU design” button every few months, I think the limiting factor is still the step that involves unimproved human brains.
Elsewhere in this thread you talk about other bottlenecks, but as far as I know FOOM was never meant to imply unbounded speed of progress, only fast enough that humans have no hope of keeping up.
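The bottleneck argument is essentially Amdahl’s law. Here’s a sketch with hypothetical step durations (these numbers say nothing about Intel’s actual process): even a 100x speedup on the automatable steps barely moves the total while one slow, unimproved-human step remains in the chain.

```python
def total_time(steps, speedup):
    """Total time for a serial process where only some steps speed up.

    steps: list of (duration, automatable) pairs. Automatable steps are
    divided by `speedup`; the human-brain steps are not.
    """
    return sum(t / speedup if automatable else t for t, automatable in steps)

# Hypothetical pipeline: two automatable steps around one human design
# step of 1000 time units.
pipeline = [(100.0, True), (1000.0, False), (100.0, True)]

baseline = total_time(pipeline, speedup=1)       # 1200.0
accelerated = total_time(pipeline, speedup=100)  # 1002.0

# A 100x speedup on the tools yields well under a 20% overall speedup.
assert baseline == 1200.0
assert accelerated == 1002.0
```

The overall rate only takes off once the slowest step itself (here, the human one) starts improving, which is the condition FOOM arguments care about.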
As an aspiring rationalist you’ve already learned that most people don’t listen, and you usually don’t bother—but this person is a friend, someone you know, someone you trust and respect to listen.
I’ve actually had some success with Other-optimizing, so I’m going to go out on a limb and defend it. Doing it well isn’t easy, and it doesn’t give you the quick ego/status boost of handing someone a pithy injunction. You need to gather enough information about the other person’s goals to uniquely determine what action to recommend, essentially giving away some of your optimization power for the other person to use for their own purposes. Of course, this mostly eliminates the usual motivation (i.e. status) while also being vastly more difficult.
Although the packets are labeled “silica gel,” they aren’t guaranteed to contain nothing but silica gel. In fact, they can contain cobalt chloride or other poisonous things. If you do one day want to eat silica gel, I would recommend getting it from a food- or laboratory-grade source rather than from a packet which says “DO NOT EAT.”
Vladimir was being sarcastic, because Louie dismissed the possibility of optimizing one’s expenditures.
Don’t forget that if it works, you probably get immortality too. If you were already immortal, would you be willing to become mortal for $500,000?
In my experience, being obnoxious doesn’t deter others from being obnoxious. Quite the opposite, in fact.
LW could be considered a select group by discussion board standards. For example, posters who haven’t studied the rather large amount of presumed background knowledge are, to a decreasing but still significant extent, only reluctantly tolerated. Some people accustomed to more typical discussion boards do seem somewhat miffed about the idea that LW has such prerequisites at all, and I assume this is because they perceive it as elitist.
Bringing this back to the main point, LW already does a reasonably good job at covering what you call the ‘hard’ material. It’s hard to overstate how fickle and delicate online communities can be. I’m wary of attempting to change the norms of the existing community in order to produce more ‘easy’ material. (This is effectively what you’re proposing; since newbies can’t produce their own ‘easy’ material, it would be the blind leading the blind.) Therefore I think that job should be delegated to another website (maybe appliedrationality.org) rather than shoehorned into LW.
What’s worse, since downvoting is limited by karma, it’s not some random lurkers. Users with fairly high karma must be wasting their time doing this.
After being told whether they are deciders or not, 9 people will correctly infer the outcome of the coin flip, and 1 person will have been misled and will guess incorrectly. So far so good. The problem is that there is a 50% chance that the one person who is wrong is going to be put in charge of the decision. So even though I have a 90% chance of guessing the state of the coin, the structure of the game prevents me from ever having more than a 50% chance of the better payoff.
eta: Since I know my attempt to choose the better payoff will be thwarted 50% of the time, the statement “saying ‘yea’ gives 0.9*1000 + 0.1*100 = 910 expected donation” isn’t true.
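The gap between the two calculations can be checked exactly. This sketch assumes the setup as I understand it from the post: heads makes 9 of the 10 people deciders, tails makes 1; unanimous “yea” pays $1000 on heads and $100 on tails. (The flat $700 payoff for “nay” is my assumption of the standard version of this problem.)

```python
from fractions import Fraction

P_HEADS = Fraction(1, 2)

# A person told "decider" updates on P(decider | heads) = 9/10
# versus P(decider | tails) = 1/10:
p_heads_given_decider = (P_HEADS * Fraction(9, 10)) / (
    P_HEADS * Fraction(9, 10) + (1 - P_HEADS) * Fraction(1, 10)
)
assert p_heads_given_decider == Fraction(9, 10)  # the 90% inference is correct

# The naive expected donation computed from that update:
naive = Fraction(9, 10) * 1000 + Fraction(1, 10) * 100
assert naive == 910

# But averaging over actual coin flips, "always yea" yields:
actual_yea = P_HEADS * 1000 + (1 - P_HEADS) * 100
assert actual_yea == 550  # worse than the assumed flat 700 from "nay"
```

Both assertions hold at once: each decider’s 90% inference is genuinely correct, yet the strategy built on it underperforms, because on tails the one misled person is always the one in charge.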
Given what I’ve heard about CI’s quality control, I don’t blame her for trying to raise enough money for Alcor.
I assume there are also limits to the amount of cognitive effort anyone wants to spend writing comments.
I just looked through several pages of lukeprog’s most recent comments, and the only ones that were signed were direct replies to SilasBarta.
Regarding elitism: LW is elitist, and would not be what it is without its elitism. What else differentiates LW from /r/skeptic or agi-list? The LW community recognizes that some writings are high quality and deserve to be promoted, and others are not. If anything, I wish LW would become more elitist.