It could be worse, but on my 26-inch monitor most fixed-width sites do not look great, I find.
timtyler
Prediction markets tend to be zero-sum games. Most rational agents would prefer to play in a real stock market—where you can at least expect to make money in line with inflation.
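To make the zero-sum point concrete, here is a toy calculation (the numbers are invented for illustration): in a zero-sum market every dollar won is a dollar lost by another participant, so the average participant's expected return is zero, whereas stock holdings whose prices merely track inflation still give every holder a positive nominal return.

```python
def average_return(payoffs):
    """Mean payoff across all participants."""
    return sum(payoffs) / len(payoffs)

# Zero-sum prediction market: payoffs across participants sum to zero,
# so the average participant's return is exactly 0.
prediction_market = [+50, -30, -20]
assert sum(prediction_market) == 0
print(average_return(prediction_market))  # 0.0

# Stock market: suppose prices merely keep pace with 3% inflation
# on a $1000 stake -- every holder gains in nominal terms.
stake, inflation = 1000, 0.03
stock_market = [stake * inflation] * 3
print(average_return(stock_market))  # 30.0
```

The asymmetry is the whole point: a zero-sum game can only redistribute wealth among its players, while the stock market as a whole can drift upward with the economy.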
Re: The word “rational” is overloaded with associations, so let me be clear: to me, more “rational” means better believing what is true, given one’s limited info and analysis resources.
Ouch! A meta discussion, perhaps—but why define “rational” that way? Isn’t the following much more standard?
“In economics, sociology, and political science, a decision or situation is often called rational if it is in some sense optimal, and individuals or organizations are often called rational if they tend to act somehow optimally in pursuit of their goals. [...] In this concept of “rationality”, the individual’s goals or motives are taken for granted and not made subject to criticism, ethical or otherwise. Thus rationality simply refers to the success of goal attainment, whatever those goals may be.”
A common example of where rationality and truth-seeking come into conflict is the case where organisms display their beliefs—and have difficulty misrepresenting them. In such cases, it may thus benefit them to believe falsehoods for reasons associated with signalling their beliefs to others:
“Definitely on all fronts it has become imperative not to bristle with hostility every time you encounter a stranger. Instead observe him, find out what he might be. Behave to him with politeness, pretending that you like him more than you do—at least while you find out how he might be of use to you. Wash before you go to talk to him so as to conceal your tribal odour and take great care not to let on that you notice his own, foul as it may be. Talk about human brotherhood. In the end don’t even just pretend that you like him (he begins to see through that); instead, really like him. It pays.”
Discriminating Nepotism, as reprinted in: Narrow Roads of Gene Land, Volume 2: Evolution of Sex, p. 359.
I don’t see how “seeking truth” is “goal-neutral”. It is a goal much like any other.
The main thing I feel the urge to say about “seeking truth” is that it usually isn’t nature’s goal. Nature normally cares about other things a lot more than the truth.
Well, sure. Repeating other posts—but one of the most common examples is when an agent’s beliefs are displayed to other agents. Imagine that all your associates think that there is a Christian god. This group includes all your prospective friends and mates. Do you tell them you are an agnostic/atheist—and that their views are not supported by the evidence? No, of course not! However, you had better not lie to them either—since most humans lie so poorly. The best thing to do is probably to believe their nonsense yourself.
If truth is a bad idea, it’s not clear what the reader is doing on Less Wrong [...]
Believing the truth is usually a good idea—for real organisms.
However, I don’t think rationality should be defined in terms of truth seeking. For one thing, that is not particularly conventional usage. For another, it seems like a rather arbitrary goal. What if a Buddhist claims that rational behaviour typically involves meditating until you reach nirvana? On what grounds would that claim be dismissed? That seems to me to be an equally biologically realistic goal.
I think convention has it right here: the details of the goal are irrelevant to rationality, and should be factored right out of the equation. You can rationally pursue any goal, without exception.
Re: Regarding “rationalists should win”—that still leaves us with the problem of distinguishing between someone who won because he was rational and someone who was irrational but won because of sheer dumb luck.
Just don’t go there in the first place. Attempting to increase your utility is enough.
Indeed—religion is persistent. Of course, in the real world you would find that isolated communities would arise, where “belief mutations” could occur without being severely punished by the crowd.
Eliezer’s proposal seems worse to me than yours in this thread, partly since it seems so irregular. I think trying to talk him down would be the most sensible strategy. “Truth-seeker” is terminology which is good enough.
Re: I doubt a psychedelic experience can help me optimize my current utility function better than my sober self. How can I be more rational from a drug?
Psychedelic drugs can provide you with a different perspective on your own mind. They thus provide information about its operation (how its perceptual filters work, that kind of thing) which is otherwise inaccessible to consciousness.
Some find that information valuable, much as they find foreign holidays valuable in providing a new perspective.
The argument you made, that the only way such a drug can work is by changing your preferences, seems pretty weak to me. Does a trip to Peru change your preferences? No, it just tells you a lot of things about the world that you didn’t know before.
The more obvious negatives: legal concerns; purity (will you be consuming what you think you are?); your reputation (what do your friends think of drug users?); bad reactions (most react positively to psychedelics, but not everyone does; depending on how stable your personality and circumstances are, there may be risks); and unknown factors (psychedelic science is young). On safety grounds, you may be better off with LSD rather than psilocybin, if you can get pure, controlled doses.
Re: The naturally occurring magic mushrooms are picked out of cow shit.
Those are the Mexican ones. Come to Europe: relatives grow in the ground.
Re: DMT is a molecule that is practically identical to Psilocybin and other than duration and method of consumption, I doubt most people would be able to tell the difference in a double blind test.
Um, those drugs may be chemical relatives—but they typically have pretty radically different effects.
What seems to be needed is a page about the rules of Less Wrong—who can do what and how often—the moderation policy—that sort of thing.
If different rules are to apply to Yudkowsky and Hanson, then that could be made clear there.
Does psilocybin get classified as a “random modification of your mind” any more than a trip to India does? Surely both could be dangerous?
Self-reported values are all we have, really. If you want better evidence, you might have to wait for quite a while.
I don’t know that the only effects result from obtaining new information. My own perception is that the drugs do provide a mountain of information, that it is difficult to obtain that information in other ways, that the information is sometimes regarded as useful by the individuals in question, and that the side effects on things like goals are sometimes so slight as to be undetectable by the individual. How frequent are such outcomes? Pretty frequent, it seems to me, and you have a reasonable chance of avoiding negative outcomes if you use some common sense.
David Pearce puts things better, in my view:
“Worse, the psychedelics aren’t primarily euphoriants. They don’t directly stimulate the pleasure-centres and guarantee the user a good trip. Both the serotonin- and catecholamine-like families trigger psychedelia mainly via their role as partial agonists of the 5-HT2A receptors in the central nervous system; 5-HT2 heteroreceptors exert a tonic inhibitory effect on the striatal dopaminergic neurons. Such agents aren’t a dependable choice of clinical or recreational mood-brightener, whether in the short- or long-term.”
A quick vote for implementing fluid-width.
Please consider using a “fluid-width” theme.