Cannibal, what exactly is your point, and aren’t you forgetting all the Babyeater casualties we’d expect in the next week?
The remarkable thing about this story is the conflicting responses it provokes. The fact that a relatively homogeneous group of humans can have totally different intuitions about which ending is better and which aliens they prefer means, to me, that aliens (or an AI, whatever) have the potential to be, well, alien, far in excess of what is described in this story. Both aliens have value systems which, while different from ours, are almost entirely comprehensible. I think we might be vastly underestimating how radically alien aliens could be.
There is no such thing as a “subconscious”. You mean unconscious.
The bumper-sticker existentialism is pretty lame. I don’t know who Lee Crocker is, and though I think I’ve heard of Warhammer 40k, all this was said first, and much better, by Kierkegaard, Nietzsche, Heidegger, etc. I wonder how much of these authors Eliezer has read.
My sense is that people value the truth to varying degrees. Further, people encounter barriers to pursuing the truth to varying degrees. Whether or not someone ends up here is likely a function of them caring about truth enough to make the relevant social and psychological sacrifices to get past the barriers.
For me, I don’t remember when I started caring about whether or not my beliefs were true. I know that the moment the possibility of God’s non-existence was put to me I immediately became an agnostic, and an atheist when I learned about the scientific method, Karl Popper, etc. I was ostensibly raised Catholic, but my mother is a Unitarian (though one who believes in a fair bit of New-Agey gobbledygook) and my Catholic father is a doubter and extreme skeptic. The areas I’ve lived in have always been fairly non-religious and relatively non-Christian (until I attended a Catholic university).
The answer to the question “what got the transition started?” is probably just knowledge of the rationalist position and hearing an unbiased version of rationalist arguments. What made the transition possible was valuing truth and having few significant barriers to pursuing it. What makes people value truth, I suspect, usually comes before most people’s conscious memory and is not recognizable at the time.
However, I did have an experience that increased how much I valued truth: my parents got divorced and told me contradictory stories. Hypothesis 1: Being lied to increases one’s subjective value of truth. Hypothesis 2: Being lied to by people who answered all of your initial questions and guided your initial decisions increases one’s subjective value of truth.
I’ve never read anything like this “excellence pornography” but I believe:
A survey of such literature that examined commonalities would be far more useful.
The secrets of the successful are probably things successful people have internalized such that they cannot easily explicate them to others. For example, there is strong experimental evidence that, in determining who gets a job among identically qualified candidates, the chief variables are posture and demeanor in the interview. But I’d bet no successful person would explain their success by pointing to their posture, just as most unsuccessful people won’t even know what it is they did wrong.
But that does not mean such tricks cannot be taught, just that you’ll have to critically compare the lives of the successful to the lives of the unsuccessful (obviously with statistically significant sample sizes) in order to figure out what exactly the tricks are (you’d also need to take your raw data and control for the factors individuals cannot control). But the data gathering couldn’t be done by interview or survey; you’d have to examine the lives of your subjects. This would be a gargantuan task if you wanted to look at every aspect of people’s lives at once, but it is easy to do given certain limited parameters (like in the interview case), and you can infer things from such limited conclusions. It would also be worth looking at the intersection of limited-parameter studies (which could answer questions like whether quality or quantity is more important for getting promotions, posture vs. articulateness vs. physical appearance, etc.).
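As a rough illustration of what a single limited-parameter analysis might look like (the dataset, column names, and model choice here are all my own hypothetical assumptions, not anything from the studies I mentioned):

```python
# Hypothetical sketch of a limited-parameter comparison: regress a
# success outcome on observable traits while controlling for factors
# the individual cannot control. All names here are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interview_study.csv")  # imagined observational data

# got_job: 1 if hired; posture and articulateness: observer-coded scores;
# age and years_education: controls outside the candidate's control.
model = smf.logit("got_job ~ posture + articulateness + age + years_education",
                  data=df)
result = model.fit()
print(result.summary())  # which traits actually predict hiring?
```

The same template, run over different limited parameters and then intersected, is what the full program would amount to.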
Anyway, here is a scientifically sound list of 7 Habits of Highly Successful People: http://www.mcsweeneys.net/links/lists/23BrendonLloyd.html
I think there is likely a distinction between being rational at games and rational at life. In my experience those who are rational in one way are very often not rational in the other. I think it highly unlikely that there is a strong correlation between “good at prediction markets” or “good at poker” and “good at life”. Do we think the best poker players are good models for rational existence? I don’t think I do and I don’t even think THEY do.
A suggestion:
List your goals. Then give the goals deadlines along with probabilities of success and estimated utility (with some kind of metric, not necessarily numerical). At each deadline, tally whether or not the goal is completed and give an estimate of the utility.
From this information you can take at least three things:

1. Whether or not you can accurately predict your ability.
2. Whether or not you are picking the right goals (lower than expected utility would be bad, I think).
3. With enough data points you could determine your ratio of success to utility. Too much success and not enough utility means you need to aim higher. Too little success for goals with high predicted utility means either aim lower or figure out what you’re doing wrong in pursuing the goals. If both are high you’re living rationally; if both are low YOU’RE DOING IT WRONG.
The process could probably be improved if it was done transparently and cooperatively. Others looking on would help prevent you from cheating yourself.
Not terribly rigorous, but that’s the idea.
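A minimal sketch of what tracking this might look like; the field names and the use of a Brier score are my own illustrative choices, not part of the suggestion itself:

```python
# Minimal sketch of the goal-tracking scheme above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    name: str
    deadline: str               # e.g. "2009-06-01"
    p_success: float            # predicted probability of completion
    predicted_utility: float    # on whatever metric you choose
    completed: Optional[bool] = None
    realized_utility: Optional[float] = None

def review(goals):
    """Summarize prediction accuracy and the success/utility picture."""
    done = [g for g in goals if g.completed is not None]
    if not done:
        return
    # Point 1: calibration, via Brier score (0 = perfect, 0.25 = chance).
    brier = sum((g.p_success - g.completed) ** 2 for g in done) / len(done)
    # Points 2 and 3: success rate vs. the utility actually realized.
    success_rate = sum(g.completed for g in done) / len(done)
    avg_utility = sum(g.realized_utility or 0.0 for g in done) / len(done)
    print(f"Brier score: {brier:.3f}")
    print(f"Success rate: {success_rate:.0%}, avg realized utility: {avg_utility:.1f}")

goals = [
    Goal("Write thesis chapter", "2009-06-01", 0.8, 10.0,
         completed=True, realized_utility=6.0),
    Goal("Learn to cook", "2009-07-01", 0.5, 5.0,
         completed=False, realized_utility=0.0),
]
review(goals)
```

High success with low realized utility is the aim-higher signal; low success on high-predicted-utility goals is the aim-lower-or-diagnose signal.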
It’s too bad karma scores are reads-neutral. Late comments to posts tend to get ignored at the bottom of the thread. I wonder if one couldn’t add a “Read this comment” button… though I imagine a lot of people wouldn’t bother.
You’d want people to estimate the utility of their goals and compare that to a post-goal completion estimate of utility. See here http://lesswrong.com/lw/h/test_your_rationality/dg#comments
“Recent comments” may work when traffic is low and there are only one or two posts a day. But imagine when this thing gets going and you’re posting in an old article during high-traffic hours.
Places where rationality* is not welcome:
Churches, political parties, Congress, family reunions, dates, cable news, bureaucracy, casinos… *Of course rationality might dictate deception, but I take it lying confers some cost on the liar.
Please list the rest. Also, who here is involved with any of the things on the list? Am I wrong to include something, and if not, how do you deal with being rational in a place that discourages it?
I’ve always wondered if there are any documented instances of someone unscrewing his steering wheel and tossing it out during a game of chicken.
This is my thought too. And I actually think you’re underestimating the role that selection plays. Higher-level academia is actually very good at finding talent, and the talented students and the talented professors all flock to the same institutions, both for independent reasons (funding) and because they prefer the company of one another. You do not do grad work under a Nobel prize winner unless everyone in your field has already noticed you and thinks it somewhat possible that you could one day win a Nobel prize. I’m actually astonished that the number of Nobel prize winners who worked under other prize winners is as LOW as Eliezer says.
That doesn’t mean there isn’t some method to genius that could be taught, but I haven’t seen evidence that there is anything that can be taught.
Yes. It is not literally true. Nonetheless, I’d bet students of Nobel winners almost always show significant promise. Moreover, they’re likely working in areas where Nobel prizes are likely to be won. What I mean is, there are some areas in any given field where work is likelier to yield a Nobel, even controlling for the quality of the work. In physics, for example, Nobels are rarely awarded for the more theoretical work on less established subjects. So since the students of Nobel winners are usually in the same fields, it makes sense that they would have a higher than average likelihood of winning one as well.
First, I’d caution against reflexively questioning appeals to authority. Arguments from authority are not fallacies, despite their traditional classification as such. There is no way for an individual to experimentally verify even a small fraction of the things she counts as knowledge; it would be an absurd and unnecessary barrier. Indeed, I think cautioning against arguments from authority is a kind of keeping kosher: an outdated purity norm that is no longer necessary given modern science and method. Once upon a time it made great sense to distrust experts, because the experts were often bullshitting and there were few checks to prevent them from doing so. Similarly, now we know how to cook our shellfish, and so you’re not likely to get sick from eating scallops.
The problem, rather, is claims being passed off as if the maker of the claim has in fact read the experts when they have not. In particular, false claims that do not contradict common sense go undetected, and do not die. I’m thinking here of something like “Eskimos have an extraordinary number of words for snow because they’re around it all the time” (http://en.wikipedia.org/wiki/Eskimo_words_for_snow). Snopes is obviously a fantastic resource in this regard, but if we want to stop the spread of empirically false beliefs I might suggest dramatically expanding the use of Wikipedia’s “citation needed” demand. What if, instead of citing claims on occasion or as requested, every comment were just assumed to need a citation? If a claim lacked a citation, a dozen Less Wrong commenters would immediately respond with just the words “citation needed?”. If the original poster wants to avoid this, she simply includes a citation or gives a reason why she didn’t: “I’m just guessing,” “There are no empirical claims here,” etc. Eventually we’d just come to expect a citation or some sort of explanation, and if we didn’t see one we’d know to immediately question the claim.
(I don’t believe I’ve made any non-obvious empirical claims, but if someone wants to see evidence regarding the superiority of modern science as compared to medieval scholarship I can find that)
Occam’s Razor is a heuristic… and one I proceed according to, but it’s not at all clear just what its justification is. Why exactly ought we to believe the simpler hypothesis?
I’m not familiar with the psychological literature on emotions, but it’s a little counterintuitive (I think my brain is tagging it as annoying) to use the word “emotions” to describe all of these different tags. Maybe the process of tagging something “morally obligatory” is indistinguishable from tagging something “happy” on an fMRI, but in common parlance and, I think, phenomenologically, the two are different. Different enough to justify using a word other than “emotion” (which traditionally refers to a much smaller set of experiences). It is worth noting, for example, that we use normative terms to describe emotions (jealousy bad, love good, etc.) even though both can motivate decisions. I assume you have it that this is just the brain tagging motivations, and maybe that’s right, but in that case you probably want a different word.
Also, I assume you don’t think highly of attempts to derive values from reason? I don’t think such attempts have been especially successful, but it’s not as if they haven’t been tried. Are all such attempts just trying to describe our feelings in logicy-sounding ways?
Lastly, am I the only one who gets nervous when we rely heavily on programming metaphors? Seems like the sort of thing that could steer us terribly wrong.
So, on a related point that may or may not be worth its own post: looking at the new Less Wrong Facebook group, one rapidly becomes aware that basically everyone here is demographically identical. The vast majority are white men in their twenties, and among those who volunteered the information, most had degrees in math, science, or philosophy. There did appear to be a large international presence (and by international I mean European).
So my questions are 1) why? What about the Less Wrong project selects for young white men? And 2) is it a problem? I tend to think that someone’s biography influences their perspective to such an extent that it’s useful to talk to and read people with different biographical backgrounds. So maybe it’s just a matter of reading different blogs… on the other hand, if you’re trying to build a broad rationalist movement, then we’re doing something wrong, no?
I like this idea. Let me know the result.
Does the Babyeater morality emphasize the consumption and digestion of babies, or is it simply the winnowing that they value? If it’s the latter, our biologies are probably different enough that one could fudge the translation of some texts about contraception and abortion to make it look like we winnow. It just turns out that we destroy our young even earlier, and do so by prohibiting them from combining with another sort of baby, which they need to survive. Sometimes when they do combine we destroy them anyway.
Not to be crude, but maybe the aliens would enjoy some of our oral sex pornography.
Just as human individuals change their behavior and outlook when they are associating with different groups (you’re an ass around your college friends but a gentleman around the ladies), so it makes sense for the species to act differently around aliens with different cultural and moral norms. In this case we should exaggerate the role contraception and abortion play in human civilization and fudge the language so it looks like we’re killing babies rather than just sperm and zygotes. It is precisely because our biology is so different that such a mistranslation won’t be caught. We might as well call our sperm “babies”; all the translation so far has been inexact enough to permit this, and surely “baby” doesn’t have to entail a fully developed brain.
Moreover, the aliens must still have some analogue of our love and our to-our-deaths willingness to protect our young, since any deaths AFTER the winnowing would likely be viewed as devastatingly unfortunate. Does the winnowing even coincide with the surviving babies immediately undergoing some drastic biological change? The only real sense in which Babyeater morality differs from ours is the time during the development of the individual when the individual is declared by society to be morally valuable.