Jacob Falkovich
Writes Putanumonit.com and helps run the New York LW meetup. @yashkaf on Twitter.
I second some people’s claim that “rationality” can be a double-edged sword in the title; even people who might otherwise be interested in the book may have negative associations with the word. It would fit better in the subtitle, something like:
Think Like Reality: The Art of Being Rational
Greetings, y’all. I’m very excited to take the plunge into the LW community proper. I spent the last six months plowing through the Sequences and testing the limits of my friends’ patience by trying to engage them with it. Besides looking for people to talk to, I am beginning to feel a profound restlessness at not doing anything with all the new ideas in my head. At 27, I’m not a “level 1 adult” yet. I don’t really have something to protect or a purpose I’m dedicated to. I hope that being active in the community will at least get me in the habit of being active.
My name is Jacob. I was born in the Soviet Union and grew up in Israel. My parents are scientists; my dad is probably top 10 worldwide in his field. I grew up playing soccer and sitting at dinner with students and scientists from around the world, and I hope I actually did realize, even as a teenager, how awesome that was. I did my Bar Mitzvah at a Reform synagogue, but God was never really part of our family conversation; I don’t think I’ve said a prayer and actually meant it since I was 12 or 13. There are just enough Russian-speaking math geeks in Israel to form a robust subculture, and I was at the top of it: winning national competitions in math and getting drunk the next day on cheap vodka. I had a very strange four-year service in the IDF. I sweated blood for a degree in math and physics that got me a minimum-wage job in the Israeli desert, and then effortlessly breezed through a top-20 MBA in the US that suddenly made me a middle-class New Yorker. I work an easy job that leaves me with plenty of energy at the end of the day to play sports, perform stand-up, date, and improve my skills as a rationalist by examining my intellectual biases.
I stumbled on LW after reading an article about Roko’s #$&%!@ of all things, and the last few months were what I saw someone here describe as “epiphany porn”. Even before that, I read a lot on similar themes and took it all very seriously: “Fooled by Randomness” made me quit my job as a day-trader for a hedge fund, and “Thinking, Fast and Slow” changed my life in several ways, including the choice of car I bought. I’m very happy to start noticing changes in my brain after LW too. For example, I spent a lot of my time in the US arguing with anti-Zionists. I only recently realized that the hypocrisy and stupidity I usually find arrayed against me has pushed me into a pro-Israel affective death spiral of my own, which I’m now trying to climb out of. In general, I argue less about politics now and don’t plan to ever vote again. I just went to my first OB-New York meetup and hung out at the solstice concert, and I hope to become more and more engaged with LWers offline going forward.
The main result of my business school days is several entrepreneurial fantasies about “Moneyballing” things. One recent idea is to set up a personal philanthropy investment fund: people put in X% of their salary, which can be used only for emergencies or charity. This eliminates the psychological pain of giving money, increases giving, makes personal altruism much more focused and effective, and saves on taxes. I also came up with a better matching algorithm for dating websites. Dating in general is at the very top of my interests. While a rigorous model of Bayesian dating seems as unattainable as quantum relativity, I do find that my open-minded approach has gotten me into relationships that I didn’t even believe were an option a few years ago (that’s a discussion I’d love to get to somewhere else on this site).
And finally: where I hope to end up. Perhaps even a year ago I imagined I could be perfectly satisfied living a content middle-class life with a decent job, good relationships and fun hobbies. I realized that the world doesn’t care too much that I was always the smartest person in the room as a teenager, and that I’d do well to dedicate myself to humility. Unfortunately, LW changed that. I see now that things are changing and going to change unpredictably, and that smart people occasionally do make a very non-humble impact. I’m not in a rush to plunge myself into some grand project (like FAI) just for the sake of it, but I do feel that my life is getting too comfortable for comfort. When the waves come, I want to have built a rad surfboard.
Eliezer, thank you for writing a beautiful post. I do hope that the people of the future value my life more than the people of the present, and the fact that there are at least two people in the present who do (Eliezer and my mom ;-) ) is heartening.
I am quite convinced about cryonics in general, but I am not convinced at all that paying up right now for CI or Alcor is a smart investment. What’s the downside of just setting aside enough money for cryopreservation and choosing the best option when death looms?
Consider:
I am 27. If I die suddenly (without regaining consciousness even for a day) in the next decade it’s likely that I would die in a fashion (shot in the head, car crash) that won’t leave much of my brain to be preserved.
The chances that I’ll be in the US when I die are very far from certain (I’m a foreign citizen living in NYC currently).
If I decide a decade from now that I don’t want to cryopreserve, the fees would have been money wasted. I can’t force me-in-10-years into a decision.
Judging by the progress of modern medicine (advances in cancer treatment) and my family history (pretty good from a cardiovascular standpoint) it is very likely that my ticket out will be Alzheimer’s or another neurodegenerative disease. In that case, cryopreservation will only make sense if I commit suicide at the very onset of the disease and am frozen right away which may not be possible. If I get Alzheimer’s I may as well donate all my money to SIAI or Africa.
If the future moves in the direction we’re hoping for, it’s not unlikely that there will be more companies offering cryopreservation with better deals (e.g. lower fees, global coverage, eternal investment trust management).
Basically, what is the upside of signing up for one specific company and paying the fees, vs. knowing that I have made the decision to spend the money on cryopreservation instead of life-prolonging treatment and trusting my future cancer-diagnosed self to be brave enough to stick to it?
I was thinking specifically of the fees and not the life insurance. The Alcor fees are high enough that they’d be worth paying only if I were fairly certain I’ll be in a freezable situation (which is likelier if I’m dying 50 years from now) and that Alcor will still be the best option (which is unlikely given 50 years).
As for life insurance, I do have it right now because I don’t have the $50,000-$100,000 saved up that could be used to pay for cryopreservation. If I have the money saved up, I could afford to stop paying the premiums because life insurance has a net negative expectancy. At that point I’d rather keep exercising and eating veggies and keep the $100K in a safe mutual fund, waiting for the decision of how to cryopreserve to become more pressing.
Here’s an idea: instead of Alcor, why wouldn’t I name Eliezer the beneficiary of my life insurance policy with instructions to pay for my cryopreservation at the best affordable company available and take the remainder of the money for SIAI (as remuneration for his trouble)?
Do you think Eliezer writes so much (and so well) to increase the chances that future generations will be interested in reviving him? If you had the power, think about who you would rather thaw first: a prolific 19th-century philosopher whom you’ve read, or an anonymous 19th-century lawyer.
I guess the next thing after signing up for cryonics I should do is write a book :)
It’s always a bit of a shock when you’re the contrarian and you discover someone meta-contrarianizing you on the outside lane. For example, here’s an interesting triad I just recently became aware of:
Base: monogamy is assumed without discussion; cheating is the end of the relationship, unless maybe you confess and swear to never do it again.
Contrarian: open/poly relationship is agreed upon after discussion, it’s not cheating if there’s no lying.
Meta-con: non-exclusivity is assumed, no discussion. Cheating is whatever, just don’t tell me about it.
I held the first position since I was a teenager, the second since my early twenties. The third one I have recently heard from a couple of young ladies in New York, where polyamory is quite popular. While it’s hard for me to see rationally why the third option would be better (don’t ask don’t tell vs. open agreement), I find the meta-contrarianism of it extremely seductive… Yvain, you may have just saved my next relationship.
I recently wrote about why voting is a terrible idea and fell into the same error as Gelman (I assumed that a 49.9–50.1 split is a conservative prior). Wes and gwern, thanks for correcting me! In fact, due to the Median Voter Theorem and ever better polling and analysis, we may assume that the distribution of expected vote splits should have a peak at 50-50.
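For intuition on why the location of that peak matters, here’s a minimal R sketch of the chance that one vote breaks an exact tie under a simple binomial model. The electorate size and vote shares are made-up numbers for illustration, not estimates for any real election:

p_decisive <- function(n = 1e6, p = 0.5) dbinom(n / 2, n, p)  # chance of an exact tie among n other voters
p_decisive(1e6, 0.5)    # ~0.0008 when the expected split is exactly 50-50
p_decisive(1e6, 0.501)  # ~0.0001: even a 50.1-49.9 expected split shrinks it considerably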
Of course, there are other great reasons not to vote (mainly to avoid “enlisting in the army” and letting your mind be killed). My suggestion is always to find a friend who is a credible threat to vote for the candidate you despise most and invite him for a beer on election day, on the condition that neither of you will vote and you won’t talk about politics. Thus, you maintain your friendship while cancelling out the votes. I call it the VAVA (voter anti-voter annihilation) principle.
Malkina: I don’t think I miss things. I think to miss something is to hope that it will come back, but it’s not coming back.
Reiner: You don’t think that’s a bit cold?
Malkina: The truth has no temperature.
Cameron Diaz and Javier Bardem in The Counselor
Revolution is internal
Help yourself at any time
Evolution isn’t over
We are about to use our mind
Gogol Bordello, Raise the Knowledge
I guess that’s partly what we’re here for, right?
Myth: Americans think they know a lot about other countries but really are clueless.
Verdict: Self-cancelling prophecy.
Method: Semi-humorous generalization from a single data series, hopefully inspiring replication instead of harsh judgment :)
I decided to do some analysis of what makes people overconfident about certain subjects, starting with an old stereotype. I compared how people did on the population calibration question (#9) based on their country.
Full disclosure: I’m Israeli (currently living in the US) and would’ve guessed Japan with 50% confidence, but I joined LW (unlurked) two days after the end of the survey.
I normalized every probability by clipping extreme confidence values to 1% and 99%, counted any answer that seemed close enough to a misspelling of Indonesia as correct, and scored each answer according to the log rule.
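For concreteness, here’s roughly what that scoring looks like, sketched in R. The function and variable names are mine, and the survey’s exact normalization of the averages quoted below may differ; this just shows the shape of the rule:

# Clip reported confidences to [1%, 99%], then apply the log rule:
# log(p) if the answer counted as correct, log(1 - p) if it didn't.
log_score <- function(confidence, correct) {
  p <- pmin(pmax(confidence, 0.01), 0.99)
  ifelse(correct, log(p), log(1 - p))
}
log_score(c(0.5, 1.0), c(TRUE, FALSE))  # the 100% answer is clipped to 99% before scoring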
Results: Americans didn’t have a strong showing with an average score of −0.0071, but the rest of the world really sucked with an average of −0.0296. The reason? While the rate of correct answers was almost identical (28.3% vs. 28.8%), Americans were much less confident in their answers: 42.4% average confidence vs. 46.3% (p<0.01).
Dear Americans, you don’t know (significantly) less about the world than everyone else, but at least you internalized the fact that you don’t know much*!
Next up: how people who grew up in a religious household do on the Biblical calibration question.
*Unlike cocky Israelis like me.
Have you heard of Charlie Munger? Most people probably haven’t, which is part of why he’s a great (male, real-life) sidekick. Munger is the vice-chairman of Berkshire Hathaway and has been Warren Buffett’s right-hand man for decades. Munger is one of the examples in a book on partnerships by Michael Eisner (the former Disney CEO). One of the book’s main points is that 50-50 is a very unstable split in a business partnership, but if one of the partners is willing to stand half a step lower, the couple can achieve more.
You see this example a lot in sports, and by “you” I mean me because I’ve met few rationalists who care about sports as much as I do :) Scottie Pippen would’ve been an excellent player on his own, but being Michael Jordan’s sidekick made him an all-time great.
Since professional sports is very competitive and rewards “alpha dogs” with all of the money and fame (endorsement deals, max contracts, hottest groupies), players who could have been amazing Robins become mediocre Batmans. If players were only paid based on winning championships, I’m sure that would change. If your goal is to save the world, that’s the only goal and no one cares about “individual stats”. With this goal drawing quite a few heroes, being a sidekick may well be the best, noblest, and most effective way to contribute.
Do start-ups distribute according to a power law? In that case they would be somewhere in the middle between sports and saving the world.
In American sports leagues there’s a salary cap that’s the same for each team (a flat distribution). Being the second-best player on a championship team almost always means less money than being the #1 star on a bad one. Usually athletes only start taking pay cuts to play for contenders towards the end of their careers.

If start-up earnings are distributed exponentially, it would seem that being #5 at a top-20 start-up is better than being #1 at a top-200 one; see the toy sketch below. On the other hand, you mentioned other incentives, like fame (decision power, ego...), that would confound the issue. It’s hard to care about “the company” as a goal separate from yourself; otherwise being fired from a company wouldn’t change our opinion of it (for those who haven’t ever been fired: I have, and it does).

If you’re trying to save the world, the payoff distribution should be discrete: 0 if you fail, [your favorite number here] if you win. If Sauron wins, all hobbits are equally screwed. Once the ring was destroyed, did Frodo get a higher payout than Sam? Not if you derive positive utility from having 10 fingers :)
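Here’s that toy sketch: if company value falls off by rank as a power law, even a much smaller slice of a top-20 company can beat a bigger slice of a top-200 one. Every number here (the exponent, the top value, the equity shares) is invented purely for illustration:

# Toy model: company value by rank follows a power law, value ~ rank^(-alpha).
value_by_rank <- function(rank, alpha = 1.5, top_value = 1e10) top_value * rank^(-alpha)
payoff_5th_at_top20  <- 0.001 * value_by_rank(20)   # employee #5 with 0.1% of a top-20 company
payoff_1st_at_top200 <- 0.01  * value_by_rank(200)  # employee #1 with 1% of a top-200 company
payoff_5th_at_top20 / payoff_1st_at_top200          # ~3.2 with these made-up parameters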
I like the definition of eucatastrophe, I think it’s useful to look at both sides of the coin when assessing risk.
Far out example: we receive a radio transmission from an alien craft that passed by our solar system a few thousand years ago looking for intelligent life. If we fire a narrow beam message back at them in the next 10 years they might turn back, after that they’ll be out of range. Do we call them back? It’s quite likely that they could destroy Earth, but we also need to consider the chance that they’ll “pull us up” to their level of civilization, which would be a eucatastrophe.
More relevant example: a child is growing up, his g factor may be the highest ever measured, and he’s taking his first computer science class at 8 years old. Certainly, if anyone in our generation is going to give the critical push towards AGI, it’s likely to be him. But what if he’s not interested in AI friendliness and doesn’t want to hear about values or ethics?
Let me be an Excel sidekick among statistical analysis heroes.
I saw the OKCupid stuff as well. I ran a quick test in Excel to see if the variance in attractiveness contributes to the decision to meet beyond the attractiveness mean. Here’s what I got from the regression:
            Coefficients    Standard Error   t Stat         P-value
Intercept   -0.569931558    0.042946471      -13.27074239   4.65749E-35
avg_attr     0.156634411    0.005238302       29.90175402   2.6299E-117
attr_std     0.028596624    0.012485497        2.290387431   0.022377128
The dependent variable is the match percent (the percent of people who decided they want to date the ratee); avg_attr is the mean and attr_std the standard deviation of the physical attractiveness ratings. attr_std is not attractiveness to STDs ;-)
As we can see, the coefficient for attractiveness deviation is significantishly positive. It actually has a small negative correlation with match and a larger negative correlation with attractiveness. This means that there is more consensus on the attractiveness of prettier people. Holding attractiveness constant, variance, which is visible for a single rater as an “unusual look”, increases the chances that people will want to date you. Put some flowers in your hair!
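For anyone who wants to replicate this outside Excel, the same regression is a one-liner in R. The data frame and column names (okc, match_pct, avg_attr, attr_std) are just my guesses at labels, not the actual dataset’s:

fit <- lm(match_pct ~ avg_attr + attr_std, data = okc)  # okc holds one row per rated profile
summary(fit)  # should reproduce the coefficient table above, up to rounding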
You know what really helps me accept a counterintuitive conclusion? Doing the math. I spent an hour reading and rereading this post and the arguments without being fully convinced of Eliezer’s position, and then I spent 15 minutes doing the math (R code attached at the end). And once the math came out in favor of Eliezer, the conclusion suddenly doesn’t seem so counterintuitive :)
Here we go. I’m dividing all the numbers by five to make the code work, but it’s pretty convincing either way.
The setup—Researcher A does 20 trials always, researcher B keeps doing trials until the ratio of cures is at least 70% (1 cure / 1 trial is also acceptable).
E—The full evidence, namely that 20 patients were tried and 14 were cured.
H0 - The hypothesis that the success rate of the cure is 60%.
H1 - The hypothesis that the success rate is 70%.
Pa—Researcher A’s probabilities.
Pb—Researcher B’s probabilities.
In this setup, it’s easy to see that Pa and Pb aren’t equal for everything you want to measure. For example, for any evidence E that doesn’t contain exactly 20 observations, Pa(E) = 0. However, Reverend Bayes reminds us that the strength of our EVIDENCE depends on the odds ratio, and not on all the sub-probabilities:
P(H1|E) / P(H0|E) = [P(H1) / P(H0)] × [P(E|H1) / P(E|H0)], a.k.a. posterior odds = prior odds × odds ratio of the evidence. Assuming that the prior odds are the same for both researchers, let’s calculate the odds ratio under both Pa and Pb and see if they are different.
Pa(E|H0) = 12.4%, as a simple binomial distribution: dbinom(14,20,0.6). Pa(E|H1) = 19.1%. The odds ratio: Pa(E|H1)/Pa(E|H0) = 1.54. That’s the only measure of how much our posterior should change. If originally we gave each hypothesis an equal chance (1:1), we now favor H1 at a ratio of 1.54:1. In terms of probability, we changed our credence in H1 from 50% to 60.6%.
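For completeness, researcher A’s odds ratio is a one-liner in R:

dbinom(14, 20, 0.7) / dbinom(14, 20, 0.6)  # ~1.54, researcher A's likelihood ratio for E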
What about researcher B? I simulated researcher B a million times in each possible world, the H0 world and the H1 world. In the H0 world, evidence E occurred only 5974 times out of a million, for Pb(E|H0) = 0.597%, which is very far from 12.4%. It makes sense: researcher B usually stops after the first trial, and occasionally goes on for zillions! What about the H1 world? Pb(E|H1) = 0.919%. The odds ratio: Pb(E|H1) / Pb(E|H0) = wait for it = 1.537. Exactly the same!
I think all the other posts explain quite well why this was obviously the case, but if you like to see the numbers back up one side of an argument, you got ’em. I personally am now converted, amen.
R code for simulating a single researcher B:
resb <- function(p = 0.6) {
  cures <- 0
  tries <- 0
  while (tries < 21) { # Since we only care whether B stops after 20 trials, we don't need to simulate past 21.
    tries <- tries + 1
    cures <- cures + rbinom(1, 1, p)
    if ((cures / tries) >= 0.7) return(tries)
  }
  tries
}
R code for simulating a million researchers B in H1 world:
x <- sapply(1:1000000, function(i) resb(0.7))
length(x[x == 20])
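And the H0-world counterpart, plus the simulated odds ratio; since this is a Monte Carlo estimate, your counts will wander a bit around the ones quoted above:

y <- sapply(1:1000000, function(i) resb(0.6))  # a million researchers B in the H0 world
n_H0 <- length(y[y == 20])   # analogous to the 5974 I got for Pb(E|H0)
n_H1 <- length(x[x == 20])   # from the H1 run above
n_H1 / n_H0                  # comes out near 1.54, matching researcher A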
The Other Path—a poem
I think that those are two distinct problems: refusing to take a position (Crono) and refusing to reconsider an entrenched position (Vaniver). I wrote more about the latter; I think I’ve seen it happen more often to people around me. I especially find it staggering how little effort people spend on picking the opinion to defend vs. how much they expend on defending it.
It’s not just about opinions that are worthwhile to have and defend for political reasons and tribe affiliation. My roommate, for example, will automatically pick the side against me in any argument just to be contrarian, even if it’s just between us. Then he will spend hours on rationalization and confirmatory research until he has fully convinced himself of a position that he had no prior cause to favor (even if I’m the dumbest person in the world, simply reversing my opinions can’t be a truth-signal). Needless to say, after every exercise in this vein he congratulates himself on being extremely intelligent because he “fought well”.
Did you know that according to the last survey females (sex at birth) on LessWrong have a higher IQ with p=0.058?
Irresponsible speculation alert: people join LW because they dig the ideas and/or because they dig the community. The ideas are more enticing for people with higher IQ; the community is more enticing for... guys. Thus, at equal levels of IQ, more women will be filtered out because they feel (on average) less comfortable with the community.
Like I said, I don’t assign the above explanation an overwhelming epistemic status, but I do think that the IQ results are non-zero evidence against point #8 and general arguments of the “women aren’t smart enough for LW” type.
A good way to define humility, I think, is as the inverse of your willingness to argue with future-you. Imagine that yourself from a few weeks in the future (or 5 years, in Matthew McConaughey’s case) steps out of a time machine. Would you be willing to concede that he knows more?
Examples:
The student who is certain of his answer will expect that it will not change, so he is not humble at all about it.
The student who is resigned to the fact that the answer is unknowable expects that future her doesn’t know any better so she’s not humble either.
The student who rechecks her answer anticipates that future her found a mistake, otherwise she wouldn’t bother checking. That’s how you know she’s humble.
I’m humble about my assessments of the probability of creating an AGI. I would immediately take future-me’s word on it because he will surely know more.
I’m not humble about my belief in MWI, because I don’t expect that future-me will know more about it. The only thing that could change my mind is an experiment disproving superposition for cat-sized objects, which I don’t expect me-in-5-years to see. If future-me doesn’t believe in MWI, I would need to hear all of his arguments; I wouldn’t agree with him on the spot (maybe I’m going to get hit on the head in two years?)
I believe that people systematically underestimate the amount that the world, themselves and their opinions will change in 5 years. That would amount to a bias for under-humility.