The End (of Sequences)
This concludes the final sequence on Overcoming Bias / Less Wrong. I have not said everything I wanted to say, but I hope I have said (almost) everything I needed to say. (Such that I actually could say it in these twenty-one months of daily posting, August 2007 through April 2009.)
The project to which Less Wrong is devoted—the art and science and craft of human rationality—is, indeed, important. But the calculus of choosing among altruistic efforts is, in some ways, a calculus of who can take your place. I am more easily replaced here, than elsewhere. And so it has come time for me to begin pulling my focus away from Less Wrong, and turning toward other matters, where I am less easily replaced.
But I do need replacing—or rather, the work that I was doing needs replacing, whether by one person or by many people or by e.g. a karma system.
And so my final sequence was my letter that describes the work that I can already see remaining to be done, gives some advice on how to configure the effort, and warns direly against standard failure modes.
Any idea that can produce great enthusiasm is a dangerous idea. It may be a necessary idea, but that does not make it any less dangerous. I do fear, to a certain extent, that I will turn my focus away, and then find out that someone has picked up the ideas and run with them and gotten it all wrong...
But you can only devote your whole life to one thing at a time. In those ways I have thought to anticipate, at least, I have placed a blocking Go stone or two, and you have been warned.
I am not going to turn my attention away entirely and all at once. My initial plan is to cut back my posting to no more than one post per week.
At some future point, though, there must come a time when I turn my attention entirely away from building rationalism, and focus only on that other task.
So, yes, just to belabor the point—if there’s going to be a lasting community, and not just a body of online writing that people occasionally stumble across, it needs to set itself up to run without me.
The last explicit dependency left on me is promoting posts, and I’ve been mostly doing that based on user voting (though not entirely; my activation threshold is lower for posts I perceive as higher-quality). I plan to start trying to delegate that power to co-editors shortly.
For myself… I’ve been feeling rather burned out on writing, so I’m thinking of taking a short vacation and then studying math and working over the summer, before I go back to producing a book.
I’m no longer certain about the time-investment wisdom of trying to convert the OB/LW sequences into minibooks. The Wiki might accomplish a lot of the same purpose of making the info more accessible and organizing it. We’ll see how the Wiki develops (especially once my old posts are imported, which should happen Any Time Now).
Now, while I’m on semi-vacation, is a good time to have me speak at your hedge fund. If you wait until after I write a book and it comes out, then, if things go remotely well, it will cost you a lot more money to have me speak (because the marginal utility to me of additional money will have gone way down, and demand gone up). Right now, though, additional money does have substantial marginal utility to me. So those of you who work at the hedge funds who were rational enough to survive: please request me for your speaker programs. I’m told that I give a wonderful, amazing, etcetera talk for audiences interested in rationality, considerably more than twice as useful as talks from big-name speakers who cost considerably more than twice as much.
If the rationality book is written, sold, and takes off, I may well vanish entirely off the face of the Earth, all purposes of publicity having already been served. That is the optimal and desirable outcome—it means I am allowed to specialize narrowly.
I would like to take this moment to confess, once again, that I have had ulterior motives in all this—as is right and proper to a rationalist. It is not the pure service of rationality that diverted my focus here… though I have served that overt cause as a thing in its own right, with its own integrity.
And I will not insist on recruiting all the rationalists that I create...
But still...
In the end, these 21 months were wagered upon your active response, not your passive possession of truth. Even those of you who are not moved to care about the thing that I protect, may still be moved to align their activities with Seasteading, or the Methuselah Foundation, or Givewell, or cryonics. And, perhaps, tell them that we sent you, so that they know that this “rationality” business is working to their favor; so that maybe they will say a thing or two about rationality. And someone who encounters the craft there, will read on it further, and eventually donate professionally specialized hours (in one form or another) to the Singularity Institute. What goes around comes around, I hope.
To the extent that you don’t feel moved to do anything in particular—even include a link in your signature, or link to apposite posts now and then in your blog—then I have lost my gamble of these 21 months. Or I have lost that part of the stakes which was about you and decision processes similar to you. (No, that doesn’t mean you should seize this opportunity to post about how I lost my gamble with you. You should know better by now, if you want any rationalist cause to get anything done ever, whether or not you are a part of it.)
And this advice: If there is some rationalist cause you have decided to help eventually, I advise you very strongly to help that cause now—even if it’s just a tiny amount. One of the regularities I have discovered, working in the nonprofit industry, is that people who donated last year donate the next year, and people who are planning to donate next year will, next year, still be planning to donate “next year”. The gap between little helpers and big helpers is a lot more permeable than the membrane that separates helpers and procrastinators. This holds whether you would help my own cause, or any of the other causes that have rationality as their common interest.
As for why Earth needs rational activists in particular—I hope that by now this has become clear. In this fragile Earth there are many tasks which are underserved by irrational altruists. Scope insensitivity and the purchase of moral satisfaction lead people to donate to puppy pounds as easily as existential risk prevention; circular altruism prevents them from going so far as to multiply utilons by probabilities; unsocialized in basic economics, they see money as a dirty thing inferior to volunteering unspecialized labor; they try to purchase warm fuzzies and status and utilons all at the same time; they feel nervous outside of conventional groups and follow the first thought that associates to “charity”...
And these are all very normal and human mistakes, to be sure—forgivable in others, if not in yourself. Nonetheless, I will advise you that a rationalist’s efforts should not be wasted on causes that are already popular far outside of rationalist circles. There is nothing remotely approaching an efficient market in utilons.
Is all this inclusiveness a pretense? Did I, in the end, gamble only upon the portion of the activism that would flow to my own cause? Yes, of course I did; that is how the calculation comes out when I shut up and multiply.
But I have faithfully served the integrity of that pretense, because that inclusiveness matters to my own cause as well.
So I say to you now, on behalf of all our causes: Do, whatever you may find worth doing.
My only question is: who gets to be Judas?
Thanks. Er. Now, please delete your comment because we’re taking up a whole lot of space.
Eliezer,
I’m still curious about one thing: how did you develop the art in the first place? I mean I get that you are an auto-didact, but did you have any mentors? How did you choose the next thing/book to study/read, how did you grow?
Eliezer,
Has LessWrong so-far served the ends you intended in 2009?
thank you
I was going to say just “thank you”, but instead I donated $30 to the Singularity Institute (I had already donated before, so I guess Eliezer was right).
I encourage other regular OB/LW readers to donate too. That’s the least we can do considering the value we’ve got in the past 21 months.
Edit: Here’s an easy link so you have fewer excuses not to do it: http://singinst.org/donate/
Awww, shucks! You warm my heart^H^H^H^H^H limbic system. (Consider this as a reply to (almost) everyone.)
Next time, you can use ^W ;)
Are you going to scale back comments, too, or just top-level posts?
I think you should remix the material into a book. It seems like a relatively small amount of additional effort that could reach a much larger audience. A lot more people still read books than blogs, even if it doesn’t seem that way to us. There is a lot of value in having a single distilled compendium of the worldview that people can give as gifts, recommend to friends, etc, that is a lot more accessible than a site like Less Wrong.
(and thanks for the shout-out!)
Anyone know why this was voted down?
I hate to see you go but it’s true, there is another thing that greatly needs and deserves your full attention. So thank you, and safe journey. I hope what you planted here grows strong, and you look back on this time as being well worth the gamble.
Thank you. It’s been a truly wonderful time. Not thanks to you alone, even if you were the driving factor. It will be difficult for anyone to fill your shoes, but then again, LW has shown many others with great promise, enough that it can become a community much greater than it already is, which would mean success for you in this endeavour.
While I’m sad to see you give up your central role, for yours are the posts that I’ve in general found to be the most eye-opening and enjoyable, it is also a relief to see you returning focus to the core job of SIAI, as it indicates greater confidence about your chances of success in that. Still, it would be interesting to hear why you considered this detour from concentrating on FAI so important to do at the point you did it.
Thanks for all of the posts, Eliezer. You are an inspiration. I hope that entropy works slowest amongst your neurons. ( I know, praying and hoping is just wasted time, but can’t help myself here.)
I guess one is supposed to keep this dignified, but one small nit-pick: please postpone your plans of doing a Salinger / Richard Rainwater until after giving a full and proper book tour where you have had a chance to debate a lot of people. Just a small piece of advice.
...so that I get bad ideas refuted, or some other reason?
Yes, to get ideas refuted also, though the marginal benefit may not be much, since a good number of people have tried and tested these ideas in the debate rooms of OB/LW. I think it will be more to introduce these ideas to a larger audience, with the purpose of finding out any unspoken, unmentioned emotional connections that these ideas might have or introduce.
It has been mentioned before that our kind might have similarities that are not obvious to us. We have to step outside our own box. We may be a little less egalitarian compared to a society that at least pays major lip-service to egalitarianism. We might be more comfortable with ambiguity and uncertainty about certain topics, where the average journalist might want certainty. We might need more certainty and non-contradiction about other matters (science-related) where the average journalist might just agree to disagree.
A book tour would also help you prepare story metaphors that would work among a wider range of people. If your idea of Friendly AI is still CEV, it would help you get a better feel for what the extrapolated wishes of humanity would be, I hope. Am I right or wrong? Or is it beside the point?
I find my reaction to this post interesting. It is less “Well, now what?” and more “Yeah… and?”
I attribute this both to being new and to the fact that my (fledgling) loyalty is to Less Wrong and not Eliezer or Overcoming Bias. My value assessment for the latter two firmly revolves around things they have already done. When I read this post it confirms that the value I can receive from Eliezer is in the past. Less Wrong is valuable because of something in its future. So, while digging through all of Eliezer’s radio messages, I remember that my primary task is to keep moving forward and learning about the maze I am in. Standing around pondering what Eliezer’s maze must have looked like, and appreciating how Eliezer was so smart as to describe it this way, is only so useful.
This, naturally, is both good and bad. The good is that I am some small evidence that Eliezer did find a way to replace himself. The bad is that I may start picking up ideas and running around with them while getting them all wrong. I may be a bad spark of enthusiasm that sees just enough of a promise at the end of a maze to start madly dashing around in circles. The real test is whether Less Wrong can succeed as a whole, where any one individual may be wrong, and hold enough courage to squelch the bad sparks.
In the meantime, I am off to explore my maze with newly opened eyes. If my radio messages do not jibe with the Way, feel free to tune me out, vote me down, and I will wander off alone. But for goodness sake, don’t touch the bad spark.
(Note) I have been doing minor editing on this comment. Just FYI.
Another “thank you”. It’s been...remarkable.
I hope you can continue to use your symbolic position to keep us cohesive for a while even when you’re not posting at quite the same rate. And I look forward to seeing more of your often very interesting ad hoc observational posts now that you don’t feel compelled to stick to a sequence.
Let me join in the love-fest. Your articles have been the largest single influence on my world-view. Not surprising, since after two years of reading Overcoming Bias, I’ve probably read more words written by you than any individual non-fiction writer. Hope you can get some good vacation and come back strong. Until then, it looks like we need to cheerlead Yvain into posting daily.
Best of luck Eliezer, the past two years of reading religiously have changed the way I think about pretty much anything. As with Galactica, the finale gives me an excellent rationale for heading back to the start. I also take a strange pleasure from reading my nonsense comments from early on.
Looking forward to seeing your name in lights and saying ‘that guy told me I was wrong hundreds of times!’
[Ben Jones on OB]
Standing_Ovation
Godspeed. Can’t wait for the book!
May your confidence intervals forever be as narrow as your circle of influence is wide.
I’ve been a lurker of OB/LW for a little while. I will truly miss your postings, which I thank you greatly for. (Perhaps with a donation to the SI from my next paycheck.) I’ve found them to be very well-written and well-explained, and although you’re currently at an intellectual level higher than my own, you don’t write cryptically.
I have read some of your old stuff and have noticed an improvement, so I’ve decided to try my own experiment of writing something of size and complexity each day. Perhaps one of these times I’ll put it on LW, and not worry about looking like a complete idiot when it’s juxtaposed with something of yours. =P
Eliezer has done an excellent job of conveying to us his knowledge and experience, and for that we are grateful. But this is only the very beginning, the very first step towards creating an Art. One rationalist has shared his skills—this gives us a single data point. Now what?
We need to learn how each rationalist came to their current beliefs, to identify common elements and patterns. It is through this process that we can begin to outline a curriculum, so that we can begin trying and improving it. Rationalists seem to be predominantly young males, but eventually we will have children ourselves—how do we teach rationality to them? What could they accomplish if they have been raised with rationality their entire lives?
There is much work to be done. Eliezer is a respected colleague who provided a seminal contribution, and he will be remembered well, but it is time to forge ahead into the future.
Eliezer, you are a great inspiration, thank you and I wish you success in everything you do!
As a year-long reader and a never-commenter, thanks for writing. Your blog posts have enriched my life just as my world-views are coming into twisting, bending, ever-changing focus.
And I shared OB with my roommate today. With any luck, he too will find these paths.
I doubt you need the encouragement, but keep up the great work.
I’ll jump in with the “thank you”s too. :) Reading your posts generally seemed to not just teach me something, but “tickled my brain” the right way to actually help me absorb whatever the subject matter was.
But yeah, back to work on… the other stuff! :)
Does anyone know about any Chicago area Singularity-esque groups at which doing might be done? I am interested in volunteering amateur labor in the hopes of progressing toward volunteering professional specialized labor.
Anyone interested in “volunteering amateur labor in the hopes of progressing toward volunteering professional specialized labor” around the risks and promise in AI should probably send me an email in the next few days; if we do take a bunch of summer interns, a lot of the idea will be to get people up to a level where they can later do useful work on their own. Folks interested in “progressing toward professional specialized labor” who aren’t free to come to the bay area this summer should probably still contact me.
I know there’s a Chicago-area transhumanist meetup, or was. Closest thing to a local effort might be the Transhumanist Student Network, if that’s still around and still run out of Chicago. But it’s been a while.
Thanks, Eliezer. I’ve been reading since the days of sysopmind and sl4. Imagine my stunned delight when I checked yudkowsky.net one day, and discovered that you were blogging … daily! Fortunately for me, I think I still have a substantial backlog of your OB posts to read. You remain my favorite author, and I always look forward to your future works.
I hope you don’t actually withdraw from the community. I imagine most of us would be very sad to see that happen—not because the community depends on you for its survival, but simply because we enjoy your contributions. (Maybe I shouldn’t admit to this, but I have found your posts to be a great source of pleasure reading, whether or not I was explicitly learning anything new.)
Most folks here presumably have “day jobs” of one sort or another that aren’t specifically involved with rationality per se; yet they still find time to hang out here. I hope it can be likewise with you.
Good luck.
Good thing, I was planning a coup to turn everyone here into theists.
(Maybe you read what I wrote. It didn’t sound sincere coming from me... even though it was. So I took it out.)
I’ll continue my job of being ‘the outsider’ that helps define what the group is not...