After Eliezer Yudkowsky was conceived, he recursively self-improved to personhood in mere weeks and then talked his way out of the womb.
This makes it sound more like a cult rather than a group of rational people working together.
...they “grew long mustaches which they would twirl with melodramatic flair as they savaged a programmer’s code”, for god’s sake. This is just a group of people who decided to have fun with their identities, go about their jobs in a bit more theatrical a manner than usual, and make people’s days more surreal, and managed to get their work done more effectively and more enjoyably in the process. (Rational doesn’t mean boring.) I’m sort of used to random things in nearby memespace regions being accused of being cults, but this doesn’t even seem to have the surface similarities that are usually brought up to support those accusations.
While searching creationist websites for the half-remembered argument I was looking for, I found what may be my new favorite quote: “Mathematicians generally agree that, statistically, any odds beyond 1 in 10 to the 50th have a zero probability of ever happening.”
That reminds me of one of my favourites, from a pro-abstinence blog:
When you play with fire, there is a 50⁄50 chance something will go wrong, and nine times out of ten it does.
I donated $100 yesterday. I hope to donate more by the end of the matching period, but for now that’s around my limit (I don’t have much money).
Eliezer seals a cat in a box with a sample of radioactive material that has a 50% chance of decaying after an hour, and a device that releases poison gas if it detects radioactive decay. After an hour, he opens the box and there are two cats.
Anyone who predicts that some decision may result in the world being optimized according to something other than their own values, and is okay with that, is probably not thinking about terminal values. More likely, they’re thinking that humanity (or its successor) will clarify its terminal values and/or get better at reasoning from them to instrumental values to concrete decisions, and that their understanding of their own values will follow that. Of course, when people are considering whether it’s a good idea to create a certain kind of mind, that kind of thinking probably means they’re presuming that Friendliness comes mostly automatically. It’s hard for the idea of an agent with different terminal values to really sink in; I’ve had a little bit of experience with trying to explain to people the idea of minds with really fundamentally different values, and they still often try to understand it in terms of justifications that are compelling (or at least comprehensible) to them personally. Like, imagining that a paperclip maximizer is just like a quirky highly-intelligent human who happens to love paperclips, or is under the mistaken impression that maximizing paperclips is the right thing to do and could be talked out of it by the right arguments. I think Ben and Robin (“Human value … will continue to [evolve]”, “I think it is ok (if not ideal) if our descendants’ values deviate from ours”) are thinking as though AI-aided value loss would be similar to the gradual refinement of instrumental values that takes place within societies consisting of largely-similar human brains (the kind of refinement that we can anticipate in advance and expect we’ll be okay with), rather than something that could result in powerful minds that actually don’t care about morality.
And I feel like anyone who really has internalized the idea that minds are allowed to fundamentally care about completely different things than we do, and still thinks they’re okay with that actually happening, probably just hasn’t taken five minutes to think creatively about what kinds of terrible worlds or non-worlds could be created as a result of powerfully optimizing for a value system based on our present muddled values plus just a little bit of drift.
I suppose what remains are the people who don’t buy the generalized idea of optimization processes as a superset of person-like minds in the first place, with really powerful optimization processes being another subset. Would Ben be in that group? Some of his statements (e.g. “It’s possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences. But I’m not terribly certain of this, any more than I’m terribly certain of its opposite.” (still implying a privileged 50-50 credence in this unsupported idea)) do suggest that he is expecting AGIs to automatically be people in some sense.
(I do think that “Value Deathism” gives the wrong impression of what this post is about. Something like “Value Loss Escapism” might be better; the analogy to deathism seems too much like a surface analogy of minor relevance. I’m not convinced that the tendency to believe that value loss is illusory or desirable is caused by the same thought processes that cause those beliefs about death. More likely, most people who try to think about AI ethics are going to be genuinely really confused about it for a while or forever, whereas “is death okay/good?” is not a confusing question.)
MBlume is the sort of person who could run up to someone while wearing a Nazi uniform, covered in blood, with swords in both hands, shouting about an imminent nuclear blast, all without coming off as threatening.
It’s true. I’ve seen him do it.
I’m pretty sure you can guess where that conversation went...
You started eating month-old infants?
I think you’re mistakenly equivocating between two senses of “wrong with”: moral wrongness and lack of rational justification. If there are no moral truths, then of course it’s not immoral to believe there are moral truths, but it’s not epistemically rational to believe it, which is the relevant point among people who care about epistemic rationality.
The allegation that cryonics is pseudoscience reminds me of the allegations that Singularitarianism/Transhumanism are “atheist religion”, “the rapture for nerds”, etc. That confusion, I think, comes when people see the questions we’re investigating — “Could we live forever?”, “Could we end suffering?”, etc. — and assume that we’re answering the questions in a way similar to how religion does… or they don’t even think to remember why they believe religion is bad, and they assume that it’s the questions rather than the answers. Obviously, the problem with religion isn’t the questions it asks, nor the motives for asking them; the problem is the way religion acquires answers to those questions. The same applies to seeking eternal life. Eternal life as a goal isn’t wishful thinking; it’s wishful thinking when people mistakenly believe that the goal is easy or has already been reached (“you can live forever if you believe in Jesus”, etc.). Yet it’s not surprising that many perfectly intelligent people buy into these memes. They are used to hearing completely bullshit answers to these completely legitimate questions, so they get to the point where the questions themselves set off their bullshit alarms, even in the context of attempting to investigate them within a rigorous scientific/rational framework.
The “singularity == religion” and “cryonics == pseudoscience” memes are comparable to someone in the early 1960s comparing the Apollo program to the story of the Tower of Babel, and then dismissing the program on that basis as a technically infeasible religious fantasy.
I paint question marks on boxes and leave hallucinogenic mushrooms in them.
Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can’t recall the exact context — one of them said: “…but no mortal man wants to live forever.”
I said: “I do!”
He paused a moment and then said: “Hmm. Yeah, so do I.”
I think that’s the fastest I’ve ever talked someone out of wise-sounding cached pro-death beliefs.
As usual, this is better settled by experiment than by “I just know”. My favourite method is holding my nose and seeing if I can still breathe through it. Every time I’ve tried this while dreaming, I’ve still been able to breathe, and, unsurprisingly, so far I’ve never been able to while awake. So if I try that, then whichever way it goes, it’s pretty strong evidence. There — now it’s science and there’s no need to assume “I feel that I know I’m awake” implies “I’m awake”.
Of course, if you’re the sort of person who never thinks to question your wakefulness while dreaming, then the fact that you’ve thought of the question at all is good evidence that you’re awake. But you need a better experiment than that if you also want to be able to get the right answer while you actually are dreaming.
[Apologies if replying to super-old comments is frowned upon. I’m reading the whole blog from the beginning and occasionally finding that I have things to say.]
“A witty saying proves nothing.”—Voltaire
I’ve always found that useful to keep in mind when reading threads like this.
I think there are other important reasons for the comparative success and effectiveness of PUA; the lack of concern for sugar-coating and political correctness is probably part of it, but that may be more of a consequence of what drives it, rather than a necessary precondition for it.
They have something to protect. Not a Great Cause, certainly, but a thing-to-protect nonetheless. PUA may not immediately sound like it matches “more than one’s own life has to be at stake, before someone becomes desperate enough to override comfortable intuitions”, but consider why the prospect of having commitment-free sex with lots of beautiful women may indeed seem higher stakes than life itself, for many heterosexual men...
(I’m reminded of the words of Philip J. Fry: “So you have to choose between life without sex and a hideous, gruesome death? . . . Tough call.”)
They’re playing to win, not just to convince themselves that they tried. I expect that PUA communities don’t reward trying nearly as much as they reward winning (if they reward trying at all). (And, of course, male brains themselves reward winning (at this particular thing) much more than they reward trying. As do many male social hierarchies.)
They have a natural drive to become stronger. I’m guessing that, for many of the guys who’d be into PUA in the first place, the prospect of even more and/or better sex would never fail to be compelling (or would at least have a very high ceiling), no matter how successful they already are.
Although rationality (including instrumental rationality, including most of what we’d call “self-help”) is a common interest of many causes, focused communities develop stronger and more precise arts. And I can’t think of a more single-mindedly focused instrumental-rationality community than PUA. Probably one big problem with self-help is that it aims to help all kinds of people with all kinds of problems achieve all kinds of goals; there’s too much ground to cover. Whereas PUA aims to help a few kinds of people with a few kinds of problems achieve essentially one goal. Its target demographic is large enough to produce successful communities, but specific enough to produce finely-targeted advice.
(Disclaimer: This comment shall not be taken as an endorsement of PUA. Overall I’m not a fan of it. But that should be separate from whether we can discuss it in the context of understanding the generalizable aspects of its instrumental success.)
I’m seeing tons of this on Facebook regarding Haiti relief. A proliferation of groups and events like “Wear Red for Haiti” and “Pray for Haiti” and “For every person who joins this group, I’ll give $1 for Haiti, because I’m a millionaire attention whore, and hey look someone wrote ‘gullible’ on the ceiling” (paraphrasing, granted) and “Sending Reiki Energy Healing to Haiti” (*RAGE*). I feel like they could all have the same title: “Join here to feel better about not donating actual money to actual people doing actual helpful work in Haiti.”
Apparently, many humans have a superpower whereby they can force themselves to do things they do not already feel pull-motivated to do, as though lifting themselves by their own bootstraps. I’m very jealous of this power and also very frustrated that most people who do have it are also unfamiliar with the typical mind fallacy and are confused about free will and think they understand their power but can only “explain” it in terms that sound to me like childish platitudes by now and certainly don’t have any technical content, so of course they usually don’t believe me or don’t understand when I say that I cannot even imagine what the fuck that ability would feel like. (Actually, worse, usually they think they understand and believe me but they clearly don’t, because the next minute they’re right back to the childish platitudes and the free will confusion and the acting like sentences like “Put one foot in front of the other” are somehow magically supposed to move me.) Urgh.
From Luke’s interview with Eliezer:
LUKE: Well, Eliezer, one last question. I know you have been talking about writing a book for quite a while, and a lot of people will be curious to know how that’s coming along.
ELIEZER: So, I am just about finished with the first draft. The book seems to have split into two books. One is called How to Actually Change Your Mind, and it is about all the biases that stop us from changing our minds, and all these little mental skills that we invent in ourselves to prevent ourselves from changing our minds, and the counter-skills that you need in order to defeat this self-defeating tendency and manage to actually change your mind.
It may not sound like an important problem, but if you consider that people who win Nobel prizes typically do so for managing to change their minds only once, and many of them go on to be negatively famous for being unable to change their minds again, you can see that the vision of people being able to change their minds on a routine basis, like once a week or something, is actually the terrifying Utopian vision that I am sure this book will not actually bring to pass. But it may nonetheless manage to decrease some of the sand in the gears of thought.
LUKE: Well it sounds excellent to me and what’s the second book that this has become?
ELIEZER: That’s all the basics of rationality that ought to be taught in grade school and are actually just taught piecemeal in various post-graduate courses.
What is truth? What is evidence? Probability is in the mind. What does it mean to say that a hypothesis is simple? How do you do induction?
Reductionism. What does it mean to be in a universe where complex things are made of simple parts? Just covering all the basics, really.
From John Baez’s interview:
Right now my short-term goal is to write a book on rationality (tentative working title: The Art of Rationality) to explain the drop-dead basic fundamentals that, at present, no one teaches; those who are impatient will find a lot of the core material covered in these Less Wrong sequences:
though I intend to rewrite it all completely for the book so as to make it accessible to a wider audience. Then I probably need to take at least a year to study up on math, and then—though it may be an idealistic dream—I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)
As far as I know, little if anything has been announced in any official capacity (release dates, publisher, etc.).
At least in some cases, the demand for specific alternatives and self-justification may serve as a conversation halter, when you’re criticizing something that someone doesn’t want criticized. I recall that when I was 13 or 14 or so, I was arguing politics with a friend, and when I argued against the merits of some particular policy of the then-current presidential administration, or said something that implied I thought the administration was bad, he would often say something to the effect of “Well, I suppose you think you could do a better job running the country?” At the time, I might have flippantly replied “Yes!” (I don’t quite remember what I did say, but I was probably far from arguing rationally and in good faith myself), but regardless, that does seem to be a logically rude rhetorical pattern, in that it shifts the discussion from the argument to the arguer, when that may not be at all relevant to the actual points being made. (And of course, you see that pattern being employed by plenty of Mature Adults and TV pundits and such, not just by young teenage boys.)
Also, status hierarchies probably come into play in disapproval of criticism; if you’re an ordinary powerless voter and you criticize the president, or if you’re a low-level worker and you criticize your company’s CEO, it may come off as a status grab from the perspective of people with higher status than you (even if they are still below whomever you are criticizing), possibly whether you propose alternatives or not. (Perhaps criticizing religion is perceived as literally criticizing God by those who believe in him.) To counteract this, I don’t think it is always necessary to have a detailed alternative, but I think it is necessary to appear to care (and, ideally, actually care) about the problem and about finding a solution. It’s easier to get away with criticizing the morality of religion if you clearly care about morality than if you dismiss morality as arbitrary or imaginary. Your examples (“The fact that I don’t know everything won’t make the problem go away”, etc.) should work too, if they can successfully push the discussion away from self-reference back to the object level.
Edit: I also agree with AlexMennen that it’s well worth making sure that your criticisms are internally consistent, so that you can at least reasonably evaluate proposed solutions even if you don’t think of your own.
(Photoshopped version of this photo.)