Paul Crowley
From London, now living in the Santa Cruz mountains.
“IN CASE OF UNFRIENDLY AI, IT IS TOO LATE TO BREAK GLASS”
I love this response. “No, I urinate on other people’s rugs; see my previous urination history. Clearly I thought it worth the slight risk it might descend into boring conversation about who should urinate where.”
Not precommitting to be on my own before making a major life decision.
I once bought something in a New York shop through high-pressure sales. I looked at it and said something about how I would like to have it but I couldn’t nearly afford it, and he asked me how much I would pay for it. Foolishly, I named a price; he looked insulted and said that it was far too low. I tried to explain that that was what I meant, that I couldn’t afford it at any reasonable price, but he skilfully turned it into haggling, and I walked out with the thing and considerably poorer. I then resolved never to buy anything expensive without leaving the shop first, so I could just walk off if I changed my mind.
Many years later, I met up with my girlfriend’s girlfriend for dinner and drinks so we could discuss whether it would work for her to move in with us. There were a lot of warning signs that it wouldn’t, to say the least. I pressed her on things that were worrying me, and got wholly unsatisfactory answers. But we very often had good and enjoyable conversations, and this was one of those times. So at the end she sort of said “OK, that’s all great, shall we announce online that I’m moving in?” and it wasn’t easy to say no. The result was very costly for all of us; it was definitely the biggest and most predictable mistake of my last decade.
Going into such a conversation another time, I’d have said well in advance that I wouldn’t be making any decisions until the next day, when I was on my own. I think there’s every chance that that simple precaution would have saved untold suffering and money for all concerned.
News flash, dearies: there’s lots of areas of life that aren’t ‘science’ where people do tend to get a mite hung up on particulars of what is and is not, in fact, true. Like in bookkeeping. Like in criminal investigations. Like when they’re trying to establish where their spouse was last night.
Like, in fact, in most facets of life, hundreds of times a day, even if accounting isn’t your field and you’re not the accused at a criminal trial, and you’re not even married. Getting the facts right isn’t a concern of ‘science’, specifically. It’s a general concern of human beings. Getting reality right is, frequently, indeed, rather important if you wish to stay alive. It’s not a particularly academic question whether the car is or is not coming, when you cross the road. It’s the sort of thing one likes to get right. And we don’t generally call this ‘science’, either. We call it ‘looking’.
-- AJ Milne
I surveyed.
COMPLAIN! I have one partner but I’m definitely not monogamous. Sorry :)
I like it, but I couldn’t really say that the belief that terrorists hate our freedom led to a great increase in freedom.
“Erudition can produce foliage without bearing fruit.”—Georg Christoph Lichtenberg
By far my biggest problem with the way you discuss rationality is the way that you draw on the tropes of Eastern martial arts instruction, and it’s because of exactly this sort of thing—those tropes are appropriate for one who wants to be considered a guru, which is the opposite of your stated aims. It’s something I have to warn people about if I’m recommending something you’ve written.
“If you’re running an event that has rules, be explicit about what those rules are, don’t just refer to an often-misunderstood idea” seems unarguably a big improvement, no matter what you think of the other changes proposed here.
Please don’t do this. Over the past year we have had tremendous success in being taken more seriously. Please don’t make us look silly.
- Comment on “Street action ‘Stop existential risks!’, Union square, San Francisco, September 27, 2014 at 2:00 PM” (20 Sep 2014 17:07 UTC; 26 points)
If you’re able to directly discuss what her plans are for after death (cremation, etc.), then could you just talk about your own plans in the same context? Don’t explicitly suggest that she get it done; just mention that it’s what you want for yourself.
What’s so intimidating? You don’t need much to post here, just a basic grounding in probability theory, decision theory, metaethics, philosophy of mind, philosophy of science, computer science, cognitive bias, evolutionary psychology, the theory of natural selection, artificial intelligence, existential risk, and quantum mechanics—oh, and of course to read a sequence of >600 3000+ word articles. So long as you can do that and you’re happy with your every word being subject to the anonymous judgment of a fiercely intelligent community, you’re good.
- Comment on “Open Thread: March 2010” (1 Mar 2010 9:41 UTC; 16 points)
- Comment on “Issues, Bugs, and Requested Features” (17 Feb 2010 6:10 UTC; 1 point)
Why on Earth do people keep saying this? Sending out a party invite via email is a technical solution to a social problem, and it’s great! For God’s sake, taking the train to see a friend is a technical solution to a social problem. This phrase seems to have gained currency through repetition despite being trivially, obviously false on the face of it.
I can only assume this is a logarithmic scale, or something.
How good at playing chess would a chess computer have to be before it started trying to feed the hungry?
I don’t think there’s a single defining point of difference, but I tend to think of it as the difference between the traditional social standard of having beliefs you can defend and the stricter individual standard of trying to believe as accurately as possible.
The How to Have a Rational Discussion flowchart is a great example of the former: the question addressed there is whether you are playing by the rules of the game. If you are playing by the rules and can defend your beliefs, great, you’re OK! This is how we are built to reason.
X-rationality emphasizes having accurate beliefs over having defensible beliefs. If you fail to achieve a correct answer, it is futile to protest that you acted with propriety. Instead of asking “does this evidence allow me to keep my belief or oblige me to give it up?”, it asks “what is the correct level of confidence for me to have in this idea given this new evidence?”
(Brief foreword: You really should read much more of the sequences. In particular How to Actually Change Your Mind, but there are also blog posts on Religion. I hope that one thing that comes out of this discussion is a rapid growth of those links on your wiki info page...)
What are the requirements to be a member of the LessWrong community? If we upvote your comments, then we value them and on average we hope you stay. If we downvote them, we don’t value them and we hope either that they improve or you leave. Your karma is pretty positive, so stay.
You seem to be expecting a different shape of answer, about certain criteria you have to meet, about being an aspiring rationalist, or being above the sanity waterline, or some such. Those things will likely correlate with how your comments are received, but you need not reach for such proxies when asking whether you should stay when you have more direct data. From the other side, we need not feel bound by some sort of transparent criteria we propose to set out in order to be seen to be fair in the decisions we make about this; we all make our own judgement calls on what comments we value with the vote buttons.
I think you’re led to expect a different sort of answer because you’re coming at this from the point of view of what Eliezer calls Traditional Rationality—rationality as a set of social rules. So your question is, am I allowed this belief? If challenged, can I defend it such that those who hear it acknowledge I’ve met the challenge? Or can I argue that it should not be required to meet these challenges?
This of course is an entirely bogus question. The primary question that should occupy you is whether your beliefs are accurate, and how to make them more accurate. This community should not be about “how can I be seen to be a goodthinking person” but “how can I be less wrong?”
Also, it seems very much as if you already know how things are going to swing when you subject your theistic beliefs to critical examination. That being so, it’s hard to know whether you actually believe in God, or just believe that you believe in God. I hope you will decide that more accurate beliefs are better in all areas of study for several reasons, but one is that I doubt that you are maximizing your own happiness. You are currently in a state of limbo on the subject of religion, where you openly admit that you daren’t really think about it. I think that you will indeed find the process of really thinking about it painful, but it will be just as painful next year as it will be now, and if you do it now you’ll avoid a year of limbo, a year of feeling bad about yourself for not goodthinking, and a year of being mistaken about something very important.
Yeah, people do that all the time.
A brief response: Yes, cryonic preservation causes all sorts of severe damage far beyond our current ability to overcome; all the damage discussed in this paper is well understood and widely discussed by cryonics practitioners. This paper doesn’t seem to quite engage with the central contention of cryonics: that so long as the information that makes up memory and personality is preserved, future technology may find a way to repair the damage caused by cryopreservation. Two distinct paths to this end are widely talked about: molecular nanotechnology, and scanning/whole-brain emulation (WBE). As far as I can tell, no argument is made in the paper that human cryopreservation causes information-theoretic death, and neither of these repair options is discussed at all. As a result, this paper, while it is vastly vastly ahead of the arguments made by other critics of cryonics, is some way behind the arguments already considered and answered by cryonics advocates.
In this community, agreeing with a poster such as yourself signals me as sycophantic and weak-minded; disagreement signals my independence and courage. There’s also a sense that “there are leaders and followers in this world, and obviously just getting behind the program is no task for so great a mind as mine”.
However, that’s not the only reason I might hesitate to post my agreement; I might prefer only to post when I have something to add, which would more usually be disagreement. Since I don’t only vote up things I agree with, perhaps I should start hacking on the feature that allows you to say “6 members marked their broad agreement with this point (click for list of members)”.