Count yourself as having other-optimised ;-p
Glanced over them. I started with the Intuitive Explanation and my brain slid off it repeatedly. I fear that if that’s the “intuitive” explanation, then all the merely quite bright people are b*ggered. It needs rewriting for the merely quite bright, as opposed to the brilliant. This is what I meant about how, if you have a pitch, it had better target the merely quite bright if you have any serious interest in spreading your favoured ideas.
This ties into my current interest, books that eat people’s brains. I’m increasingly suspecting this has little to do with the book itself. I realise that sentence is condensed to all but incomprehensibility, but the eventual heartbreaking work of staggering genius will show a lot more of the working.
Lured in by ciphergoth, who successfully irritated me into looking. Finally irritated into creating a login to comment on a post that wasn’t listing its sources.
I also write a lot on RationalWiki, with subjects of local interest being the cryonics and LessWrong articles. Please remember that we love you really, we’re just annoying about it.
Having given it some thought, I don’t label myself “rationalist”. “Whatever-works-ist” is probably more accurate. LessWrong’s ambit claim upon the word “rationalist” is very irritating.
LessWrong irritating me seems good for me. Or productive, anyway. This may not be the same thing.
On a quick glance, the intuitive explanation article seems several times longer than people who would want to get a quick idea about what all the Bayes stuff is about would be prepared to read.
That’s another factor. But I just couldn’t get a feel for the numbers in the breast cancer example. I note this because, by contrast, I found Bruce Schneier’s analogous numbers on why security theatre is actively damaging [couldn’t find the link, sorry] quite comprehensible.
(I certainly used to know maths. Did the Olympiad at high school. Always hated learning maths despite being able to, though, finally beaching about halfway through second-year engineering maths twenty years ago. I recently realised I’ve almost completely forgotten calculus. Obviously spent too long on word-oriented artistic pursuits. I suppose it’s back to the music industry for me.)
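For what it’s worth, the arithmetic that kept sliding off my brain fits in a few lines once stripped of prose. A minimal sketch in Python, using the numbers as I remember them from the essay (check the essay itself for the canonical figures):

```python
# Classic mammography example, numbers as I recall them:
# 1% of women screened have breast cancer; 80% of those test
# positive; 9.6% of those without cancer also test positive.
prevalence = 0.01        # P(cancer)
sensitivity = 0.80       # P(positive | cancer)
false_positive = 0.096   # P(positive | no cancer)

# Bayes' theorem: P(cancer | positive)
#   = P(positive | cancer) * P(cancer) / P(positive)
p_positive = (sensitivity * prevalence
              + false_positive * (1 - prevalence))
posterior = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {posterior:.3f}")  # about 0.078
```

Most of the positives come from the 99% of women who don’t have cancer, which is why the answer is a mere ~8% and not 80%.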
As someone who is definitely smart but has adopted a so far highly productive life strategy of associating with people who make me feel stupid by comparison, I am happy to be a test stupid person for these purposes.
I’m guessing this refers to books that start cults, not just books that will consume limitless amounts of brainpower if you let them? In any case, I’m quite interested in hearing more about this.
More a reference to how to cure a raging memetic cold. Cults count (I am/was an expert on Scientology), Ayn Rand sure counts (this being the example that suggests a memetic cold is not curable from the outside and you have to let the disease run its course). What struck me was that quite innocuous works that I don’t get a cold from have clearly caused one in others.
“Memetic cold”: an inadequate piece of jargon I made up to describe the phenomenon of someone who gets a big idea that eats their life. Unlike someone who has a clear idea but is struggling to put it into words, I’m not even entirely sure I’m talking about an actual phenomenon. Hence the vagueness.
Possible alternate term: “sucker shoot”, from Florence Littauer (who has much useful material, but many of whose works should carry a “memetic hazard” warning sign). A sucker shoot is full of apparent life and vitality, but sucks the energy out of the entire rest of the plant; here, the rest of your life. You get an exciting new idea, and you wake up a year later to find you’ve been evicted, your boyfriend/girlfriend has moved out and your friends look at you like you’re crazy, because that’s the external appearance. Or you don’t wake up, and you stay a crank. The catch is that sometimes the idea is valid and it was all worth it. But that’s a compelling trope precisely because it’s not the usual case.
I just looked over my notes and didn’t entirely understand them, which means I need to get to work if this is ever to make coherent sense and not just remain a couple of tantalising comments and an unreleased Google doc.
You can get a bad memetic cold by deliberately compromising your memetic immune system: decompartmentalising too aggressively, getting a not quite so magical click and it all becomes terribly clear: the infidel must die!
That’s an extreme failure mode of decompartmentalisation, of course. (Some lesser ones are on RationalWiki: Engineers and woo.) But when you see a new idea and you feel your eyes light up with the realisation that it’s compelling and brilliant, that’s precisely the time to put it in a sandbox.
Maybe. I’m not sure yet. It feels a bit like deliberate stupidity. On the other hand, we live in a world where the most virulent possible memes are used to sell toothpaste. Western civilisation has largely solved food and shelter for its residents, so using infectious waste as a token of social bonding appears to be what we do with the rest of our lives.
A rationality without dancing is not a rationality worth having. If there won’t be dancing at the rationality, I’m not coming.
I wrote large chunks of the RationalWiki article on LW, which has actually attracted RW readers over here. LW is an interesting site even for lurkers. Someone started an EY article too.
So start an article, or post a relevant link in your preferred social space, as relevant. Take care to set a good example, since humans tend to judge ideas by their advocates before they judge the content, irrational as that is.
(I myself consume LW as a work-avoidance amusement, somewhere in there with Slashdot, RW and answering all my email (one of my most effective recent life hacks was to actually keep Gmail closed and not check it more often than hourly), so I may not be the best example to follow.)
If you want to convince averagely-rational people of something, be a living example of it working. Use that blog like the Internet native you are. Kill the Buddha and write your own understanding of it—you don’t own the material until you could teach it, after all. You can expect people who know you to listen to you personally, because they know you. Give success stories—think of them as worked examples rather than anecdotes, if that helps.
Attract, don’t push—EY didn’t do a sales push for OB/LW.
(This is what I meant when I said I didn’t understand your idea of publicity, and I still don’t understand why I had to dredge for three months for your video rather than you doing the obvious and publicising it on your increasingly dusty blog or even in the same place where you publicised the talk. But this comment indicates you do in fact have an interest in spreading these ideas, rather than a cunning plan I don’t understand that involves not doing so.)
Handing your friends a book full of interesting memes doesn’t work, because people don’t take advice in general, and they particularly don’t take advice by reference—handing them a book is expecting them to convince themselves of your arguments for you.
It does work well enough in some examples, which is why people with a particularly compelling memetic infection wake you up far too early on a Saturday morning wanting to give you a book. But doing this to your friends is, I suspect, more likely to lead to them categorising you with the people waking them up far too early on a Saturday morning, thus losing both you and the book valuable reputation points.
(That said, knocking on doors and giving free books on rationality to strangers strikes me as an amusing idea. Though not amusing enough to do it myself.)
It is heuristically justifiable not to take on others’ meme infections lightly, and even to avoid doing so. Western culture is made of the most virulent obtainable memes, and they’re usually selling toothpaste or car insurance. Beliefs should pay rent in one’s head; an apparently infectious meme, producing the “convert” effect in its host, is one to avoid more than less infectious ones. Being a living success story is a very convincing counterargument. Worked for EY.
Ya got me there. I haven’t watched it. (Watching video? Painful. Must read transcript.) And that is indeed a good reason. (Though one of the reasons I ask is that blog.ciphergoth.org ends on an unresolved note, looking like something untoward has happened to you.) Hope the rest of the comment is useful.
It wouldn’t necessarily make you a believer. Worked example: I joined in the battle of Scientology vs. the Net in 1995 and proceeded to learn a huge amount about Scientology and everything to do with it. I slung the jargon so well that some ex-Scientologists refused to believe I’d never been a member (I never was). I checked my understanding with ex-Scientologists, and it was largely correct.
None of this put me an inch toward joining up. Not even slightly.
To understand something is not to believe it.
That said, it’ll provide a large and detailed pattern in your head for you to form analogies with, good or bad.
Well, yeah. Scientology is sort of the Godwin example of dangerous infectious memes. But I’ve found the lessons most useful in dealing with lesser ones, and it taught me superlative skills in how to inspect memes and logical results in a sandbox.
Perhaps these lessons have gone to the point where I’ve recompartmentalised and need to aggressively decompartmentalise again. Anna Salamon’s original post is IMO entirely too dismissive of the dangers of decompartmentalisation raised in the Phil Goetz post, which is about people who accidentally decompartmentalise memetic toxic waste and come to the startling realisation that they need to bomb academics or kill the infidel or whatever. But you always think it’ll never happen to you. And that is false, because you’re running on unreliable hardware with all manner of exploits and biases, and being able to enumerate them doesn’t grant you immunity. And there are predators out there, evolved to eat people who think it’ll never happen to them.
My own example: I signed up for a multi-level marketing company, which only cost me a year of my life and most of my friends. I should detail precisely how I reasoned myself into it; it was all very logical. The process of reasoning oneself into the mouth of a highly evolved predator tends to be. The cautions my friends and family gave me were all heuristic. This was before I studied Scientology in detail, which I suspect would have given me some immunity.
I should write a post on the subject (see my recent comments) except Anna’s post covers quite a lot of it.
How I attempted to nutshell it for the RW article on EY:
“Yudkowsky identifies the big problem in AI research as being that there is no reason to assume an AI would give a damn about humans or what we care about in any way at all—not having a million years as a savannah ape or a billion years of evolution in its makeup. And he believes AI is imminent. As such, working out how to create a Friendly AI (one that won’t kill us, inadvertently or otherwise) is the Big Problem he has taken as his own.”
It needs work, but I hope it does justice to the idea while trying to get it across to the general public, or at least to people who are somewhat familiar with SF tropes.
Reading the sucker shoot analogy in a Florence Littauer book (CAUTION: Littauer is memetic toxic waste with some potentially useful bits). That was the last straw after months of doubts, the moment it went “click! Oh, this is actually really bad for me, isn’t it?” Had my social life been on the internet then (this was 1993), this would have been followed by a “gosh, that was stupid, wasn’t it?” post. I hope.
It may be relevant that I was reading the Littauer book because Littauer’s books and personality theories were officially advocated in the MLM in question (Omegatrend, a schism of Amway), so the warning seemed to be coming from inside. I worry slightly that I might have paid insufficient attention had it come from outside.
I’d be interested to know how others (a) suffered a memetic cold and (b) got out of it. Possible post material.
I actually first started reading alt.religion.scientology because, as a big William S. Burroughs fan, I was interested in the substance of Scientology (SPOILER: there isn’t any). The lunacy is pretty shallow below the surface, which is why the Church was so desperately keen to keep the more esoteric portions from the public eye for as long as possible.
But, um, yeah. Point.
OTOH, all the Scientologists I knew personally before that emitted weirdness signals. Thinking back, they behaved like they were trying to live life by a manual rather than by understanding. Memetic cold ahoy!
I’d suggest it can be for some people. Otherwise, I really don’t understand why people work sixteen-hour days on Wall Street making more money than they could ever need. (That’s a statement about my map being deficient.) Perhaps as a token of status.
Western culture is, I posit, pretty much entirely composed of the most viral and addictive materials anyone can come up with. The civilisation has pretty much solved the food and shelter problems for its residents—imagine, in Britain it’s regarded as a serious social problem that the poor people are too fat! As problems go, this is a vast improvement over the ones we had 60 years ago, when food was rationed—so exchanging memetic toxic waste as tokens of social intercourse appears to be what we do with the rest of our time. Frequently for the purpose of selling toothpaste or car insurance.
Western civilisation is not visibly collapsing, so I suggest that potentially fatal susceptibility to meme addiction is not sufficiently widespread to take us out. That said, even a lesser susceptibility can be a problem if you want to get stuff done. I should be fixing a build right now …
The general case was analysed by Clay Shirky in A Group Is Its Own Worst Enemy.
[Those of us around Wikipedia reading Shirky’s article in 2004-2005 giggled in horror at Wikipedia being named as an aversion of this trope.]
Basically, every social space (in general) grows and dies. This is normal. Start new ones as the old ones go bad.
+1
Unfortunately, this is a common conversational pattern.
Q. You have given your estimate of the probability of FAI/cryonics/nanobots/FTL/antigravity. In support of this number, you have here listed probabilities for supporting components, with no working shown. These appear to include numbers not only for technologies we have no empirical knowledge of, but particular new scientific insights that have yet to occur. It looks very like you have pulled the numbers out of thin air. How did you derive these numbers?
A. Bayesian probability calculations.
Q. Could you please show me your working? At least a reasonable chunk of the Bayesian network you derived this from? C’mon, give me something to work with here.
A. (tumbleweeds)
Q. I remain somehow unconvinced.
If you pull a number out of thin air and run it through a formula, the result is still a number pulled out of thin air.
If you want people to believe something, you have to bother convincing them.
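A toy sketch of the problem (every number below is made up, which is rather the point): chain a few guessed component probabilities together and see how little the conclusion is worth once each guess is allowed a modest wobble.

```python
# Hypothetical component probabilities pulled from thin air.
guesses = [0.9, 0.7, 0.5, 0.8, 0.6]

point_estimate = 1.0
for p in guesses:
    point_estimate *= p
print(f"point estimate: {point_estimate:.4f}")   # 0.1512

# Now let each guess be off by a mere factor of 2 either way
# (capped at 1, since these are probabilities).
lo, hi = 1.0, 1.0
for p in guesses:
    lo *= p / 2
    hi *= min(1.0, p * 2)
print(f"range under modest uncertainty: {lo:.4f} to {hi:.4f}")
# roughly 0.005 to 1.0, i.e. the point estimate tells you nothing
```

The formula is fine; it’s the inputs that carry all the weight, and guessed inputs guarantee a guessed output.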
I find myself confused by the fact that Drexlerian nanotechnology of any sort is advocated as possible by people who think physics and chemistry work. Materials scientists (i.e. the chemists who actually work with nanotechnology in real life) have documented at length why his ideas would need to violate both.
This is the sort of claim that makes me ask advocates to document their Bayesian network. Do their priors include the expert opinions of materials scientists, who (pretty much universally as far as I can tell) consider Drexler and fans to be clueless?
(The RW article on nanotechnology is mostly written by a very annoyed materials scientist who works at nanoscale for a living. It talks about what real-life nanotechnology is and includes lots of references that advocates can go argue with. He was inspired to write it by arguing with cryonics advocates who would literally answer almost any objection to its feasibility with “But, nanobots!”)
Drexler-style nanofactories don’t operate in a vacuum, because they don’t exist, and no-one has any idea how to make such a thing exist. They are presently a purely hypothetical concept with no actual scientific or technological grounding.
The gravel analogy is not so much an argument as a very simple example for the beginner that a nanotechnology fantasist might be able to get their head around; the implicit actual argument would be “please, learn some chemistry and physics so you have some idea what you’re talking about.” Which is not an argument that people will tend to accept (in general people don’t take any sort of advice on any topic, ever), but when experts tell you you’re verging on not even wrong and there remains absolutely nothing to show for the concept after 25 years, it might be worth allowing for the possibility that Drexlerian nanotechnology is, even if the requisite hypothetical technology and hypothetical scientific breakthroughs happen, ridiculously far ahead of anything we have the slightest understanding of.
Although it’s not marked as the inspiration, this post comes straight after an article by Charles Platt, a cryonicist of many decades’ standing, which he wrote for Cryonics magazine but which was rejected by the Alcor board:
Cryoptimism: Part 1, Part 2
Platt discusses what he sees as the dangerously excessive optimism of cryonics, particularly with regard to financial arrangements: because money shouldn’t be a problem, people behave as though it therefore isn’t a problem, when it appears clear that it is.
The above post may make more sense considered as a response to Platt’s article.