The background strands pass over the foreground strands. Is that intentional?
Incorrect
Unless lack of existential happiness is considered a factor in determining public perception of immortality and cryonics. Existentially happy people then make decisions based on that public perception.
If you will pardon the digression I’d love to ask you a few questions.
Can you still experience sensory information for a moment after the source is no longer present? For example, if you focus on an object and suddenly close your eyes, can you still perceive the object for fractions of a second?
If you don’t hear things in your mind, does that mean you never have a song stuck in your head?
For me, a really useful purpose for visualization is for triggering related memories. For example, if I am trying to remember what groceries I need to buy, I will picture my refrigerator and mentally scan over the shelves to help myself recall what items usually reside there. What would you do in a situation like this?
Can you visualize spaces with object shapes and positions as distinct from images where you have to worry about color and more precise details of perspective? For me this is much easier than visualizing images.
You say your thoughts take the form of “silently talking to myself. There are only words.” Don’t you ever think with concepts in place of words?
You may be interested to know that some people dream in black and white.
Today, one of the chief pieces of advice I give to aspiring young rationalists is “Do not attempt long chains of reasoning or complicated plans.”
Advice more or less completely ignored by everyone, including EY himself.
An alternative interpretation is that we should break up long chains of reasoning into individually analyzed lemmas and break up complicated plans into subgoals.
Hell, even LW has counter-memes against entering politics.
Avoiding discussing politics directly is not the same as not personally entering politics.
Traditional Rationalists can agree to disagree.
The LW advance in this area seems to consist entirely of agreeing to disagree after quoting Aumann’s theorem and continuing the argument well past the point of diminishing returns.
It’s good advice, but only if both parties are truly following it; an admittedly implausible prospect.
If math is what will save us from making interesting new mistakes, we clearly aren’t doing enough of it.
What about requiring all new users to solve varying numbers of Project Euler problems to comment, vote, post at the top level, have cool neon-colored names, etc.? Alternatively or conjunctively, breaking up the site into “fuzzy self help” and “1337 Bayes mathhacker” sections might help.
(λf.(λx.f (x x)) (λx.f (x x))) {image of a brain}
That might be misinterpreted to mean “mind blowing.”
It’s called the Y combinator. If evaluated lazily, it won’t necessarily run forever.
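To make the point concrete: in a strictly evaluated language the Y combinator as written above diverges, but its eta-expanded cousin (the Z combinator) delays evaluation just enough to terminate. A minimal sketch in Python, with factorial as an illustrative example:

```python
# The lambda-calculus term above, transcribed into Python. Python is strict,
# so applying Y directly would recurse forever; the eta-expanded variant
# (the Z combinator) wraps x(x) in a lambda, delaying its evaluation.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step" function; Z ties the recursive knot for us.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```

Evaluated lazily (as in Haskell), the original Y works as-is; the eta-expansion is only needed to simulate that laziness under strict evaluation.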
Concealing unconventional beliefs with high inferential distance to those you are speaking with makes sense. Dismissing those beliefs with the absurdity heuristic does not.
Also, I think you underestimate the utility of rhetorical strategies. For example, you could:
Talk about these weird beliefs in a hypothetical, facetious manner (or claim you had been).
Close the inferential distance gradually using the Socratic method.
Introduce them to the belief indirectly. For example, you could link them to a more conventional LessWrong sequence post and let them investigate the others on their own.
Ask them for help finding what is objectively and specifically wrong with the weird belief.
He may be more familiar with certain other internet communities and assume most LessWrong readers have low status.
Related: On interpreting maverick beliefs as signals indicating rationality:
Undiscriminating Skepticism
[With reference to .com] This is an open TLD; any person or entity is permitted to register. Though originally intended for for-profit business entities, for a number of reasons it became the “main” TLD for domain names and is currently used by all types of entities including nonprofits, schools and private individuals.
http://en.wikipedia.org/wiki/List_of_Internet_top-level_domains
For a person of average rationality skills, all arguments beyond a certain inferential distance are dangerous, because such a person is unable to determine their validity; many of the arguments sound right and yet the conclusions seem unintuitive. Those who allow themselves to be persuaded by such arguments can be led into completely illogical or amoral actions.
I think in this light the similarity of the words “rationalization” and “rationality” makes sense for the common person, for whom any naive attempt at rationality would do more harm than good.
That’s not to say that such people couldn’t benefit from adopting a particular rationalist strategy, for example, using expected value calculations when gambling (or rather not gambling); it’s pure reasoning from actions to consequences that is too dangerous to attempt.
I’m not sure what internalize means in this context. How is internalization accomplished?
You can’t always do it like that in the least convenient possible world.
Why not? (This is a serious question. I don’t know why not.)
What about focusing on actively defending against uFAI threats?
I think the name “SIAI Emergency Optimization Suppression Unit” sounds pretty cool.
I understand why group selection is problematic: Individual selection trumps it.
However, when group and individual selective pressure coincide, the mutation could survive to the point where it exists in a group at which point the group will have better fitness because of the group selective pressure.
Is this incorrect?
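The reasoning above can be sketched numerically. Assuming a mutation whose individual-level and group-level advantages coincide, standard replicator updating spreads it within a group even from rarity; once common, the group carrying it has higher mean fitness than an all-wild-type group. All fitness numbers below are made-up assumptions for illustration:

```python
# Illustrative sketch (all fitness values are assumed, not measured):
# a mutation with an individual-level advantage spreads within its group
# via replicator updating; the group's mean fitness then rises with it.

def replicator_step(p, w_mut, w_wild):
    """One generation of within-group selection on mutant frequency p."""
    mean_w = p * w_mut + (1 - p) * w_wild
    return p * w_mut / mean_w

p = 0.01                    # mutation starts rare
w_mut, w_wild = 1.1, 1.0    # coinciding pressures: mutants also help the group
for _ in range(50):
    p = replicator_step(p, w_mut, w_wild)

print(p > 0.5)  # True: individual selection alone carried it to a majority

# Group-level comparison: mean fitness of this group vs. an all-wild group.
group_with_mutation = p * w_mut + (1 - p) * w_wild
group_without = w_wild
print(group_with_mutation > group_without)  # True
```

The contested case for group selection is the opposite scenario, where the two pressures conflict; when they coincide as here, nothing controversial is needed.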
Offensive to whom? If you personally aren’t really offended by anything, are you immune to this?
In Three Worlds Collide there’s the non-consensual sex plot point that some people misunderstood as being misogynist rather than the result of EY selecting a particularly unusual point in culturespace. Is this the kind of analysis you are talking about?
I really just don’t see how this meme could infect you unless you already have a disposition to look for things to take offense at.
Do you have any other examples?
What’s wrong with the superhappies?
TVtropes calls him a hero and cites Normal Cryonics.
Perhaps this could be worked into the Wikipedia article.