More of that delicious creamy soup you made would be pretty awesome.
Will_Newsome
You’re assuming, of course, that you wouldn’t be voted down to below −50, in which case no one wins.
Some people can perform surgery to save kittens. Eliezer Yudkowsky can perform counterfactual surgery to save kittens before they’re even in danger.
If and only if you can explain UDT in text at least as clearly as you explained it to me in person; I don’t think that would take a very long post.
Eliezer Yudkowsky can slay Omega with two suicide rocks and a sling.
Unlike Frodo, Eliezer Yudkowsky had no trouble throwing the Ring into the fires of Mount Foom.
This is amazing.
I for one think you should turn it into a post. Brilliant artwork should be rewarded, and not everyone will see it here.
(May be a stupid idea, but figured I’d raise the possibility.)
It’s sad to see such denial, and yet so humbling. ;P
(PROTIP: Maxwell House)
MOAR PEDANTRY: I always thought we Less Wrongers avoided ‘rationalism’ because it already has a history as a philosophy; one which most Less Wrongers wouldn’t endorse. However, both you and FAWS have used it in this thread, so I’m confused.
As if you could kill time without injuring eternity.
-- Henry David Thoreau
It is also interesting to note that everyone I’ve met who is involved with Less Wrong is an xNxx, the vast majority being xNTx. Predictably, INTP is the most common.
Additionally, if anyone has experience living with other rationalists, I'd be interested to hear about it.
One quirk I’ve seen at Benton house is that people are much more open and honest about criticism than in your average community. This seems to be a large advantage for the house members thus far, since it’s hard to improve yourself if you can’t recognize your flaws; however, an easily upset or passive-aggressive person might not find the environment comfortable for extended periods.
An annoying thing about the RQ test (rot13'd):
Jura V gbbx gur ED grfg gurer jnf n flfgrzngvp ovnf gbjneqf jung jbhyq pbzzbayl or pnyyrq vagrerfgvat snpgf orvat zber cebonoyr naq zhaqnar/obevat snpgf orvat yrff cebonoyr. fgrira0461 nyfb abgvprq guvf. Guvf jnf nobhg 1 zbagu ntb. ebg13'q fb nf abg gb shegure ovnf crbcyrf' erfhygf.
Another one that I think has yet to escape Benton house is ‘cesire’, along the same lines.
Eliezer Yudkowsky is a superstimulus for perfection.
What am I doing? Working for SIAI. For the last hour or so I’ve been making a mind map of the effects of ‘weird cosmology’ on strategies to reduce existential risk: whether the simulation hypothesis changes how we should think about the probability of an existential win (conditional on the probability, insofar as probability is a coherent concept here, that something like all possible mathematical/computable structures exist); whether we should look more closely at possible inflationary-magnetic-monopole-infinite-universe-creation horrors; how living in a spatially infinite universe might affect ethics (.pdf warning; also, I found it a lot easier to think about as infinite pizza instead of infinite ethics. I don’t remember this leading to any significant problems besides a strong desire for pizza. YMMV.); et cetera.
Why am I doing it? I’m not sure many of these ideas have been compiled into a single place to be synthesized and tested against each other. Weird things happen when you put a recursive ‘simulation’ in a Tegmark level 4 multiverse with an infinite number of inflationary universes being formed out of magnetic monopoles, with further universes coming into existence at the moment black holes decohere, and then play ‘follow the measure’ (with a heavy dose of anthropic reasoning, of course). (If someone has done something like this and found interesting results, please let me know, as an hour thinking up crazy stuff does not seem like nearly enough analysis.)
Why do I care about that? It seems like there’s a good chance we’re missing some much-needed information and it’s hidden in a fog of metaphysics. And we really do need it if we want to maximize the probability of a continued humanity.
So what, why does that matter? Well, I love many people and many things, and I would like them to continue existing; and each millionth of a millionth of a percent chance that humanity can live on and flourish, reaching its greatest potential, whatever that may be, is worth whatever effort I can put into it.
If I recall correctly it seemed that you mostly argued for an objective morality instead of using it as the (explicit or implicit) linchpin of a larger argument. The former is well and good but the latter is irritating (e.g. “Deity X must exist because there is an objective morality”).
The general point of noticing when arguments are starting from backwards assumptions is a good one. This particular hypothesis also seems solid.