My impression was that it was the screwing around that was lacking.
Armok_GoB
You mean an absolutely awesome alliteration?
I don’t know, but I strongly prefer the old one. :(
Many comments seem to imply and provide evidence for this, but I’m going to state it explicitly so it’s easier to comment on:
This way of writing long articles seems superior in many ways and you should probably do mostly this instead of the short single-point posts.
but I’ve never heard of a daemon tempting anyone.
RSS reader/other notification of new procrastination available.
Karma Bounties
LW seems to reward actually doing things disproportionately little compared to talking about them. My suggestion for this is “bounty” pools for doing various things; when anyone does them, they are rewarded the karma in the pool.
More info here: http://lesswrong.com/lw/56p/do_meetups_really_have_to_go_on_the_front_page/3wfn?context=4#comments
Example: someone points out a problem with the LW source code. Rather than nothing happening unless some hero fixes it single-handedly, a consensus is reached in the comments and someone proposes a bounty. Many people who might not otherwise have been interested each chip in a bit of karma, and the pool ends up much larger than anyone could expect to gain by simply asking for karma after the problem was solved. This motivates someone to make the change; an admin then verifies it, and the pool goes to the person who fixed the problem.
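As a toy sketch of how such a bounty pool might work (the class and all names here are invented for illustration, not an actual LW feature):

```python
# Hypothetical sketch of a karma bounty pool: users pledge karma toward a
# task, and the whole pool pays out to whoever completes it (after admin
# verification, which is outside this sketch).

class BountyPool:
    def __init__(self, description):
        self.description = description
        self.pledges = {}       # pledger -> total karma pledged
        self.claimed_by = None  # set once the bounty is paid out

    def pledge(self, user, karma):
        if self.claimed_by is not None:
            raise ValueError("bounty already paid out")
        self.pledges[user] = self.pledges.get(user, 0) + karma

    def total(self):
        return sum(self.pledges.values())

    def pay_out(self, fixer):
        # Called after an admin verifies the fix; the fixer gets the pool.
        self.claimed_by = fixer
        return self.total()

pool = BountyPool("Fix the broken RSS feed in the LW source")
pool.pledge("alice", 5)
pool.pledge("bob", 3)
pool.pledge("alice", 2)   # pledging twice just adds to your total
print(pool.total())       # 10
print(pool.pay_out("carol"))
```

The point of the structure is that many small pledges aggregate into one prize large enough to motivate the work.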
Because Tegmark 4 isn’t mainstream enough yet to get it down to one.
Whether there is a way to reduce it to zero is one discovery I’m much looking forward to, but there probably isn’t one. It certainly seems totally impossible, but that only really means “I can’t think of a way to do it”.
It really seems you need to taboo “real” here, and instead ask some related questions such as:
Which types of universes could observe which other types of universe (a universe that can observe you is one you can also, obviously, “travel” to)? Which universes could trade, in the broadest sense of the word, with which other universes? What types of creatures in which types of universes are capable of consistently caring about things in what types of universes?
Specifically, it seems likely that your usage of “real” in this case refers to “things that humans could possibly, directly or indirectly, in principle care about at all”, which is the class of universes we must make sure to include in our priors for where we are.
This article makes some great points; however, I think you are other-optimizing. Specifically, these seem more like techniques for Unlocking Massive Latent Potential (which most people don’t have), or for curing lazy/spoiled but already awesome people. That’s very much worth writing an article about, since those are probably where most potential rationalists will come from, but it’s not the same as a universal formula for awesomeness.
That wouldn’t be a problem in itself: social/environmental stimulation and diversity of experience are good for you even if they don’t turn you into a badass. However, many of the techniques are dangerous if tried by a median human; getting rid of de-stressing activities and entertainment, or taking on more responsibilities than you can handle, can burn you out; doing hard things and overloading yourself risks downright trauma and injury; and quitting your job could leave someone in permanent financial ruin and unemployment.
What I suspect has happened here is the same type of selection effect as in books on how to get rich written by extremely rich people: just because almost all members of desirable group X did Y doesn’t mean doing Y is a good idea; you never hear about all the many more people who did Y, failed, and ended up in a much worse position than if they had just stuck with the status quo. Being a member of an elite doesn’t just select for strategy; it also selects heavily for talent and luck, and different strategies may be optimal depending on how much talent and luck you have.
The universe is full of sharp things, waiting to skewer us.
No idea why I got the sudden urge to respond with that.
Just for the Least Convenient World, what if the zombies build a supercomputer and simulate random universes, and find that in 98% of simulated universes life forms like theirs do have shadow brains, and that the programs for the remaining 2% are usually significantly longer?
There are way too many amazing posts with very little karma, and mediocre posts with large amounts of karma.
Not enough productive projects related to the site, like site improvements and art. The few that do show up get too little attention and karma.
Too much discussion about things like meetups, growing the community, and converting people. Those things are important, but they don’t belong on LW and should probably have their own site.
There is a category of religiously inspired posts that creep me out and set off cult alarms. It contains that post about staring, from Scientology; that Transcendental Meditation stuff, which, while I found it interesting and perhaps useful, doesn’t seem to belong on LW; and now, recently, these Mormon posts about growing organizations. shudder
Yes, except with the awesome twist that it’s presumably not a simulation, but an actual collection of quarks with no in-built fail-safes. If my judgement of authorial intent is right, the machines don’t even have ubiquitous nanotech or beat chaos theory generally; they are just that good at Xanatos gambits. Which makes it a fantastic illustrative example of the kind of thing a truly superhuman intelligence could manage to do.
I dislike it but not for that reason. There are so many great hooks for rationalist lessons in the actual show, but instead he makes an anvilicious alternate universe to take a cheap shot at a completely unrelated subject. It’s such a waste. I am disappointed.
I know I am a sheep and a hero-worshipper, and then the typical mind fallacy happened.
One possible view is that the entire notion of “X identity” is broken, and things like “gender” and “species” are simply not applicable to minds. Anyone who thinks they have a “female mind” is wrong, regardless of whether their body is male or female, because such a thing doesn’t exist.
Another thing one could argue for is that there are no qualitative differences, but that there are objective classifications based on statistical correlations between physical and mental traits. This seems to agree with your intuitions: in a Turing test you can probably distinguish females and males, in which case most transsexuals hopefully come out as the gender they consider themselves to be; otherkin that are not brain-damaged come out as humans on account of being able to read or type in the first place; and fae is inapplicable because you can’t find any real fae to run the test with.
A third view is that identity is just a set of suggestively named tags a mind can apply to itself, and every mind is free to choose what it wants. By this view, “goth”, “plumber”, “female”, “gay”, “brony”, “rationalist” and “black” are all the exact same type of label, and a pink-skinned person with a male body in white clothes, who likes females, has never watched My Little Pony, and can’t fix a leak if her life depended on it, is able to call herself all those things and should be able to expect everypony to treat her accordingly. While this is counter-intuitive and has obvious drawbacks, there are strong social reasons to consider this view.
I use all these definitions in different kinds of situations depending on context, and probably end up confusing them quite a bit. Having different words for them would probably be useful.
For practical purposes, assuming the first interpretation when someone says “identity” naively, and then using something like “apparent identity” (hard to find something politically correct there) for the Turing-test one and “presented identity” for the labels one, might work. More suggestions on that are welcome.
I am extremely surprised by this, and very confused. This is strange because I technically knew each of those individual examples… I’m not sure what’s going on, but I’m sure that whatever it is it’s my fault and extremely unflattering to my ability as a rationalist.
How am I supposed to follow my consensus-trusting heuristics when no consensus exists? I’m too lazy to form my own opinions! :p
Which MY head continues: The nature of magic turned out to be sensitive to that kind of notion, and the flavours not predetermined, so the very next one tastes like dementor or strangelet.
Make differently coloured bands corresponding to each task. You’re only allowed to work on a task if you’re wearing the correctly coloured band. The bands are kept far away from where you do the tasks, or in a box with a complicated lock, or something like that. You choose which band to wear first using dice.
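As a toy sketch of the dice step (the colours and task names here are made up for illustration):

```python
import random

# Hypothetical mapping from band colour to the task it licenses.
bands = {"red": "write thesis", "blue": "clean flat", "green": "exercise"}

def roll_for_band(bands, rng=random):
    """Pick the first band to wear uniformly at random, like a die roll."""
    colours = sorted(bands)  # sort so the roll is reproducible given a seed
    return colours[rng.randrange(len(colours))]

band = roll_for_band(bands)
print(f"Put on the {band} band; you may only work on: {bands[band]}")
```

The point of randomizing is that you don’t get to agonize over which task to start with; the die commits you.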
IRRATIONALITY GAME
Eliezer Yudkowsky has access to a basilisk kill-agent that allows him, with a few clicks, to untraceably assassinate any person he can get to read a short email or equivalent, with comparable efficiency to what is shown in Death Note.
Probability: improbable ( 2% )