Eliezer Yudkowsky is a high school dropout with no formal education in anything.
It was an example of how embarrassing facts (yes, there was a small error, which didn’t change the point though) about the Great Leader are hidden by the karma system by “Eliezer-follower cultists”.
I’m not a troll, but we all know rationalism doesn’t apply when it comes to silencing critique of the Great Leader.
This game (along with the prisoner’s dilemma and the tragedy of the commons) nicely shows how the best choice is heavily influenced by how much you know about the other players (and therefore how they will vote). If you know that the other players are “rationalists”, then you can safely submit 0 (assuming that this hypothetical rational intelligence indeed submits 0). In real-world tests you can pretty safely assume that the players are not-perfectly-rational humans. It may also be possible (as it is here) to influence other players.
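For concreteness, here’s a minimal sketch, assuming the game in question is something like the classic “guess 2/3 of the average” game (my assumption; the actual game discussed may differ). It shows why 0 is the right submission only against perfectly rational players:

```typescript
// A sketch of iterated best response in the guess-2/3-of-the-average game.
function bestResponse(expectedAverage: number): number {
  return (2 / 3) * expectedAverage;
}

// Level-0 players guess at random (average ~50 on a 0..100 scale);
// each higher level best-responds to the level below it.
let guess = 50;
for (let level = 1; level <= 10; level++) {
  guess = bestResponse(guess);
  console.log(`level-${level} guess: ${guess.toFixed(2)}`);
}
// Against perfectly rational players this iteration never stops, so the
// only self-consistent (Nash) submission is 0. Against real humans, who
// mostly reason one or two levels deep, a guess well above 0 wins.
```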
If you’re an AI, you do not have to (and shouldn’t) pay the first $1000; you can just self-modify to pay $1000 in all the following coin flips (if we assume that the AI can easily rewrite/modify its own behaviour in this way). Human brains probably don’t have this capability, so I guess paying $1000 even in the first game makes sense.
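A toy sketch of the self-modification point. The payoffs (win $10,000 on heads if you’re the kind of agent that pays $1,000 on tails) are the usual counterfactual-mugging figures and are assumed here, not taken from the original post:

```typescript
// Toy model: an agent that skips the first payment and self-modifies.
type Policy = "deliberate" | "alwaysPay";

class Agent {
  policy: Policy = "deliberate";
  winnings = 0;

  onCoinFlip(heads: boolean): void {
    if (heads) {
      // Omega pays out only if the agent is currently the paying kind.
      if (this.policy === "alwaysPay") this.winnings += 10_000;
    } else {
      if (this.policy === "alwaysPay") this.winnings -= 1_000;
      // A human can't rewrite their own decision procedure, so they must
      // actually pay on the first tails; an AI can just flip a switch:
      if (this.policy === "deliberate") this.policy = "alwaysPay";
    }
  }
}

const ai = new Agent();
[false, true, false, true].forEach(h => ai.onCoinFlip(h)); // T, H, T, H
console.log(ai.winnings); // 19000: skipped the first $1000, collected after
```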
Sexbots will be like bots in a computer game. They will be fun, sure, but real players will be even more fun. I don’t really see the relevance of ‘disapproving bots’: some people will prefer them (or they may be their only option), and some will choose real players. If someone enjoys playing with bots more than with real players, let them; it’s what they want to do.
The problem with this argument is that it doesn’t explain anything, nor does it solve the hard problem of consciousness. You simply redefine consciousness to mean something experimentally detectable, and then use that to claim p-zombies are impossible. You can move on, but that leaves the original problem unanswered.
“Consciousness, whatever it may be—a substance, a process, a name for a confusion—is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud.” That’s simply a fact about human brains, and is of course empirically detectable; we could in principle write out the algorithms and then build a consciousness detector. That doesn’t explain anything about qualia, though, and that’s the hard problem.
There are many valid arguments or reasons to believe in the existence of qualia; you can’t simply say that because we cannot use qualia to predict anything at this point, you can just ignore them. Qualia are “mysterious” in the same way the universe is: we don’t fully know their properties.
Qualia are not fully explained as of yet; you can think of them as a philosophical problem. There are many arguments for believing in the existence of qualia. It might be possible to show all of them to be false; in fact, Dennett has attempted this. After you’ve shown them all to be false, it’s okay to say “qualia don’t exist”. However, it’s irrational to claim that since the concept/problem of qualia doesn’t predict anything, qualia therefore don’t exist.
If you’re referring to WTC 7, it didn’t spontaneously collapse; it collapsed because of a fire. There were 91,000 liters of diesel fuel stored in that building for generators. Anyway, a few years ago a similar university building collapsed, in the Netherlands I believe. Even if it hadn’t, the fact that something happens for the first time doesn’t mean the official report is wrong. A lot of things happen for the first time; a nuclear plant has exploded only once in history, for example.
That’s a pretty good explanation. Another way to look at it is to think about what would happen if the propeller were not connected to the wheels. In that situation, the cart would travel as fast as the wind, but the propeller would spin at high speed. If you connect the propeller to the wheels, that energy is used to further increase velocity.
In fact, it would work if you placed a radio-controlled clutch between the propeller and the wheels. First wait for the cart to accelerate to wind speed and for the propeller to rotate faster than the wheels (if it’s a 1:1 ratio without gears), then engage the clutch. The end result would be that the wheels would rotate at a higher speed and thus the cart would travel faster than the wind.
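A rough power-balance sketch backs this up; the speeds, wheel force, and efficiency below are illustrative assumptions, not measurements:

```typescript
// Rough power balance for the cart at a speed above wind speed.
const vWind = 5;    // wind speed over the ground, m/s
const vCart = 8;    // cart speed over the ground, m/s (already past the wind)
const fWheel = 100; // retarding force the wheels exert to drive the prop, N
const eta = 0.8;    // combined transmission + propeller efficiency (assumed)

// Power the wheels draw from the ground (they roll at full cart speed):
const pIn = fWheel * vCart; // 800 W

// The propeller only pushes against air moving at (vCart - vWind) relative
// to the cart, so the same power buys proportionally more thrust:
const thrust = (eta * pIn) / (vCart - vWind);

console.log(`thrust ${thrust.toFixed(0)} N vs wheel drag ${fWheel} N`);
// ~213 N of thrust against 100 N of wheel drag: a net forward force remains
// even above wind speed, because vCart / (vCart - vWind) > 1 as long as the
// wind keeps blowing.
```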
A person is not really either a rationalist or an irrationalist. There’s no general “rationality level”: a person can be more or less rational depending on the subject, the time, etc. Belief in God may not be that irrational, depending on how you define God. And the community should of course not ban someone based on their beliefs in some particular matter. You can probably have a “rational discussion” on other subjects quite well.
Also, there’s nothing inherently irrational about chasing UFOs or buying lottery tickets.
Isn’t it more sane to donate money to organizations fighting existential risks rather than spending it on cryonics?
Somewhat off-topic, but I’d like to see someone write a GreaseMonkey script which hides the name of the commenter and the current score on all comments, so you’re not influenced by the status of the commenter and/or the comment’s current score. The commenter’s name could still be shown on mouseover so you can reply to them, though.
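Something like this minimal sketch would do it (written as TypeScript here; with the type annotation stripped it’s plain JavaScript that GreaseMonkey can run). The .comment, .author, and .score selectors and the @match pattern are hypothetical; you’d have to check the site’s actual markup:

```typescript
// ==UserScript==
// @name   Hide commenter names and karma
// @match  http://lesswrong.com/*
// @grant  none
// ==/UserScript==

// Hide every commenter name, revealing it on mouseover so you can reply.
document.querySelectorAll<HTMLElement>(".comment .author").forEach(name => {
  const original = name.textContent ?? "";
  name.textContent = "[hidden]";
  name.addEventListener("mouseover", () => { name.textContent = original; });
  name.addEventListener("mouseout", () => { name.textContent = "[hidden]"; });
});

// Hide the karma score on every comment entirely.
document.querySelectorAll<HTMLElement>(".comment .score").forEach(score => {
  score.style.display = "none";
});
```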
I think people shy away from wireheading because a future full of wireheads would be very boring indeed. People like to think there’s more to existence than that. They want to experience something more interesting than eternal pleasure. And that’s exactly what an FAI should allow.
What we need are studies of the damage from vitrification when the operation was not done immediately after death, but after a few hours, as usually happens.
You can’t compare those, because the economic crisis happened mostly after Bush. Large debts have been run up by pretty much all Western nations.
Well, hormones and chemicals such as DMT or endocannabinoids surely affect the thinking process. But the phrasing of the question doesn’t make clear whether these count.
Does the MWI make rationality irrelevant? All choices are made in some universe (because there’s at least one extremely improbable quantum event which arranges the particles in your brain to produce any given choice). Therefore, you will make the correct choice in at least one universe.
Of course, this leads to the problem of continuity of conscious experience (or the lack of it), and of whether you should care about what happens to you in all the possible future worlds in which you will exist.
Reddit’s license is a free-software license, so it’s better to use that term instead of ‘open source’.