If the zombies are writing these consciousness papers, then they would have to have our beliefs, and they would strongly believe that THEY were conscious. So how do we know that we’re conscious? If we weren’t, we would still think we were, so there’s really no way to determine if we’re actually the zombies.
While the guess that seems to have the highest probability is the most important to test, anything with a moderately high probability should be tested too, as long as it doesn’t eat up too many resources. This matters most when experiments take a long time- if Hypothesis A is more likely than Hypothesis B, but testing either would take 3 years, you don’t want to test only A and risk spending 3 years for nothing when you could test both at the same time and find out whether either is correct.
It’s probably best not to update based on expertise. Updating toward the experts would usually improve accuracy, since they are more likely to be right than chance, or than most people’s opinions, but it stops anyone from forming anti-expert opinions. Accuracy isn’t as important as discovery: the only way anyone discovers anything new is by finding ideas that seem probable despite disagreeing with the experts, and if you update too much just because of who believes something, you’ll very rarely make any scientific progress.
What about Eliezer? He founded Less Wrong- why isn’t he part of the team anymore?
I was wondering- what happened on June 16, 2017? Most of the users on Less Wrong, including Eliezer, seem to have “joined” at that point, but Less Wrong was created on February 1, 2009, and I’ve seen posts from before 2017.
Is there a 2018 or 2019 survey anywhere? I tried to find it, and I’ve seen some things from both you and Yvain, but I can’t find any surveys past this one.
Zyzzx Prime could always do either:
1. No rulers; every single member votes on every issue
2. Select scientists (not leading scientists, of course, just average ones) and have them work on genetic engineering. No one can know who they are, and they work at minimum wage. (Of course, it could be hard to convince them to do this.)
From what I’ve seen, most people seem to argue two-box, and the one-boxers usually just say that Omega needs to think you’ll be a one-boxer, so precommit even if it later seems irrational… I haven’t seen this exact argument yet, but I might have just not read enough.
Since Newcomb’s Problem, the boxes, and Omega don’t actually exist, we can’t physically conduct the experiment. However, based on the rules of the problem we can calculate the expected payoff of each strategy. In this fictional world, we are already told that Omega guesses correctly 99% of the time, and since we learned that from Newcomb himself it counts as a fact about this fictional world. This means that 99% of the time, the one-boxer gets $1,000,000, and 99% of the time the two-boxer gets $1,000.

That’s like saying that we can’t be sure of whether purebloods are stronger in HPMOR. Even though we haven’t seen any evidence in our world, since there are no purebloods in the real world, Yudkowsky tells us the facts in HPMOR, and since Yudkowsky’s word is fact about HPMOR, this confirms the hypothesis “purebloods are no stronger than other wizards.” And even though we haven’t seen any Omega evidence in our world, Newcomb tells us the facts in his problem, and since Newcomb’s word is fact about Newcomb’s problem, this confirms the hypothesis “one-boxers almost always do better than two-boxers.”
If a pre-Galileo person wrote a fictional story about a different land in which heavier objects fell faster, then in that world, heavier objects would fall faster. By simple mathematics, we can show that under the conditions stated by Newcomb, we should take only one box.
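The expected-payoff arithmetic above can be spelled out. This is just a sketch of the standard calculation, taking the 99% predictor accuracy and the usual dollar amounts ($1,000,000 opaque box, $1,000 transparent box) as stipulated facts of the problem:

```python
# Expected payoffs in Newcomb's problem, given the stipulated
# 99% predictor accuracy and standard dollar amounts.
ACCURACY = 0.99  # Omega predicts your choice correctly 99% of the time

# One-boxer: $1,000,000 when predicted correctly, $0 when Omega errs.
ev_one_box = ACCURACY * 1_000_000 + (1 - ACCURACY) * 0

# Two-boxer: $1,000 when predicted correctly,
# $1,001,000 when Omega wrongly predicted one-boxing.
ev_two_box = ACCURACY * 1_000 + (1 - ACCURACY) * 1_001_000

print(round(ev_one_box))  # 990000
print(round(ev_two_box))  # 11000
```

So under the problem’s own stipulations, the one-boxer averages roughly ninety times the two-boxer’s payoff.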
You do realize that other people work on AI? Sure, Eliezer might be the most important, but he is not the only member of MIRI’s team. I’d definitely sacrifice several people to save him, but nowhere near 3^^^3. Eliezer’s death would delay the Singularity, not stop it entirely, and certainly not destroy the world.
Use your wonderful “inventions” and knowledge about the “future” to show your amazing powers. Then explain to them that you are Mercury, god of a lot of different things, including some forms of prophecy. But just as Jupiter had done previously to Neptune and Apollo, Jupiter has now sent you down to Earth in the form of a human to work off a debt, as you have committed a grave crime against Jupiter (Neptune and Apollo had tried to overthrow him).
As Mercury, you are assigned by Jupiter to serve the Emperor of Rome. Continue to impress them, and as they worship you, gain power and strength in the society. Also, use your modern rationality/science to advise the Emperor until you control most of his decisions, leaving him as merely a puppet while you receive most of the praise and make most of the actual laws of Rome.
While you are gaining power, you are also trusted by the Emperor and manage to steal money. Even if you are caught (which, ideally, you aren’t), they would never dare beat or kill a god, and it wouldn’t hurt your image as “Mercury”- after all, one of the things he’s best known for is being the god of thieves. Eventually, you start bribing officials to help you. You build trust among the leaders of Rome.
When the Emperor is “mysteriously assassinated” you, Mercury, prophet, inventor, god, nobleman, wise, skilled at rulership, wealthy, trustworthy, high-ranked, and adored, become his replacement. If anyone asks why a servant is going to become the Emperor, you tell them that your orders were to serve the government of Rome, and its people, and what better way to do that than to rule it in a way that makes life for the people better?! Especially after you make some donations from the treasures of Rome that appease some of the groups that include the people questioning you, and kill the other questioners for blasphemy.
You are the Emperor of Rome.
I know this solution requires a lot of luck, and could be foiled, but it seems to me that impersonating a god would be the best option.
The two experiments would differ. In Experiment 1, we have received evidence of a 70% cure rate. Experiment 2 doesn’t offer the same evidence, because it stops as soon as the rate gets significantly over 60%. Because results are random, the observed rate will not always match the true probability. If the real probability were 70%, wouldn’t it most likely have reached 70% early on, with 7 out of 10 or 14 out of 20? For most of Experiment 2, fewer than 60% of the patients were cured. The fact that the rate happened to climb by 100 patients was most likely a fluke in the data, and if the experiment were continued it would probably drop back below 60%.
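The upward bias from this kind of stopping rule can be checked with a toy simulation. The exact stopping rule isn’t specified above, so this sketch makes one up: check after every patient, stop once the observed cure rate exceeds 60% (after at least 20 patients), and cap the experiment at 100 patients, with an assumed true cure rate of exactly 60%:

```python
import random

# Toy Monte Carlo: a "stop when the cure rate looks good" rule
# biases the reported rate upward even when the true rate is 60%.
# The stopping rule here is a made-up stand-in, not the original's.
random.seed(0)
TRUE_P = 0.6          # assumed true cure probability
MIN_N, MAX_N = 20, 100
THRESHOLD = 0.6

reported_rates = []
for _ in range(10_000):
    cures = 0
    for n in range(1, MAX_N + 1):
        cures += random.random() < TRUE_P
        if n >= MIN_N and cures / n > THRESHOLD:
            break  # experimenter stops early and reports the rate
    reported_rates.append(cures / n)

avg_reported = sum(reported_rates) / len(reported_rates)
print(round(avg_reported, 3))  # noticeably above the true 0.6
```

Runs that stop early report a rate above 60% by construction, while runs reaching the cap report at most 60%, so the average reported rate overshoots the truth.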
Just say, “I’m not able to assign a very high probability to any possibility, since I don’t have very much information, but the possibility that I would assign the highest probability to is the tree having ___ to ___ apples, with a probability of ___%.” You don’t know how many there are, but you can still admit that you don’t know while assigning a probability.
Um… what do all of those comments mean? Also, I’m wondering how Harry became so smart. I know part of it was from [Spoiler from Book Six] but that really wouldn’t have been enough, even combined with science. Why is it that Harry was able to think rationally and create a test, but Michael wasn’t even willing to consider the idea?
Argument is of course a good thing among rational people, since refusing to argue and agreeing to disagree solves nothing- you won’t come to any agreement and you won’t know what’s right. But I think the reason many people see argument as a bad thing is that most people are too stubborn to admit they are wrong, so argument among most people is pointless, because one or both sides is unwilling to actually debate. If people admitted when they were wrong, argument wouldn’t be treated as such a bad thing, but as it is, with no one willing to see the truth, it often ends up accomplishing nothing.
Apart from planning, optimism seems to be a problem in many situations. Since I’ve read this article and others, I’ve tried to correct my incorrect beliefs, and whenever I have the belief “this scenario is how I want it to be” I immediately take it as a warning sign and reevaluate the belief, and most of the time I’ve been too optimistic… I remember earlier in my life, in fourth grade, being positive that a certain person I had a crush on liked me back. I overheard a conversation in which she said that she liked someone else. I went over why I had believed it and realized I had had absolutely zero evidence of anything. My “intuition” had told me what I wanted to be true.
Intuition is insanely biased. Whatever you think, your estimate is probably far too optimistic unless you evaluate the probability from the outside view, find an estimate that seems accurate, and then chop it in half.
I think that since so few people have even heard of Glomarization or meta-honesty, they’ll be too suspicious. It’s better to just say you haven’t done it. Now, to anyone here or on other websites who knows about these things and rationality, or a Gestapo soldier who knows I know about them- to them, I would Glomarize. If one of you asked me if I had robbed a bank, I would tell you I couldn’t answer that because of its effect on my counterfactual selves. If anyone else, who didn’t know about Glomarization, asked me if I had robbed a bank, I would tell them I hadn’t. I mean, imagine being a police officer, going to a suspect’s house, asking if they had robbed a bank, and hearing “I refuse to answer that question.” They would take that as a confession.
One of the problems people have with complete immortality is a lack of purpose. They think that if we were immortal, then we would never get anything done because we could always just put it off a few hundred years, and time would be meaningless. Also, we would be bored with life after we did everything. But we could always invent new technology, and create some sort of law system that gave special privileges to those who worked.
And even if they’re worried about immortality, complete immortality is impossible. But what’s wrong with a long lifespan? What’s wrong with thousands or millions of years? People seem to think that the suffering in life should make it not worth it to live… but in that case, why are they living today? Very few people want to die today. Tomorrow, they won’t want to die. The next day, they won’t want to die. And if they put a certain limit on life, I expect that if they get that old, they won’t want to die, no matter what they said. Why, then, do they insist now that they will want to die, and refuse cryonics or other lifespan-increasing options?