luzr: The AI can have a provable and predictable goal system and still have free will. Pretty much the same way humans have free will.
Lightwave:
7. Analogous action: administer the potion described in 6.
:D
Barry Schwartz has a TED talk on the topic:
http://www.ted.com/index.php/talks/barry_schwartz_on_the_paradox_of_choice.html
@Will: we only need to figure out the nonperson predicate; the FAI will figure out the person predicate afterwards (if uploading as we currently understand it is what we will want to do).
Another way of putting those statements would be:
1. It is (physically) possible for some people to jump off a cliff.
2. It is (physically) impossible for NonSuicidalGuy to jump off a cliff.
Or
It is physically possible only for some people to jump off a cliff.
Or
1. It is physically possible for NonSuicidalGuy to jump off a cliff, if he wanted to.
2. It is physically impossible for NonSuicidalGuy to want to jump off a cliff.
Is it possible that a non-conscious zombie exists that behaves exactly like a human, but is of a different design from a human, i.e. it is explicitly designed to behave like a human without being conscious (and is also explicitly designed to talk about consciousness, write philosophy papers, etc.)? What would be the moral status of such a creature?
So if we created a brain emulation that wakes up one morning (in a simulated environment), lives happily for a day, and then goes to bed after which the emulation is shut down, would that be a morally bad thing to do? Is it wrong? After all, living one day of happiness surely beats non-existence?
Ian C.: "Yes, a designed life form can have paper clip values, but I don't think we'll encounter any naturally occurring beings like this. So our provincial little values may not be so provincial after all, but common on many planets." Almost all life forms (especially simpler ones) are paperclip maximizers of a sort: they just make copies of themselves ad infinitum. If life could leave this planet and use materials more efficiently, it would consume everything. Good for us that evolution couldn't optimize them to that extent.
Why wouldn’t the Informers inform the public of what the Persuaders are trying to do (i.e. they’re not providing unbiased information, they just want you to believe them)?
Something to Protect seems like the best way to get people to care about Rationality. I’d definitely want them to read that.
Can we get a date next to each post title in the recent posts page?
How can you be sure that in the historical scenario the Byzantine Emperor actually did the “right thing”, i.e. that he wouldn’t have done better by doing something else? It’s the teachers who have to decide that. Also, what if the Emperor got the “right answer” for the wrong reasons, and the student also got the “right answer” for the wrong reasons? It’s up to the teacher to decide that as well. The best thing you can do is have several groups of rationalists selecting the scenarios and verifying the students’ answers, but ultimately, whether you use real-life or fictional scenarios, you’re comparing the teachers to the students.
Same thing with measuring the “success” of people in real life. They could’ve arrived at the correct answer for the wrong reasons; it’s up to the teachers to decide whether the reasons were right or wrong, i.e. whether they were actually rational or just lucky.
In order to assess the rationality of the students, you need to use the same sort of things/tests that convinced you that the teachers are rational in the first place. The same things that make the teachers’ tastes real can be matched against the students’ tastes.
Precommitting should be, as someone already said, signing a paper with a third party agreeing to give them $1000 in case you fail to give the $100 to Omega. Precommitment means you have no other option. You can’t say that you both precommitted to give the $100 AND refused to do it when presented with the case.
Which means: if Omega presents you with the scenario before the coin toss, you precommit (by signing the contract with the third party). If Omega presents you with the scenario after the coin toss AND also tells you it has already come up tails, then you haven’t precommitted, and therefore you shouldn’t give it the $100.
EDIT: Also, some people objected to not giving the $100, because they might be the emulation which Omega uses to predict whether you’d really give money. If you were an emulation, then you would remember precommitting in expectation to get $10,000 with a 50% chance. It makes no sense for Omega to emulate you in a scenario where you don’t get a chance to precommit.
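To make the arithmetic explicit, here is a minimal sketch of the expected value of precommitting before the coin toss. The $10,000 and $100 figures come from the scenario above; treating “never precommit” as worth $0 is an assumption for illustration only.

```python
# A minimal sketch of the expected-value arithmetic behind precommitting
# before the coin toss. The $10,000 and $100 figures are from the scenario
# above; valuing "don't precommit" at $0 is an illustrative assumption.

p_heads = 0.5
win_if_heads = 10_000      # what Omega pays you if the coin comes up heads
loss_if_tails = -100       # what you pay Omega if it comes up tails

ev_precommit = p_heads * win_if_heads + (1 - p_heads) * loss_if_tails
ev_refuse = 0.0            # never signing, never paying, never winning

print(ev_precommit)        # 4950.0 -- before the toss, precommitting wins
print(ev_refuse)           # 0.0
```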
Okay, I agree that this level of precommitting is not necessary. But if the deal is really a one-time offer, then, when presented with the case of the coin already having come up tails, you can no longer ever benefit from being the sort of person who would precommit. Since you will never again be presented with a Newcomb-like scenario, you will have no benefit from being the precommitting type. Therefore you shouldn’t give the $100.
If, on the other hand, you still expect that you can encounter some other Omega-like thing which will present you with such a scenario, doesn’t this make the deal repeatable, which is not how the question was formulated?
I think you are using “rational” with two different meanings. If looking down will cause you to freeze and panic, then the rational thing is not to look down. If knowledge of the fact you’re taking sugar pills destroys the placebo effect, then the rational thing is not to know you’re taking sugar pills (assuming you’ve exhausted all other options). It’s either that, or directly hacking your brain.
A better way to describe this might be to call these phenomena “irrational feelings”, “irrational reactions”, etc. The difference is, they’re all unintentional. So while you’re always rational in your intentional actions, you can still be unintentionally affected by some irrational feelings or reactions. And you correct for those unintentional reactions (which supposedly you can’t just simply remove) by changing your intentional ones (i.e. you intentionally and rationally decide not to look down, because you know you will otherwise be affected by the “irrational reaction” of panicking).
This game sounds a lot like Mafia.
Maybe this could be announced in advance next time? Like a couple of hours instead of 7 minutes. >.>
Given the stakes, it seems to me the most rational thing to do here is to try to convince the other person that you should both cooperate, and then defect.
The difference between this dilemma and Newcomb’s problem is that Newcomb’s Omega predicts perfectly which box you’ll take, whereas the Creationist cannot predict whether you’ll defect or not.
The only way you can lose is if you screw up so badly at trying to convince him to cooperate (i.e. you’re a terrible liar or bad at communicating in general and confuse him) that he instead becomes convinced he should defect now. So the biggest factor when deciding whether to cooperate or defect should be your ability to convince.
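As a rough sketch of why defecting looks attractive once his choice can’t depend on yours: for any fixed probability that he cooperates, defecting has the higher expected payoff. The payoff numbers below are invented for this sketch (they are not from the original post); only their ordering matters, and the caveat is exactly the one above, that the persuasion attempt itself can change his probability of cooperating.

```python
# Illustrative payoff table for a one-shot prisoner's dilemma, from your
# point of view. The numbers are invented for this sketch; only the usual
# ordering (temptation > reward > punishment > sucker) matters.
payoff = {
    ("C", "C"): 3,   # both cooperate
    ("C", "D"): 0,   # you cooperate, he defects
    ("D", "C"): 5,   # you defect, he cooperates
    ("D", "D"): 1,   # both defect
}

def expected_payoff(my_move, p_he_cooperates):
    """Expected payoff when his choice is independent of yours."""
    return (p_he_cooperates * payoff[(my_move, "C")]
            + (1 - p_he_cooperates) * payoff[(my_move, "D")])

for p in (0.1, 0.5, 0.9):
    # Defecting beats cooperating for every fixed p -- unless the act of
    # persuading him shifts p for the worse.
    print(p, expected_payoff("C", p), expected_payoff("D", p))
```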
In this scenario you can actually replace Omega with a person (e.g. a mad scientist or something), who just happens to be the only one who has, say, a cure for the disease which is about to kill a couple of billion people.
I propose the following solution as optimal. It is based on two assumptions.
We’ll call the two sides Agent 1 (Humanity) and Agent 2 (Clippy).
Assumption 1: Agent 1 knows that Agent 2 is logical and will use logic to decide how to act, and vice versa.
This assumption simply means that we do not expect Clippy to be extremely stupid or randomly pick a choice every time. If that were the case, a better strategy would be to “outsmart” him or find a statistical solution.
Assumption 2: Both agents know each other’s ultimate goal/optimization target (i.e. Agent 1 - saving as many people as possible, Agent 2 - making as many paperclips as possible).
This is included in the definition of the dilemma.
Solution: “Cooperate on the first round, and on succeeding rounds do whatever your opponent did last time, with the exception of the last (100th) round. Evaluate these conditions at the beginning of each round.”
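A minimal sketch of this strategy, assuming moves are encoded as "C"/"D" and rounds are numbered 1 to 100; the function name and the choice to return None for round 100 are my own illustration, since the solution deliberately leaves the last round open (see the note at the end).

```python
# A minimal sketch of the proposed strategy. Moves are "C" (cooperate) or
# "D" (defect); rounds are numbered 1..100. Returning None for round 100
# just marks that the solution deliberately leaves the last round open.

def proposed_move(round_number, opponent_history):
    """Decide this round's move from the opponent's moves so far."""
    if round_number == 1:
        return "C"                    # cooperate on the first round
    if round_number == 100:
        return None                   # the last round is left undecided here
    return opponent_history[-1]       # otherwise mirror the opponent's last move
```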
Any other solution will not do as well. Let’s consider a few examples (worst-case scenarios):
Scenario 1:
Round 1: Agent 1 cooperates, Agent 2 defects.
Rounds 2-100: Agent 1 defects, Agent 2 defects.

Scenario 2:
Rounds 1 to X-1: Agent 1 cooperates, Agent 2 cooperates.
Round X: Agent 1 cooperates, Agent 2 defects.
Rounds X+1 to 100: Agent 1 defects, Agent 2 defects.

Scenario 3:
Rounds 1-99: Agent 1 cooperates, Agent 2 cooperates.
Round 100: Agent 1 cooperates, Agent 2 defects.
So, in the worst case you “lose” one round. You can try to switch between cooperating and defecting several times; in the end one side will end up with only one “loss”, since everything else will be equal.
Note that the solution says nothing about the 100th round (where the question of what to do only arises if both sides cooperated on the 99th round).
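As a rough check of the “you lose at most one round” claim, here is a self-contained simulation of rounds 1-99 (leaving round 100 aside, per the note above) against an opponent who cooperates until some round X and defects from then on. Counting a round as “lost” when Agent 1 cooperates while Agent 2 defects is my own framing of the claim.

```python
# Rough check of the "you lose at most one round" claim for rounds 1..99,
# leaving the 100th round aside as the note above does. The opponent is
# assumed to cooperate until round X and defect from X onward; counting a
# round as "lost" when Agent 1 cooperates while Agent 2 defects is my framing.

def agent1_move(round_number, opponent_history):
    return "C" if round_number == 1 else opponent_history[-1]

def rounds_lost(defection_start):
    agent2_history, lost = [], 0
    for r in range(1, 100):
        a1 = agent1_move(r, agent2_history)
        a2 = "C" if r < defection_start else "D"
        if a1 == "C" and a2 == "D":
            lost += 1
        agent2_history.append(a2)
    return lost

print([rounds_lost(x) for x in (1, 2, 50, 99)])   # -> [1, 1, 1, 1]
```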