Whoops, yes, that was rather stupid of me. Should be fixed now: my most preferred outcome is me backstabbing Clippy, my least preferred is him backstabbing me. In between, I prefer cooperation to defection. That doesn’t change my point: since we both have that preference list (with the asymmetrical ones reversed), it’s impossible to get either asymmetrical option, and hence (C,C) and (D,D) are the only options remaining. Hence you should co-operate if you are faced with a truly rational opponent.
I’m not sure whether this holds if your opponent is very rational, but not completely. Or if that notion actually makes sense.
Sorry for being a pain, but I didn’t understand exactly what you said. If you’re still an active user, could you clear up a few things for me? Firstly, could you elaborate on counterfactual definiteness? Another user said “contrafactual”; is this the same thing, and what do other interpretations say on this issue?
Secondly, I’m not sure what you meant by the whole universe being ruled by hidden variables. I’m currently interpreting that as the universe coming pre-loaded with random numbers to use, and therefore being fully determined by that list along with the current probabilistic laws. Is that what you meant? If not, could you expand a little on that for me? It would help my understanding. Again, this is quite a long time post-event, so if anyone reading this could respond, that would be helpful.
In reality, not very surprised. I’d probably be annoyed/infuriated depending on whether the actual stakes are measured in billions of human lives.
Nevertheless, that merely represents the fact that I am not 100% certain about my reasoning. I do still maintain that rationality in this context definitely implies trying to maximise utility (even if you don’t literally define rationality this way, any version of rationality that doesn’t try to maximise when actually given a payoff matrix is not worthy of the term) and so we should expect that Clippy faces a similar decision to us, but simply favours the paperclips over human lives. If we translate from lives and clips to actual utility, we get the normal prisoner’s dilemma matrix—we don’t need to make any assumptions about Clippy.
In short, I feel that the requirement that both agents are rational is sufficient to rule out the asymmetrical options as possible, and clearly sufficient to show (C,C) > (D,D). I get the feeling this is where we’re disagreeing and that you think we need to make additional assumptions about Clippy to assure the former.
I understood that Clippy is a rational agent, just one with a different utility function. The payoff matrix as described is the classic Prisoner’s Dilemma, where one billion lives is one human utilon and one paperclip is one Clippy utilon; since we’re both trying to maximise utilons, and we’re supposedly both good at this, we should settle for (C,C) over (D,D).
Another way of viewing this would be that my preferences run thus: (D,C); (C,C); (D,D); (C,D), and Clippy’s run like this: (C,D); (C,C); (D,D); (D,C). This should make it clear that, no matter what assumptions we make about Clippy, it is universally better to co-operate than to defect. The two asymmetrical outcomes can be eliminated on the grounds of being impossible if we’re both rational, and then defecting no longer makes any sense.
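If it helps, the elimination argument can be sketched in a few lines of Python. The payoff numbers are just illustrative ordinal utilities I’ve made up to match the two preference orderings above; nothing hinges on their exact values.

```python
# Payoffs as (my_utilons, clippy_utilons) for each (my_move, clippy_move).
# The numbers are made up; only the orderings matter.
payoffs = {
    ("D", "C"): (3, 0),  # my favourite, Clippy's worst
    ("C", "C"): (2, 2),  # mutual cooperation
    ("D", "D"): (1, 1),  # mutual defection
    ("C", "D"): (0, 3),  # my worst, Clippy's favourite
}

# If we're equally rational, symmetry rules out the asymmetric outcomes:
# only outcomes where we both play the same move remain.
symmetric = {k: v for k, v in payoffs.items() if k[0] == k[1]}

# Of what's left, (C,C) beats (D,D) for both players.
best = max(symmetric, key=lambda k: symmetric[k][0])
print(best)  # ('C', 'C')
```

The point of the sketch is just that once the asymmetric outcomes are off the table, maximising over the remainder leaves cooperation as the unique best choice for both of us.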
7 years late, but you’re missing the fact that (C,C) is universally better than (D,D). Thus whatever logic is being used must have a flaw somewhere, because it works out worse for everyone; a reasoning process that successfully gets both parties to co-operate is a WIN. (However, in this setup, actually winning would be either (D,C) or (C,D), both of which are presumably impossible if we’re equally rational.)
Are you sure that’s right chronologically? I ask because in the UK we use dd/mm/yy and we say “fourteenth of March, twenty-fifteen”.
Japan apparently uses yy/mm/dd which makes even more sense, but I have no idea how they pronounce their dates. Point being, I’m not sure which order things actually evolved in.
This would to some extent mean letting Harry keep his wand: he wants to have some fun, after all, and Harry should be given a very limited chance to win. Not much of one; maybe strip him naked, surround him with armed killers and point a gun at his head, whilst giving him only a minute to think. But leave him his wand, and do give him the full 60 seconds; don’t just kill him if he looks like he’s stalling.
Well, seeing as he was almost prophesied to fail, it was sensible to make sure Harry would have someone to stop him in the future. And as it turns out, this was a very good idea.
It’s actually the same tactic the Weasley twins used to cover the “engaged to Ginevra Weasley” story: plant so many fake newspaper reports that everyone gets confused. And it kind of happens again after the Hermione/Draco incident. I guess Eliezer likes the theme of people not being able to discern the truth from wild rumours if the truth’s weird enough.
So… what we should do now is work out all the things Quirrell should have done before this. He couldn’t predict partial transfiguration, true. But he knew that Harry had a power he knew not, and had a long time to plan for contingencies.
Personally, I think he should have had the Death Eaters Disillusioned, surrounding Harry but at a distance, cast holograms to confuse him, and then used Ventriloquism Charms. At the very least, Disillusionment should be as much of a general tactic as a massed Finite, and the Death Eaters could have been hidden.
The massively more obvious solution is just to kill Harry quickly, and moreover to not EVER offer the protagonist 60 seconds to try to save himself, no matter how interesting that sounds.
Any other general/specific tactics that LV could and should have thought of in advance? He had an entire year to plan this, and has Harry-level intelligence. He should have predicted this and outplayed him.
Other than modesty, letting Hermione take the credit is a very elegant solution to a slew of problems: he can say that House Potter owes her no enmity after her defeat of their common foe, and this gets rid of any lingering doubts about her attacking Malfoy (probably...).
1 G is a high acceleration, but it doesn’t build up much speed initially. That gives him about two-thirds of a second before his head falls below ground level (0.64 s to fall 2 m).
I think Dumbledore’s been suggested, but I have no idea and I’m pretty sure there isn’t conclusive evidence anywhere.
At least one other person has suggested stating plainly, in Parseltongue, that the optimal way to kill Harry would be to send him to Azkaban and let him kill the dementors. If that doesn’t kill him, then continue with the previous plan.
I doubt this is in fact the safest way to dispose of Harry, but it might serve as an extra idea to gain time.
Yes, that particular plan is highly improbable, and LV can search the globe for Harry-builders in his own time.
The elements of this that are threatening are: if you kill me, that might not avert the prophecy; and if you kill me, I might come back to haunt you (means unspecified in both cases). The standard answer to the former is that if prophecies can’t be averted then this is all a waste anyway, so LV might as well try to kill Harry. The second is harder, but I model Voldemort as rejecting this, although I don’t quite know why.
Accusations in Parseltongue are not necessarily true; the speaker merely believes them. (Actually, this raises the possibility of lying with the aid of a Confundus Charm. I’ll assume that’s banned by some Rule.) If you were trying to mitigate the chance of someone destroying the world, you would place a very high probability on them trying to trick you. The response is to use Hermione’s algorithm that defeated LV earlier and place an ethical injunction on not killing Harry.
Now, that’s probably a little harsh for the exam question, and LV won’t necessarily adopt his enemy’s tactic (even though it defeated him once, and that’s one of his rules), but I should think he requires substantial evidence to not kill Harry. More than an accusation of not being ambitious, which is explained by Harry’s naivety.
So what you’re saying… is that Harry should sing?
You can’t say 2+2=3, so no. You will input the word ‘true’ as the simplest fix.
My model of Voldemort is highly risk averse when it comes to existential risk. His response to this is to laugh at having been told he has no ambition, then to kill Harry.
Voldemort trusts himself not to destroy the world, just the same way as Harry trusts himself. Maybe we shouldn’t be so trusting of either.
This is new as far as I can tell. Please write up a review based around this. Based on a cursory read-through of Reddit, it might be best not to do this in prose; it apparently takes even longer to evaluate, and Eliezer’s plan has backfired.
http://tvtropes.org/pmwiki/pmwiki.php/Main/GoneHorriblyRight (look at last entry in “Fan Works” tab)