The old version is there if you scroll down.
They learned the phrase from Dumbledore when they gave him the map. The fact that they don’t remember the map but imperfectly remember the phrase does support the “poorly done obliviation” hypothesis (either because the obliviator missed removing the phrase, or the obliviator intended for them to keep the phrase but accidentally messed it up).
(or, less likely, ‘Deligitor Prodi’ would not have worked for the twins and the obliviator intentionally altered it so that it would)
A few moments later, Fred and George were handing over the Map to the Headmaster, wincing only slightly at the sacrilege of giving their precious piece of the Hogwarts security system to the person who actually owned it, and the old wizard was frowning at the apparent blankness.
“You’ve got to say,” they explained, “I solemnly swear that I am up to no good—”
“I decline to lie,” said the old wizard. He held the Map high and bellowed, “Hear me, Hogwarts! Deligitor prodi!” An instant later the Headmaster was wearing the Sorting Hat, which looked scarily right upon his head, as though Dumbledore had always been waiting for a patchwork pointed hat to complete his existence.
(Fred and George immediately memorized this phrase, just in case it would work for somebody besides the Headmaster, and began trying to think of pranks that would involve the Sorting Hat.)
Possibly. Since Fred and George thought it would have been a good deal scarier “if she hadn’t said the same thing to every single other student in their Divination class” rather than “if she hadn’t said the same thing every year/week/day,” it’s weakly implied that this is the first time she’s made this prediction.
edit: although “every single other student in their Divination class” is somewhat ambiguous and could mean that over the course of the semester she has made this prediction for everyone at different times.
Well (in chapter 90), McGonagall’s first visit seemed to be of her own accord, but then the Defense Professor went in and, upon returning, said this to her:
“And though it is not my own area of expertise, Deputy Headmistress, if there is any way you can imagine to convince the boy to stop sinking further into his grief and madness—any way at all to undo the resolutions he is coming to—then I suggest you resort to it immediately.”
Manipulating and convincing people of things is absolutely Quirrell’s area of expertise and it seems plausible that he realizes that putting immense pressure on McGonagall to do something (because poor old Quirrell sure can’t!) will cause her to make poor decisions regarding whether Harry should be left alone and/or unobstructed in his activities.
Further supported by Snape’s line from when he enters the room at the beginning of chapter 51:
“I also cannot imagine what the Deputy Headmistress is thinking,” said the Potions Master of Hogwarts. “Unless I am meant to serve as a warning of where it will lead you, if you decide to take the blame for her death upon yourself.”
and by the continuing pressure Quirrell exerts on McGonagall at the end of chapter 52:
“That would be worse than pointless. Dumbledore cannot reach the boy. At best he is wise enough to know this and make things no worse. I lack the requisite frame of mind. You are the one who—but I see that you still look for others to save you.”
Again Quirrell cites his own inability to help with the problem and now disqualifies Dumbledore as well. The last part in particular echoes Harry’s criticism of her ineffectiveness, and I wouldn’t be surprised if Quirrell was somehow aware of their exchange and using McGonagall’s weakened confidence to spur her to action.
So Quirrell seems to be manipulating McGonagall directly and everyone else by extension.
In the same vein, I get easily distracted when reading text, and the ability to click around and select and deselect the text I’m reading helps me stay engaged.
Writing that out, it sounds like it would be super distracting, but it’s not (for me). Possibly related to the phenomenon where some people work better with noise in the background rather than in silence. Clicking around might help maintain a minimum level of stimulation while reading.
On the flip-side, I know almost nothing about music, was unable to understand a lot of the video, and still enjoyed it quite a bit.
Yeah, it sounded like a first person perspective of Harry-in-shock to me.
The only people who would view this event as “killing an enemy under Dumbledore’s protection” and think that the death of a first year girl makes Lucius look like a winner are the people already on Lucius’ side.
Note that this seems to contradict the glowing bat experiments performed in chapter 22.
“Seriously? You seriously have to say Oogely boogely with the duration of the oo, eh, and ee sounds having a ratio of 3 to 1 to 2, or the bat won’t glow? Why? Why? For the love of all that is sacred, why?”
Sure, I didn’t mean to imply that there were literally zero situations that could be described as Newcomb-like (though I think that particular example is a questionable fit). I just think they are extremely rare (particularly in a competitive context such as poker or sports).
edit: That example is more like a prisoner’s dilemma where Kate gets to decide her move after seeing Joe’s. Agree that Newcomb’s definitely has similarities with the relatively common PD.
I don’t see how those are Newcomb situations at all. When I try to come up with an example of a Newcomb-like sports situation (e.g. football, since plays are preselected and revealed more or less simultaneously), I get something like the following:
you have two plays A and B (one-box, two-box)
the opposing coach has two plays X and Y
if the opposing coach predicts you will select A they will select X and if they predict you will select B they will select Y.
A vs X results in a moderate gain for you. A vs Y results in no gain for you. B vs Y results in a small gain for you. B vs X results in a large gain for you.
You both know all this.
The problem lies in the third assumption. Why would the opposing coach ever select play X? Symmetrically, if Omega were actually competing against you and trying to minimize your winnings, why would it ever put a million dollars in the second box?
Newcomb’s works, in part, due to Omega’s willingness to select a dominated strategy in order to mess with you. What real-life situation involves an opponent like that?
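To make that concrete, here is a minimal sketch (the specific payoff numbers are my own illustrative assumptions, not anything from an actual playbook; only their ordering matters) showing that Y dominates X from the opposing coach’s point of view:

```python
# Hypothetical football payoffs, in expected yards gained by you.
# The numbers are illustrative assumptions; only their ordering matters.
# Your plays: A ("one-box") and B ("two-box"). Coach's plays: X and Y.
payoffs = {
    ("A", "X"): 5,   # moderate gain for you
    ("A", "Y"): 0,   # no gain
    ("B", "X"): 15,  # large gain
    ("B", "Y"): 2,   # small gain
}

# The opposing coach wants to minimize your gain. Y is at least as good
# for the coach as X against every play you might call:
for my_play in ("A", "B"):
    assert payoffs[(my_play, "Y")] <= payoffs[(my_play, "X")]

print("Y dominates X for the coach; a competitive coach never calls X.")
```

So a coach who is actually trying to beat you never has a reason to call X, which is exactly the willingness to play a dominated strategy that Omega supplies and real opponents don’t.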
Well, I am referring specifically to an instinctive/emotional impulse driven by the heavily ingrained belief that money does not appear or disappear from closed boxes. If you don’t experience that impulse or will always be able to override it then yes, one-boxing in real life would be just as easy as in the abstract.
As per my above response to shminux, I think this effect would be diminished and eventually reversed after personally observing enough successful predictions.
I was surprised by the more general statement “that in a real-life situation even philosophers would one-box.” In the specific example of an iterated Newcomb (or directly observing the results of others) I agree that two-boxers would probably move towards a one-box strategy.
The reason for this, at least as far as I can introspect, has to do with the salience of actually experiencing a Newcomb situation. When reasoning about the problem in the abstract I can easily conclude that one-boxing is the obviously correct answer. However, when I sit and really try to imagine the two boxes sitting in front of me, my model of myself in that situation two-boxes more often than the me sitting at his computer does. I think a similar effect may be at play when I imagine myself physically present as person after person two-boxes and finds one of the boxes empty.
So I think we agree that observe(many two-box failures) --> more likely to one-box.
I do think that experiencing the problem as traditionally stated (no iteration or actually watching other people) will have a relationship of observe(two physical boxes, predictor gone) --> more likely to two-box.
The second effect is probably weak as I think I would be able to override the impulse to two-box with fairly high probability.
Really? I feel like I would be more inclined to two-box in the real life scenario. There will be two physical boxes in front of me that already have money in them (or not). It’ll just be me and two boxes whose contents are already fixed. I will really want to just take them both.
I’ve been curious why all the formulations of Newcomb’s I’ve read give Omega/Predictor an error rate at all. Is it just to preempt reasoning along the lines of “well he never makes an error that means he is a god so I one-box” or is there a more subtle, problem-relevant reason that I’m missing?
My issue with this argument is that you are implicitly claiming that social interaction --> manipulation. On the face of it this is probably more or less true. Most social interactions do involve (mild) manipulations such as suggesting an activity, asking someone to pass the [object], or telling a story to elicit sympathy/respect. However, you then claim that these types of manipulations are ones intelligent people “feel iffy about.”
I’m certainly willing to accept that there are types of manipulation that make the manipulator feel guilty and could possibly cause social awkwardness. But I very much doubt the claim that most social interactions consist of these types of manipulation and that this is what leads to the social clumsiness some smart people exhibit.
Also, the evo-psych justification that “evolution has programmed us to have repulsion towards unfairly manipulating others” seems like a big stretch. I would actually expect the opposite to be true to the extent that your manipulations weren’t blatant enough to trigger retaliation.
“Perfectly” was a poor choice of words. I would expect there to be much more variation in the combinations of beliefs that people hold than is observed. People who favor more aid to the poor are likely to also be pro choice. People who are pro war are likely to be pro life (this is true for US politics, at least).
It is not obvious why these particular beliefs should be connected. I think you could make a convincing “just so” story for the sets of beliefs as they are and for their opposites.
edit: in a world where people thought through each of their beliefs independently I would expect the ratio of numBelieves(pro war, pro life) : numBelieves(pro war, pro choice) to be a lot closer to 1 than we observe.
In a universe where the majority of people did not form clusters of beliefs centered around a political identity I would be extremely surprised to find so many people whose beliefs happened to match up perfectly[redacted] with one of only a few political stereotypes.
In my experience it seems that people choose their political identity based on a few beliefs that are important to them and pick up the rest as part of the identity package.
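As a rough illustration of the ratio claim in the edit above (a hedged sketch only; the 50/50 splits and the 0.8/0.2 “identity” weights below are made-up numbers, not survey data): if the two positions were formed independently, numBelieves(pro war, pro life) : numBelieves(pro war, pro choice) would just track the overall pro life / pro choice split and sit near 1, whereas a shared political-identity variable pushes it well above 1.

```python
import random

random.seed(0)
N = 100_000

def independent():
    # Each position is an independent coin flip (assumed ~50/50 purely for illustration).
    pro_war = random.random() < 0.5
    pro_life = random.random() < 0.5
    return pro_war, pro_life

def identity_driven():
    # Both positions are mostly downstream of a single political identity.
    right_wing = random.random() < 0.5
    pro_war = random.random() < (0.8 if right_wing else 0.2)
    pro_life = random.random() < (0.8 if right_wing else 0.2)
    return pro_war, pro_life

for name, sample in (("independent", independent), ("identity-driven", identity_driven)):
    people = [sample() for _ in range(N)]
    war_and_life = sum(1 for war, life in people if war and life)
    war_and_choice = sum(1 for war, life in people if war and not life)
    print(f"{name}: numBelieves(pro war, pro life) : numBelieves(pro war, pro choice) ~ "
          f"{war_and_life / war_and_choice:.2f}")
```

With these made-up numbers the independent world gives a ratio near 1, while even the fairly mild clustering in the identity-driven world pushes it to roughly 2.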
I would imagine it’s because explaining your downvote (unless there is a particularly good reason) also decreases the signal-to-noise ratio.