I guess different readers see things very differently, because I thought that McGonagall was a total badass in this chapter.
When someone makes a major mistake, based on an accumulation of errors from years of acting on a distorted version of their values, it takes a high-level rationalist and an impressive level of control and insight to be able to acknowledge their mistake, clearly see the values that were distorted, and set a new course that repudiates their old ways and appropriately takes their values into account. To be able to do that within a few hours, publicly, when they learned of their mistake through a vicious, personal, inappropriate chewing-out, seems like it might require one of those rumored double rationalists.
Or, if you must view it as a Harry vs. McGonagall conflict, McGonagall kicks his ass. In precisely the way that he needed to have his ass kicked.
Strength of membership in the LW community was related to responses on most of the questions. There were three questions related to strength of membership: karma, sequence reading, and time in the community. Since they were all correlated with each other and showed similar patterns, I standardized them and averaged them together into a single measure. Then I checked whether this measure of strength of membership in the LW community was related to answers on each of the other questions, for the 822 respondents (described in this comment) who answered at least one of the probability questions and used percentages rather than decimals (since I didn't want to take the time to recode the answers that were given as decimals).
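In case it's useful, here is a minimal sketch of that aggregation step in Python/pandas. The DataFrame and column names are hypothetical; the actual survey coding isn't shown here.

```python
import pandas as pd

# Hypothetical column names for the three membership questions.
COMPONENTS = ["karma", "sequence_reading", "time_in_community"]

def membership_strength(df: pd.DataFrame) -> pd.Series:
    """Standardize each component to mean 0 / SD 1, then average the z-scores."""
    z = (df[COMPONENTS] - df[COMPONENTS].mean()) / df[COMPONENTS].std()
    return z.mean(axis=1)
```

Averaging z-scores weights the three components equally, which seems like the natural reading of "standardized them and averaged them together."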
All effects described below have p < .01 (I also indicate when there is a nonsignificant trend with p < .2). On categorical questions I wasn't as rigorous: if there was a significant effect overall, I just eyeballed the differences and reported which categories have the clearest difference (and I skipped some of the background questions that had tons of different categories and are hard to interpret).
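Concretely, the two kinds of checks might look something like the sketch below. The names are hypothetical again, and I'm assuming a correlation test for the continuous answers and an overall chi-square test for the categorical ones, which matches the "significant effect overall" description but isn't spelled out above.

```python
import pandas as pd
from scipy import stats

def test_continuous(df: pd.DataFrame, question: str):
    """Correlate membership strength with a numeric answer (e.g. a probability)."""
    r, p = stats.pearsonr(df["strength"], df[question])
    return r, p  # report if p < .01; note a trend if p < .2

def test_categorical(df: pd.DataFrame, question: str, n_bins: int = 3):
    """Bin strength into groups and run an overall chi-square test on the categories."""
    groups = pd.qcut(df["strength"], n_bins, labels=False)
    table = pd.crosstab(groups, df[question])
    chi2, p, dof, _expected = stats.chi2_contingency(table)
    return chi2, p  # if significant, eyeball the table for the clearest differences
```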
Compared to those with weaker ties to the LW community, those with stronger ties are:
Background:
Gender—no difference
Age—no difference
Relationship Status—no difference
Sexual Orientation—no difference
Relationship Style—less likely to prefer monogamous, more likely to prefer polyamorous or to have no preference
Political Views—less likely to be socialist, more likely to be libertarian (but this is driven by the length of time in the community, which may reflect changing demographics—see my reply to this comment)
Religious Views—more likely to be atheist & not spiritual, especially less likely to be agnostic
Family Religion—no difference
Moral Views—more likely to be consequentialist
IQ—higher
Probabilities:
Many Worlds—higher
Aliens in the universe—lower (edited: I had mistakenly reversed the two aliens questions)
Aliens in our galaxy—trend towards lower (p = .04)
Supernatural—lower
God—lower
Religion—trend towards lower (p = .11, and this is statistically significant with a different analysis)
Cryonics—lower
Anti-Agathics—trend towards higher (p = .13) (this was the one question with a significant non-monotonic relationship: those with a moderately strong tie to the community had the highest probability estimate, while those with weak or strong ties had lower estimates)
Simulation—trend towards higher (p = .20)
Global Warming—higher
No Catastrophe—lower (i.e., they think it is less likely that we will make it to 2100 without a catastrophe, and thus that the chance of xrisk is higher)
Other Questions:
Singularity—sooner (this is statistically significant after truncating the outliers; see the sketch after this list), and more likely to give an estimate rather than leave it blank
Type of XRisk—more likely to think that Unfriendly AI is the most likely XRisk
Cryonics Status—more likely to be signed up for cryonics or to be considering it, less likely to say they aren't planning to sign up or haven't thought about it
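On the outlier truncation mentioned under Singularity: I haven't specified exactly which rule was used, but a simple version (dropping estimates above an upper quantile, so a handful of extreme answers like "year 1,000,000" don't dominate the result) might look like this:

```python
import pandas as pd

def truncate_outliers(years: pd.Series, upper_q: float = 0.95) -> pd.Series:
    """Drop Singularity-year estimates above the chosen quantile before analysis."""
    return years[years <= years.quantile(upper_q)]
```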