LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.
jimrandomh (Jim Babcock)
You’re wrong about this. Trust in the CDC is not a single-variable scale and not a generically useful resource. Trust in the CDC is a mix of people’s estimation of the CDC’s competence, and their estimation of whether the CDC is biased towards under-response or over-response. It is severely harmful for people to over-estimate the CDC’s competence, or to fail to recognize that the CDC is biased towards under-response.
Previous over-estimation of the CDC’s competence caused many parties that could have bypassed the CDC to create and deploy tests to instead fail to respond in time. I expect that decision-makers currently relying on the CDC’s competence will implement distancing measures and ban gatherings much too late.
The main reason we might want people to over-estimate the CDC’s competence is that this trust could be used to solve coordination problems. However, the coordination problems that CDC could plausibly solve—closing airports, banning public gatherings, and implementing quarantines—are problems that it solves using legal power, not using generic community trust. To the extent that community trust is required to implement such measures, knowing that the CDC has been consistently biased towards under-response will make it easier, to a greater degree than knowing that they’ve been incompetent will make it harder.
My evaluation is that reducing trust in the CDC has net-positive consequences. But note that, separately, I don’t think an evaluation of this depth is typically required before truthfully speaking about an organization’s credibility. I expect that nearly all of the time, when trading off between speaking truth and empowering an institution, speaking truth is the correct move, and those who think otherwise will be mistaken.
There’s a model-fragment that I think is pretty important to understanding what’s happened around Michael Vassar, and Scott Alexander’s criticism.
Helping someone who is having a mental break is hard. It’s difficult for someone to do for a friend. It’s difficult for professionals to do in an institutional setting, and I have tons of anecdotes from friends and acquaintances, both inside and outside the rationality community, of professionals in institutions fucking up in ways that were traumatizing or even abusive. Friends have some natural advantages over institutions: they can provide support in a familiar environment instead of a prison-like environment, and make use of context they have with the person.
When you encounter someone who’s having a mental break or is giving off signs that they’re highly stressed and at risk of a mental break, the incentivized action is to get out of the radius of blame (see Copenhagen Interpretation of Ethics). I think most people do this instinctively. Attempting to help someone through a break is a risky and thankless job; many more people will hear about it if it goes badly than if it goes well. Anyone who does it repeatedly will probably find their name attached to a disaster and a mistake they made that sounds easier to avoid than it really was. Nevertheless, I think people should try to help their friends (and sometimes their acquaintances) in those circumstances, and that when we hear how it went, we should adjust our interpretation accordingly.
I’ve seen Michael get involved in a fair number of analogous situations that didn’t become disasters and that no one heard about, and that significantly affects my interpretation, when I hear that he’s been in the blast-radius of situations that did.
I think Scott Alexander looked at some stories (possibly with some rumor-mill distortions added on), and took a “this should be left to professionals” stance. And I think the “this should be left to professionals” stance looks better to him, as a professional who’s worked only in above-average institutions and who can fix problems when he sees them, than it does to people collecting anecdotes from others who’ve been involuntarily committed.
Lots of the comments here are pointing at details of the markets and whether it’s possible to profit off of knowing that transformative AI is coming. Which is all fine and good, but I think there’s a simple way to look at it that’s very illuminating.
The stock market is good at predicting company success because there are a lot of people trading in it who think hard about which companies will succeed, doing things like writing documents about those companies’ target markets, products, and leadership. Traders who do a good job at this sort of analysis get more funds to trade with, which makes their trading activity have a larger impact on the prices.
Now, when you say that:
the market is decisively rejecting – i.e., putting very low probability on – the development of transformative AI in the very near term, say within the next ten years.
I think what you’re claiming is that market prices are substantially controlled by traders who have a probability like that in their heads. Or traders who are following an algorithm which had a probability like that in the spreadsheet. Or something like that. Some sort of serious cognition, serious in the way that traders treat company revenue forecasts.
And I think that this is false. I think their heads don’t contain any probability for transformative AI at all. I think that if you could peer into the internal communications of trading firms, and you went looking for their thoughts about AI timelines affecting interest rates, you wouldn’t find thoughts like that. And if you did find an occasional trader who had such thoughts, and quantified how much impact they would have on the prices if they went all-in on trading based on that theory, you would find their impact was infinitesimal.
Market prices aren’t mystical, they’re aggregations of traders’ cognition. If the cognition isn’t there, then the market price can’t tell you anything. If the cognition is there but it doesn’t control enough of the capital to move the price, then the price can’t tell you anything.
I think this post is a trap for people who think of market prices as a slightly mystical source of information, who don’t have much of a model of what cognition is behind those prices.
(Comment cross-posted with the EA forum version of this post)
There’s been a lot of previous interest in indoor CO2 in the rationality community, including an (unsuccessful) CO2 stripper project, some research summaries and self experiments. The results are confusing, I suspect some of the older research might be fake. But I noticed something that has greatly changed how I think about CO2 in relation to cognition.
Exhaled air is about 50,000 ppm CO2. Outdoor air is about 400 ppm; indoor air ranges from 500 to 1,500 ppm depending on ventilation. Since exhaled air has a CO2 concentration about two orders of magnitude higher than the variance in room CO2, if even a small percentage of inhaled air is reinhalation of exhaled air, this will have a significantly larger effect than changes in ventilation. I’m having trouble finding a straight answer about what percentage of inhaled air is rebreathed (other than in the context of mask-wearing), but given the diffusivity of CO2, I would be surprised if it wasn’t at least 1%.
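The mixing arithmetic above can be sketched as a two-source model (the 1% rebreathed fraction is an assumed illustrative value, not a measurement):

```python
def effective_co2(room_ppm, rebreathed_fraction, exhaled_ppm=50_000):
    """Approximate CO2 concentration of inhaled air, modeled as a linear
    mixture of room air and one's own recently exhaled air."""
    return (1 - rebreathed_fraction) * room_ppm + rebreathed_fraction * exhaled_ppm

# In a well-ventilated room (500 ppm), rebreathing just 1% of one's own
# exhaled air adds ~495 ppm -- comparable to the entire 500-1,500 ppm
# range attributable to ventilation.
with_rebreathing = effective_co2(500, 0.01)  # 995.0
still_air_room = effective_co2(1500, 0.01)   # 1985.0
```

On this model, eliminating rebreathing (a breeze across the face) moves inhaled CO2 about as much as moving between a poorly and a well ventilated room.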
This predicts that a slight breeze, which replaces the air in front of your face and prevents reinhalation, would have a considerably larger effect than ventilating an indoor space where the air is mostly still. This matches my subjective experience of indoor vs outdoor spaces, which, while extremely confounded, feels like an air-quality difference larger than CO2 sensors would predict.
This also predicts that a small fan, positioned so it replaces the air in front of my face, would have a large effect on the same axis as improved ventilation would. I just set one up. I don’t know whether it’s making a difference but I plan to leave it there for at least a few days.
(Note: CO2 is sometimes used as a proxy for ventilation in contexts where the thing you actually care about is respiratory aerosol, because it affects transmissibility of respiratory diseases like COVID and influenza. This doesn’t help with that at all and if anything would make it worse.)
I don’t know how most articles get into that section, but I know, from direct communication with a Time staff writer, that Time reached out and asked for Eliezer to write something for them.
IAWYC, and introspective access to what my mind was doing on this timescale was one of the bigger benefits I got out of meditation. (Note: Probably not one of the types of meditation you’ve read about). However, I don’t think you’ve correctly identified what went wrong in the example with red. Consider this analogous conversation:
What’s a Slider? It’s a Widget.
What’s a Widget? It’s a Drawable.
What’s a Drawable? It’s an Object.

In this example, as with the red/color example, the first question and answer was useful and relevant (albeit incomplete), while the next two were useless. The lesson you seem to have drawn from this is that looking down (subclassward) is good, and looking up (superclassward) is bad. The lesson I draw from this is that relevance falls off rapidly with distance, and that each successive explanation should be of a different type. It is better to look a short distance in each direction rather than to look far in any one direction. Compare:
X is a color. This object is X. (One step up, one step down)
X is a color. A color is a quality that things have. (Two steps up)
This object is X. That object is also X. (Two steps down)

I would expect the first of these three explanations to succeed, and the other two to fail miserably.
Reusing a response I made to a previous UFO story, on a mailing list, lightly edited because the same logic still applies.
There’s one core truth that you need to understand, and then all the talk of UFOs, videos, and the reactions to them make sense.
The US military has secret aircraft. Other militaries also have secret aircraft. These are kept in reserve for high-stakes operations. For example, in 2011, a previously-unseen model of stealth helicopter crashed in the middle of the raid on Osama bin Laden’s compound. Rumor is that the Chinese military got to inspect the wreckage; if true, this would be a pretty major fuckup, since it would enable them to plan around its capabilities, to design radars to detect it, and to attribute any operations using it to the United States.
The performance characteristics of secret military aircraft are military secrets. They are highly prototypical military secrets. That means the secrecy radiates a few conceptual steps outward: our own country’s aircraft are secret, what we know about other countries’ aircraft is secret, what we know that other countries know about our aircraft is secret, and so on. Deliberate disinformation is expected; if you look far enough back in time for things to be declassified, you’ll find publicly-reported examples of the US putting out fake aircraft mockups for Soviet satellites to photograph, and similar tricks. There are a few videos taken from fighter-jet sensor packages floating around; these require some expertise to interpret, or else you’ll wind up thinking that the sharpen filter is a glowing aura, or that the parallax is a fast movement speed, or that image-stabilization problems are fast accelerations. As it happens, the characteristics of fighter-jet sensor packages are *also* military secrets (perhaps a bit less well kept), which means that 100% of the people who are qualified to interpret those videos, are also legally forbidden from talking publicly about them.
With that as background, there’s nothing left to explain. Given a specific video, it can be hard to tell whether it’s an aircraft with a surprising capability, or a fake video, or a sensor issue. That’s the point; foreign military strategists will also look at those videos, and encounter the same problems. Dispelling the confusion would mean accepting a substantial handicap in future military operations, and there’s no reason to do that.
This post seems mostly wrong and mostly deceptive. You start with this quote:
“After many years, I came to the conclusion that everything he says is false. . . . “He will lie just for the fun of it. Every one of his arguments was tinged and coded with falseness and pretense. It was like playing chess with extra pieces. It was all fake.”
This is correctly labelled as being about someone else, but is presented as though it’s making the same accusation, just against a different person. But this is not the accusation you go on to make; you never once accuse him of lying. This sets the tone, and I definitely noticed what you did there.
As for the concrete disagreements you list: I’m quite confident you’re wrong about the bottom line regarding nonphysicalism (though it’s possible his nosology is incorrect, I haven’t looked closely at that). I think prior to encountering Eliezer’s writing, I would have put nonphysicalism in the same bucket as theism (ie, false, for similar reasons), so I don’t think Eliezer is causally upstream of me thinking that. I’m also quite confident that you’re wrong about decision theory, and that Eliezer is largely correct. (I estimate Eliezer is responsible for about 30% of the decision-theory-related content I’ve read). On the third disagreement, regarding animal consciousness, it looks like a values question paired with word games; I’m not sure there’s even a concrete thing (that isn’t a definition) for me to agree or disagree with.
I am now reasonably convinced (p>0.8) that SARS-CoV-2 originated in an accidental laboratory escape from the Wuhan Institute of Virology.
1. If SARS-CoV-2 originated in a non-laboratory zoonotic transmission, then the geographic location of the initial outbreak would be drawn from a distribution which is approximately uniformly distributed over China (population-weighted); whereas if it originated in a laboratory, the geographic location is drawn from the commuting region of a lab studying that class of viruses, of which there is currently only one. Wuhan has <1% of the population of China, so this is (order of magnitude) a 100:1 update.
2. No factor other than the presence of the Wuhan Institute of Virology and related biotech organizations distinguishes Wuhan or Hubei from the rest of China. It is not the location of the bat-caves that SARS was found in; those are in Yunnan. It is not the location of any previous outbreaks. It does not have documented higher consumption of bats than the rest of China.
3. There have been publicly reported laboratory escapes of SARS twice before in Beijing, so we know this class of virus is difficult to contain in a laboratory setting.
4. We know that the Wuhan Institute of Virology was studying SARS-like bat coronaviruses. As reported in the Washington Post today, US diplomats had expressed serious concerns about the lab’s safety.
5. China has adopted a policy of suppressing research into the origins of SARS-CoV-2, which they would not have done if they expected that research to clear them of scandal. Some Chinese officials are in a position to know.
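The size of the update in point 1 can be sketched with Bayes’ rule in odds form (the likelihoods are the order-of-magnitude estimates from above, not precise figures):

```python
def posterior_odds(prior_odds, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_evidence_given_h / p_evidence_given_not_h)

# P(outbreak starts near Wuhan | lab escape) ~ 1, since the only lab studying
# this class of virus is there; P(outbreak starts in Wuhan | zoonosis) ~ 0.01,
# Wuhan's approximate share of China's population. Observing a Wuhan origin
# therefore multiplies the odds of the lab hypothesis by roughly 100.
likelihood_ratio = posterior_odds(1.0, 1.0, 0.01)  # 100.0
```

This is the sense in which the geographic coincidence alone is an order-of-magnitude 100:1 update, before the other points are considered.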
To be clear, I don’t think this was an intentional release. I don’t think it was intended for use as a bioweapon. I don’t think it underwent genetic engineering or gain-of-function research, although nothing about it conclusively rules this out. I think the researchers had good intentions, and screwed up.
There’s something I think you’re missing here, which is that blackmail-in-practice is often about leveraging the norm enforcement of a different community than the target’s, exploiting differences in norms between groups. A highly prototypical example is taking information about sex or drug use which is acceptable within a local community, and sharing it with an oppressive government which would punish that behavior.
Allowing blackmail within a group weakens that group’s ability to resist outside control, and this is a very big deal. (It’s kind of surprising that, this late in the conversation about blackmail, no one seems to have spotted this.)
There’s a wrinkle here that I think changes the model pretty drastically: people vary widely in how readily they pick up skills. The immediate implication is that selecting on skills is selecting on a mix of age, teachability, and alignment between their past studies and the skillset you’re testing. Counterintuitively, this means that a test which is narrowly focused on the exact skillset you need will do worse at testing for teachability, so if most of what you need is ultimately going to come from future training and study, then the more broad the skillset tested, the better.
Was Sam Altman acting consistently with the OpenAI charter prior to the board firing him?
So, Nonlinear-affiliated people are here in the comments disagreeing, promising proof that important claims in the post are false. I fully expect that Nonlinear’s response, and much of the discussion, will be predictably shoved down the throat of my attention, so I’m not too worried about missing the rebuttals, if rebuttals are in fact coming.
But there’s a hard-won lesson I’ve learned by digging into conflicts like this one, which I want to highlight, which I think makes this post valuable even if some of the stories turn out to be importantly false:
If a story is false, the fact that the story was told, and who told it, is valuable information. Sometimes it’s significantly more valuable than if the story was true. You can’t untangle a web of lies by trying to prevent anyone from saying things that have falsehoods embedded in them. You can untangle a web of lies by promoting a norm of maximizing the available information, including indirect information like who said what.
Think of the game Werewolf, as an analogy. Some moves are Villager strategies, and some moves are Werewolf strategies, in the sense that, if you notice someone using the strategy, you should make a Bayesian update in the direction of thinking the person using that strategy is a Villager or is a Werewolf.
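The update described here is ordinary Bayes’ rule; a minimal sketch, with made-up illustrative probabilities:

```python
def p_werewolf_given_move(prior, p_move_if_werewolf, p_move_if_villager):
    """Posterior probability that a player is a Werewolf after observing a move."""
    numerator = prior * p_move_if_werewolf
    denominator = numerator + (1 - prior) * p_move_if_villager
    return numerator / denominator

# With a 25% base rate of Werewolves, observing a move three times likelier
# from a Werewolf than from a Villager raises the posterior to 50%.
posterior = p_werewolf_given_move(0.25, 0.6, 0.2)  # 0.5
```

The point is that the observation of who made which move carries information even when the move itself contains falsehoods.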
Deep Learning systems don’t look like they FOOM. Stochastic Gradient Descent doesn’t look like it will treacherous turn.
I think you’ve updated incorrectly, by failing to keep track of what the advance predictions were (or would have been) about when a FOOM or a treacherous turn will happen.
If foom happens, it happens no earlier than the point where AI systems can do software-development on their own codebases, without relying on close collaboration with a skilled human programmer. This point has not yet been reached; they’re idiot-savants with skill gaps that prevent them from working independently, and no AI system has passed the litmus test I use for identifying good (human) programmers. They’re advancing in that direction pretty rapidly, but they’re unambiguously not there yet.
Similarly, if a treacherous turn happens, it happens no earlier than the point where AI systems can do strategic reasoning with long chains of inference; this again has an idiot-savant dynamic going on, which can create the false impression that this landmark has been reached, when in fact it hasn’t.
This feels like a nice crisp example of how Twitter is broken in ways that generate disinformation. A user with 531 followers, not someone who can reasonably be expected to treat their tweets as a journalistic product or employ a factchecker, made an understandable mistake. This produced a politically-potent but inaccurate soundbite.
A substantial fraction of the quote-tweets are refutations. These are only visible if you check for them explicitly. This creates a confirmation bias trap: if you think the economic left is pushing a false narrative, you’ll be more likely to check, see that the tweet is wrong, and reinforce that belief. And if you think that inequality is a huge problem and the economic situation of the poor is dire, then you’ll be less likely to check, again reinforcing that belief.
This is specifically a property of systems that lack both downvotes and a good way of sorting replies, which is why this sort of thing is especially common and especially bad on Twitter.
Extracting and signal boosting this part from the final blog post linked by Winterford:
One time when I was being sexually assaulted after having explicitly said no, a person with significant martial arts training pinned me to the floor. … name is Storm.
I had not heard this accusation before, and do not know whether it was ever investigated. I don’t think I’ve met Storm, but I’m pretty sure I could match this nickname to the legal name of someone in the East Bay by asking around. Being named as a rapist in the last blog post of someone who later committed suicide is very incriminating, and if this hasn’t been followed up it seems important to do so.
This is an unusually difficult post to review. In an ideal world, we’d like to be able to review things as they are, without reference to who the author is. In many settings, reviews are done anonymously (with the author’s name stricken off), for just this reason. This post puts that to the test: the author is a pariah. And ordinarily I would say, that’s irrelevant, we can just read the post and evaluate it on its own merits.
Other comments have mentioned that there could be PR concerns, ie, that making the author’s existence and participation on LessWrong salient is embarrassing. I don’t think this is an appropriate basis for judging the post, and would prefer to judge it based on its content.
The problem is, I think this post may contain a subtle trap, and that understanding its author, and what he was trying to do with this post, might actually be key to understanding what the trap is.
Ialdabaoth had a metaproblem, which was this: he had conspicuous problems, in a community full of people who would try to start conversations where they help analyze his problems for him; but if those people truly understood him, they might turn on him. So he created narratives to explain why those conversations were so confusing, why he wouldn’t follow the advice, and why the people trying to help him were actually wronging him, and therefore indebted. This post is one such narrative. Here’s another.
The core idea of this post is that spectrum-direction advice is structured as a pair of failure modes, which may have either a variably-sized gap or a variably-sized overlap, depending on the post. This is straightforwardly true. But I think that the next inferential step the post takes after that, about how people do and should respond to that, is wrong. Charles, David, and Edgar should all be rejecting the frame in which they’re tuning {B}, and instead be looking for third options which make {B} irrelevant. This is easy to overlook when {B} is a generic placeholder rather than a specific behavior, but becomes clear when applied to specific examples. Edgar, in particular, is described as doing a probably-catastrophically-wrong thing, presented as though it were the obvious reaction to circumstances.
I suspect that, if this concept were widespread and salient, especially presented in its current form, the main effect would be to help people rationalize their way out of doing the obvious things to solve their problems, and to explain their confusion when other people seem to not be doing the obvious things. I think there’s a next-inferential-step post that I would be happy with, but this one isn’t it.
This is probably not the solution Harry’s going to use in Chapter 81 (I’m writing this before it was posted), but a friend and I were discussing it and came up with a possible solution. I decided it would be much more fun as a piece of fanfanfiction rather than an abstract description, so here it is. I hope you have as much fun reading it as I did writing.
Chapter 81b: Alternate Solution
Beyond all panic and despair his mind began to search through every fact in its possession, recall everything it knew about Lucius Malfoy, about the Wizengamot, about the laws of magical Britain; his eyes looked at the rows of chairs, at every person and every thing within range of his vision, searching for any opportunity it could grasp -
And the start of an idea formed—not a plan, but a tiny fragment of one. He spelled out N-O-T-E on his fingers, and, as discreetly as he could, drew a piece of paper out from his bag that he did not remember putting there. It read:
"Mess with time if you want!"
And then he heard a loud bang, and another while he was stuffing the note back in his bag, and he looked up to see that a circular piece had pushed out from the wall (the wall that could’ve withstood a nuclear explosion), far in the back where no one had been looking. Heads turned in unison to look as four glowing, silver human shapes emerged from the three-foot diameter hole, and began walking down the aisle towards Hermione. No one in the room but Harry and Dumbledore suspected they were Patronuses.
Prime Minister Fudge should have been angry, that magical creatures would dare barge in; but for some reason he couldn’t quite place, he was calm. Auror Gawain was too busy casting shield spells to acknowledge how scared he was. Harry had a pretty good idea where this was going, but decided that “confused” was the best expression to wear. Professor McGonagall nearly had a stroke. Lucius Malfoy’s angry expression had vanished, leaving his face perfectly blank. His entire row had stood up, and drawn their wands. To his left, five wizards Harry didn’t recognize were pointing at the human Patronuses; to his right, seven wizards pointed their wands at Dumbledore.
Lucius himself had his wand, and his gaze, fixed firmly on Harry. For a brief and accidental moment, the boy who thought he was a rock looked back.
Wands too numerous to count followed those glowing figures, as they walked down the aisle towards Hermione. Harry noticed that Fawkes had perched silently on her shoulder, and she was taking slow, deep breaths.
Behind each wand, a wizard thought that someone else ought to do something. A rare upside to the bystander effect, Harry would later note. For the time being, his mind was busy choreographing the movements of four invisible figures, who were definitely not bumping into each other. When the Patronuses had reached the bottom-most platform, where Hermione sat, they stopped, and looked up at Dumbledore’s platform.
“Who dares interrupt these proceedings?” Dumbledore’s voice boomed out. In fact, he was glad that they had been interrupted, and knew exactly who he was talking to; but as Chief Warlock, he had to express indignance, or else someone else would have gone and done it for him.
This better be good, Harry thought, because I won’t be able to think of anything else once I’ve been anchored.
“We are the Guardians of Merlin”, said the first Patronus, in Harry’s best impression of a Scottish accent.
“In that case, I yield the floor to the Guardians of Merlin”, said Dumbledore. “May I ask why you are here?”
“We were a safeguard created by Merlin, to protect the purity of the Wizengamot. In his wisdom, Merlin set down a list of especially vile deeds; should this assembly decide to perform one, we awaken. And so we are here.”
Lucius turned away from Harry, and towards the front. “Ridiculous. This is no different than the many other times we have punished murderers, and no ghosts or apparitions appeared then.” He put a slight emphasis on “ghosts or apparitions”. He had no idea what they really were, but there was ample precedent saying ghosts and apparitions weren’t allowed to do things.
Harry wondered what lie his future self would tell. Then the second Patronus spoke, in exactly the same voice as the first. “It is different, because sending this girl to Azkaban would satisfy the first requirement for a ritual!”
The murmurs stopped. Several members of the audience suddenly noticed the dementor in the room, on a level where they had not noticed it before. Professor McGonagall actually did have a stroke, but it was a small one, of a kind that could be fully repaired by magic later. For a moment, Dumbledore lost himself in his role and forgot that he was speaking to four copies of Harry Potter.
Five seconds passed before Dumbledore broke the silence. “Are you saying that this trial is part of a dark ritual?”
“Yes”, said all four patronuses simultaneously, convincing several members of the assembly to abandon the idea that they were all controlled by one person. The figures were new, important, and mysterious. Hermione was no longer salient.
“Do you know who could be behind this?” Dumbledore asked.
Heads turned towards Lucius, who looked around and noted exactly whose heads they were, handling the sudden deluge of important information by recording only the ways in which it differed from what he would have expected. Lucius knew then, that he had to lose; not only was he facing four new and completely unknown pieces, pieces which had been powerful enough to carve a hole in the indestructible wall of the Wizengamot, his own role was looking altogether too suspicious. He looked left, met the eyes of his servant, August Stoessel, and sent a thought.
Two seats to the left, August stood up and shouted, “It must be Lord Voldemort!” The audience’s attention shifted slightly. Lucius decided that four days later, Stoessel—Imperiused and falsely rumored to be a perfect occlumens—would confess to the whole thing, claiming (though no one would believe the last part) to have been Imperiused by Lord Voldemort himself.
Dumbledore looked very disturbed. Onlookers did not find this surprising, but they would have been surprised by the reason, if they knew. Dumbledore had just put the pieces together—Harry had performed an advanced plot, and had time-turned in spite of his time turner’s locked shell, just as he must have done on the day Bellatrix Black broke out of Azkaban.
“Talk of dark rituals is unfit for discussion here”, Dumbledore said, a little shakily. “If there are no objections, I believe we can suspend the previous vote and reconvene tomorrow morning, after the Ministry has had a chance to speak with these Guardians. We will vote whether to release or punish Hermione then, with fuller information.”
Lucius did not object. He would have a whole day to plan his next move. Harry did not object. He would have a whole day to plan his next move.
The Guardians of Merlin left first, through the strange hole from which they had come. Then the Aurors left, taking Hermione, their Patronuses, and the dementor, slightly smaller but still intact. Then the audience left, Harry among them, and he excused himself to go to the bathroom, where he anchored his time turner inside its shell like Quirrell had shown him, and spun the shell twice. Finally Dumbledore left; but he was only two steps out the door when he disillusioned himself, spun his time turner twice, and reentered.
Two hours earlier, an invisible Harry Potter was wandering around the Wizengamot building, first looking for his earlier self so he could place the “Mess with time if you want” note, then looking for the other side of the wall he had seen cut open. He found it in a secluded storeroom, with ten minutes to spare, set down a piece of paper and marked it with a single tally. Soon he was joined by another Harry, who had used his time turner only once, and another, and another. Rather than take off their invisibility cloaks, they announced their arrival by marking the paper with a second, third, and fourth tally.
Dumbledore watched invisibly from inside the Wizengamot chamber as four invisible Harry Potters used partial transfiguration to cut a hole in the wall. He watched invisibly as four Human Patronuses entered the room. And then an invisible Harry Potter bumped into the invisible Dumbledore, changing events from how they were meant to go; and the entire twisted tangle of time loops collapsed into a paradox and never was. Reality would take a different path, one in which Harry chose a simpler solution, one that did not require three things to all happen.
Addendum: A whistleblower claims that CDC wanted to advise elderly and fragile people to not fly on commercial airlines, but removed this advice at the White House’s direction.
Where the CDC and White House are in conflict, I believe the CDC is more credible (and I believe this is consensus); however, this looks like a clear-cut case where the CDC’s political situation forced it to be less honest and understate risk.
I’m sure there are many people whose inner experience is like this. But, negative data point: Mine isn’t. Not even a little. And yet, I still believe AGI is likely to wipe out humanity.