LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.
jimrandomh (Jim Babcock)
[Link] Still Alive—Astral Codex Ten
Memetic Hazards in Videogames
LessWrong Now Has Dark Mode
[Question] Will COVID-19 survivors suffer lasting disability at a high rate?
Less Wrong Polls in Comments
A Significant Portion of COVID-19 Transmission Is Presymptomatic
You’re wrong about this. Trust in the CDC is not a single-variable scale and not a generically useful resource. Trust in the CDC is a mix of peoples’ estimation of the CDC’s competence, and their estimation of whether the CDC is biased towards under-response or over-response. It is severely harmful for people to over-estimate the CDC’s competence, or to fail to recognize that the CDC is biased towards under-response.
Having previously over-estimated the CDC’s competence caused many parties that could have bypassed the CDC to create and deploy tests to fail to respond in time. I expect that decision-makers currently relying on the CDC’s competence will implement distancing measures and ban gatherings much too late.
The main reason we might want people to over-estimate the CDC’s competence is that this trust could be used to solve coordination problems. However, the coordination problems that CDC could plausibly solve—closing airports, banning public gatherings, and implementing quarantines—are problems that it solves using legal power, not using generic community trust. To the extent that community trust is required to implement such measures, knowing that the CDC has been consistently biased towards under-response will make it easier, to a greater degree than knowing that they’ve been incompetent will make it harder.
My evaluation is that reducing trust in the CDC has net-positive consequences. But note that, separately, I don’t think an evaluation of this depth is typically required before truthfully speaking about an organization’s credibility. I expect that nearly all of the time, when trading off between speaking truth and empowering an institution, speaking truth is the correct move, and those who think otherwise will be mistaken.
There’s a model-fragment that I think is pretty important to understanding what’s happened around Michael Vassar, and Scott Alexander’s criticism.
Helping someone who is having a mental break is hard. It’s difficult for someone to do for a friend. It’s difficult for professionals to do in an institutional setting, and I have tons of anecdotes from friends and acquaintances, both inside and outside the rationality community, of professionals in institutions fucking up in ways that were traumatizing or even abusive. Friends have some natural advantages over institutions: they can provide support in a familiar environment instead of a prison-like environment, and make use of context they have with the person.
When you encounter someone who’s having a mental break, or is giving off signs that they’re highly stressed and at risk of one, the incentivized action is to get out of the radius of blame (see Copenhagen Interpretation of Ethics). I think most people do this instinctively. Attempting to help someone through a break is a risky and thankless job; many more people will hear about it if it goes badly than if it goes well. Anyone who does it repeatedly will probably find their name attached to a disaster, and to a mistake they made that sounds easier to avoid than it really was. Nevertheless, I think people should try to help their friends (and sometimes their acquaintances) in those circumstances, and that when we hear how it went, we should adjust our interpretation accordingly.
I’ve seen Michael get involved in a fair number of analogous situations that didn’t become disasters and that no one heard about, and that significantly affects my interpretation, when I hear that he’s been in the blast-radius of situations that did.
I think Scott Alexander looked at some stories (possibly with some rumor-mill distortions added on), and took a “this should be left to professionals” stance. And I think the “this should be left to professionals” stance looks better to him, as a professional who’s worked only in above-average institutions and who can fix problems when he sees them, than it does to people collecting anecdotes from others who’ve been involuntarily committed.
History’s Biggest Natural Experiment
LW Beta Feature: Side-Comments
Open Thread With Experimental Feature: Reactions
Lots of the comments here are pointing at details of the markets and whether it’s possible to profit off of knowing that transformative AI is coming. Which is all fine and good, but I think there’s a simple way to look at it that’s very illuminating.
The stock market is good at predicting company success because there are a lot of people trading in it who think hard about which companies will succeed, doing things like writing documents about those companies’ target markets, products, and leadership. Traders who do a good job at this sort of analysis get more funds to trade with, which makes their trading activity have a larger impact on the prices.
Now, when you say that:
the market is decisively rejecting – i.e., putting very low probability on – the development of transformative AI in the very near term, say within the next ten years.
I think what you’re claiming is that market prices are substantially controlled by traders who have a probability like that in their heads. Or traders who are following an algorithm which had a probability like that in the spreadsheet. Or something like that. Some sort of serious cognition, serious in the way that traders treat company revenue forecasts.
And I think that this is false. I think their heads don’t contain any probability for transformative AI at all. I think that if you could peer into the internal communications of trading firms, and you went looking for their thoughts about AI timelines affecting interest rates, you wouldn’t find thoughts like that. And if you did find an occasional trader who had such thoughts, and quantified how much impact they would have on the prices if they went all-in on trading based on that theory, you would find their impact was infinitesimal.
Market prices aren’t mystical, they’re aggregations of traders’ cognition. If the cognition isn’t there, then the market price can’t tell you anything. If the cognition is there but it doesn’t control enough of the capital to move the price, then the price can’t tell you anything.
I think this post is a trap for people who think of market prices as a slightly mystical source of information, who don’t have much of a model of what cognition is behind those prices.
(Comment cross-posted with the EA forum version of this post)
Salvage Epistemology
Attributions, Karma and better discoverability for wiki/tag features
Transformative VR Is Likely Coming Soon
Karma-Change Notifications
There’s been a lot of previous interest in indoor CO2 in the rationality community, including an (unsuccessful) CO2 stripper project, some research summaries and self experiments. The results are confusing, I suspect some of the older research might be fake. But I noticed something that has greatly changed how I think about CO2 in relation to cognition.
Exhaled air is about 50,000 ppm CO2. Outdoor air is about 400 ppm; indoor air ranges from 500 to 1,500 ppm depending on ventilation. Since exhaled air has a CO2 concentration about two orders of magnitude larger than the variance in room CO2, if even a small percentage of inhaled air is reinhalation of exhaled air, this will have a significantly larger effect than changes in ventilation. I’m having trouble finding a straight answer about what percentage of inhaled air is rebreathed (other than in the context of mask-wearing), but given the diffusivity of CO2, I would be surprised if it wasn’t at least 1%.
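The arithmetic here can be made explicit with a quick back-of-the-envelope sketch. The rebreathed fraction below is an assumed illustrative value (the 1% floor guessed at above), not a measurement:

```python
# Back-of-the-envelope: how much does rebreathing exhaled air raise inhaled CO2?
EXHALED_PPM = 50_000          # exhaled breath is roughly 5% CO2
ROOM_PPM = 800                # a typical indoor reading
REBREATHED_FRACTION = 0.01    # assumption: 1% of each breath is re-inhaled exhaled air

# Mix of room air and re-inhaled exhaled air
effective_ppm = (1 - REBREATHED_FRACTION) * ROOM_PPM + REBREATHED_FRACTION * EXHALED_PPM
added_ppm = effective_ppm - ROOM_PPM

print(f"Effective inhaled CO2: {effective_ppm:.0f} ppm (+{added_ppm:.0f} ppm from rebreathing)")
# Rebreathing just 1% adds ~490 ppm, comparable to the entire
# 500-1,500 ppm spread attributed to ventilation quality.
```

Under these assumed numbers, the rebreathing term dominates: it contributes about as much CO2 as the whole difference between a well-ventilated and a poorly ventilated room.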
This predicts that a slight breeze, which replaces the air in front of your face and prevents reinhalation, would have a considerably larger effect than ventilating an indoor space where the air is mostly still. This matches my subjective experience of indoor vs. outdoor spaces, which, while extremely confounded, feels like an air-quality difference larger than CO2 sensors would predict.
This also predicts that a small fan, positioned so it replaces the air in front of my face, would have a large effect on the same axis as improved ventilation would. I just set one up. I don’t know whether it’s making a difference but I plan to leave it there for at least a few days.
(Note: CO2 is sometimes used as a proxy for ventilation in contexts where the thing you actually care about is respiratory aerosol, because it affects transmissibility of respiratory diseases like COVID and influenza. This doesn’t help with that at all and if anything would make it worse.)
I don’t know how most articles get into that section, but I know, from direct communication with a Time staff writer, that Time reached out and asked for Eliezer to write something for them.
I’m sure there are many people whose inner experience is like this. But, negative data point: Mine isn’t. Not even a little. And yet, I still believe AGI is likely to wipe out humanity.