Nice! That is a pretty good fit for the sorts of things the Telephone Theorem predicts, and potentially relevant information for selection theorems as well.
It’s not that I don’t want to believe it, it’s that long covid is the sort of thing I’d expect to hear people talk about and publish papers about even in a world where it isn’t actually significant, and many of those papers would have statistically-significant positive results even in a world where long covid isn’t actually significant. Long covid is a story which has too much memetic fitness independent of its truth value. So I have to apply enough skepticism that I wouldn’t believe it in a world where it isn’t actually significant.
No, these problems are most probably caused by a lack of oxygen getting through to tissues.
That sounds right for shortness of breath, chest pain, and low oxygen levels. I’m more skeptical that it’s driving palpitations, fatigue, joint and muscle pain, brain fog, lack of concentration, forgetfulness, sleep disturbance, and digestive and kidney problems; those sound a lot more like a list of old-age issues.
There’s definitely some truth to this, but I guess I’m skeptical that there isn’t anything we can do about some of these challenges. Actually, rereading, I can see that you’ve conceded this towards the end of your post. I agree that there might be a limit to how much progress we can make on these issues, but I think we shouldn’t rule out making progress too quickly.
To be clear, I don’t intend to argue that the problem is too hard or not worthwhile or whatever. Rather, my main point is that solutions need to grapple with the problems of teaching people to create new paradigms, and working with people who don’t share standard frames. I expect that attempts to mimic the traditional pipelines of paradigmatic fields will not solve those problems. That’s not an argument against working on it, it’s just an argument that we need fundamentally different strategies than the standard education and career paths in other fields.
“Baseline” does not mean they stick around. It means that background processes introduce new SnCs at a steady rate, so the equilibrium level is nonzero. As the removal rate slows, that equilibrium level increases, but that still does not mean that the “baseline” SnCs are long-lived, or that a sudden influx of new SnCs (from e.g. covid) will result in a permanently higher level.
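To make that explicit with a toy model (my own sketch; $p$ and $k$ are stand-in parameters, not numbers from any source): if new SnCs appear at a background rate $p$ and each is cleared at per-cell rate $k$, then the count $N$ follows

$$\frac{dN}{dt} = p - kN, \qquad N^* = \frac{p}{k}.$$

Slower removal (smaller $k$) raises the equilibrium $N^*$, but a one-time influx $\Delta N$ (from e.g. covid) still decays back toward $N^*$ as $\Delta N\, e^{-kt}$, i.e. on the same turnover timescale $1/k$.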
At this point, I have yet to see any compelling evidence that any SnCs stick around over a long timescale, despite this being a thing which I’d expect to have heard about if anybody had the evidence. Conversely, it sure does look like treatments to remove senescent cells have to be continuously administered; a one-time treatment wears off on roughly the same timescale that SnCs turn over. That pretty strongly suggests that there are not pools of long-lived SnCs hanging around. And a noticeable pathology would take a lot of SnCs sticking around.
That is not how senescent cells work. They turn over on a fast timescale. If covid induces a bunch of senescent cell development (which indeed makes sense), those senescent cells should generally be cleared out on a timescale of weeks. Any long-term effects would need to be mediated by something else.
Note to self: use infinitely many observable variables $X_i$ instead of just two, and the condition for $U^*$ should probably be that no infinite subset of the $X$’s are mutually dependent (or something along those lines). Intuitively: for any “piece of latent information”, either we have infinite data on that piece and can precisely estimate it, or it only significantly impacts finitely many variables.
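To gesture at one possible formalization of that intuition (the quantifiers here are my guess, not anything pinned down yet): for any latent $\Lambda$, we’d want

$$\text{either}\quad \Lambda = \lim_{n \to \infty} f_n(X_1, \dots, X_n)\ \text{a.s. for some estimators } f_n, \quad\text{or}\quad \Lambda \perp (X_i)_{i \notin F}\ \text{for some finite } F,$$

i.e. each piece of latent information is either pinned down exactly in the infinite-data limit or carried by only finitely many of the $X$’s.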
Sorry, I was lumping together misattribution and the like under “psychosomaticity”, and I probably shouldn’t have done that.
This mostly sounds like age-related problems. I do expect generic age-related pathologies to be accelerated by covid (or any other major stressor), but if that’s the bulk of what’s going on, then I’d say “long covid” is a mischaracterization. It wouldn’t be relevant to non-elderly people, and to elderly people it would be effectively the same as any other serious stressor.
The object-level claims here seem straightforwardly true, but I think “challenges with breaking into MIRI-style research” is a misleading way to characterize it. The post makes it sound like these are problems with the pipeline for new researchers, but really these problems are all driven by challenges of the kind of research involved.
The central feature of MIRI-style research which drives all this is that MIRI-style research is preparadigmatic. The whole point of preparadigmatic research is that:
We don’t know the right frames to apply (and if we just picked some, they’d probably be wrong)
We don’t know the right skills or knowledge to train (and if we just picked some, they’d probably be wrong)
We don’t have shared foundations for communicating work (and if we just picked some, they’d probably be wrong)
We don’t have shared standards for evaluating work (and if we just picked some, they’d probably be wrong)
Here’s how the challenges of preparadigmatic research apply to the points in the post.
MIRI doesn’t seem to be running internships or running their AI safety for computer scientists workshops
MIRI does not know how to efficiently produce new theoretical researchers. They’ve done internships, they’ve done workshops, and the yields just weren’t that great, at least for producing new theorists.
You can park in a standard industry job for a while in order to earn career capital for ML-style safety. Not so for MIRI-style research.
There are well-crafted materials for learning a lot of the prerequisites for ML-style safety.
There seems to be a natural pathway of studying a master’s and then pursuing a PhD to break into ML-style safety. There are a large number of scholarships available, and many countries offer loans or income support.
General AI safety programs and support (i.e. the AI Safety Fundamentals Course, AI Safety Support, AI Safety Camp, the Alignment Newsletter, etc.) are naturally going to strongly focus on ML-style research and might not even have the capability to vet MIRI-style research.
There is no standardized field of knowledge with the tools we need. We can’t just go look up study materials to learn the right skills or knowledge, because we don’t know what skills or knowledge those are. There’s no standard set of alignment skills or knowledge which an employer could recognize as probably useful for their own problems, so there’s no standardized industry jobs. Similarly, there’s no PhD for alignment; we don’t know what would go into it.
There’s no equivalent to submitting a paper. If a paper passes review, then it gains a certain level of credibility. There are upvotes, but that signaling mechanism is more heavily distorted by popularity and accessibility. Further, unlike writing an academic paper, writing Alignment Forum posts won’t provide credibility outside of the field.
We don’t have clear shared standards for evaluating work. Most people doing MIRI-style research think most other people doing MIRI-style research are going about it all wrong. Whatever perception of credibility might be generated by something paper-like would likely be fake.
It is much harder to find people with similar interests to collaborate with or mentor you. Compare to how easy it is to meet a bunch of people interested in ML-style research by attending EA meetups or EAGx.
We don’t have standard frames shared by everyone doing MIRI-style research, and if we just picked some frames they would probably be wrong, and the result would probably be worse than having a wide mix of frames and knowing that we don’t know which ones are right.
Main takeaway of all that: most of the post’s challenges of breaking into MIRI-style research accurately reflect the challenges involved in doing MIRI-style research. Figuring out new paths, new frames, applying new skills and knowledge, explaining your own ways of evaluating outputs… these are all central pieces of doing this kind of research. If the pipeline did not force people to figure this sort of stuff out, then it would not select for researchers well-suited to this kind of work.
Now, I do still think the pipeline could be better, in principle. But the challenge is to train people to build their own paradigms, and that’s a major problem in its own right. I don’t know of anyone ever having done it before at scale; there’s no template to copy for this. I have been working on it, though.
Strong upvote, this is great info.
Good points. Some responses:
I put a lot more trust in a single study with ground-truth data than in a giant pile of studies with data which is confounded in various ways. So, I trust the study with the antibody tests more than I’d trust basically-any number of studies relying on self-reports. (A different-but-similar application of this principle: I trust the Boston wastewater data on covid prevalence more than I trust all of the data from test results combined.)
I probably do have relatively high prior (compared to other people) on health-issues-in-general being psychosomatic. The effectiveness of placebos (though debatable) is one relevant piece of evidence here, though a lot of my belief is driven by less legible evidence than that.
I expect some combination of misattribution, psychosomaticity, selection effects (e.g. looking at people hospitalized and thereby accidentally selecting for elderly people), and maybe similar issues which I’m not thinking of at the moment to account for an awful lot of the “long covid” from self-report survey studies. I’m thinking less like 50% of it, and more like 90%+. Basically, when someone runs a survey and publishes data from it, I expect the results to mostly measure things other than what the authors think they’re measuring, most of the time, especially when an attribution of causality is involved.
Good point. If we take that post’s analysis at face value, then a majority of reported long covid symptoms are probably psychosomatic, but only just barely a majority, not a large majority. Though looking at the post, I’d say a more accurate description is that at least a majority of long covid symptoms are psychosomatic, i.e. it’s a majority even if we pretend that all of the supposedly-long-covid symptoms in people who actually had covid are “real”.
This is not going to be kind, but it’s true and necessary to state. I apologize in advance.
Had you asked me in advance, I would have said that Katja in particular is likely to buy into long covid even in a world where long covid is completely psychosomatic; I think you (Katja) are probably unusually prone to looking-for-reasons-to-”believe”-things-which-are-actually-psychosomatic, without symmetrically looking-for-reasons-to-”disbelieve”.
On the object level: the “Long covid probably isn’t psychosomatic” section of the post looks pretty compatible with that prior. That section basically says two things:
Just because reports of long covid are basically uncorrelated with having had covid does not imply that long covid does not happen
There is still evidence of higher-than-usual death rates among people who have had covid
If we take both of these as true, they point to a world where there are some real post-covid symptoms, but the large majority of reported long covid symptoms are still psychosomatic. That seems plausible, but for some reason it isn’t propagated into the other sections of the post. For instance, the very first sections of this post are talking about anecdotes and survey studies (at least I think they’re survey studies based on a quick glance, didn’t look too close), and I do not see in any of those sections any warning along the lines of “BY THE WAY THE LARGE MAJORITY OF THIS IS PROBABLY PSYCHOSOMATIC”. You’re counting evidence which should have been screened off by the lack of correlation between self-reported long covid symptoms and actually having had covid.
This was a concept which it never occurred to me that people might not have, until I saw the post. Noticing and drawing attention to such concepts seems pretty valuable in general. This post in particular was short, direct, and gave the concept a name, which is pretty good; the one thing I’d change about the post is that it could use a more concrete, everyday example/story at the beginning.
That might work in a tiny world model with only two possible hypotheses. In a high-dimensional world model with exponentially many hypotheses, the weight on happy humans would be exponentially small.
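As a toy illustration (the numbers are mine): with $n$ independent binary features, there are $2^n$ hypotheses, and anything like a uniform prior puts weight on the order of $2^{-n}$ on any single one of them, such as “happy humans”. At $n = 100$, that’s already around $10^{-30}$.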
Simulacra levels were probably the biggest addition to the rationalist canon in 2020. This was one of maybe half-a-dozen posts which I think together cemented the idea pretty well. If we do books again, I could easily imagine a whole book on simulacra, and I’d want this post in it.
A lot of useful techniques can be viewed as ways to “get the first sample” in some sense. Fermi estimates are one example. Attempting to code something in Python is another.
(I’m not going to explain that properly here. Consider it a hook for a future post.)