Currently spending time on derisking research (see derisked.org). Previously worked at BERI, EpiFor/FHI, CEA, IPA. Generally US-based.
Josh Jacobson
It seems valuable to LOUDLY NOTE that Microcovid.org has not been updated for the Delta variant https://github.com/microcovid/microcovid/issues/869 and that the adjustment should be quite significant.
I’d be interested in perspectives on what adjustment should be implemented.
It’s interesting to tie some thoughts in his writing to EA, but based on just the evidence here, I’d object to calling him an EA.
I’d like to see that someone did significant good with their actions before calling them an EA, especially someone in a position of power.
His words, particularly on nukes, sound a lot more like prediction or speculation than advocacy to me:
If to these tremendous and awful powers is added the pitiless sub-human wickedness which we now see embodied in one of the most powerful reigning governments, who shall say that the world itself will not be wrecked, or indeed that it ought not to be wrecked? There are nightmares of the future from which a fortunate collision with some wandering star, reducing the earth to incandescent gas, might be a merciful deliverance.
It’s fun to call a famous figure an EA, but to me, identifying a risk in your writing = futurist, taking actions in pursuit of doing the most good you can = EA. I think to some doing things like calling famous figures EAs could be seen as the movement being spurious and status seeking, so I have a particular sensitivity to it that makes me want to flag this here.
Up until “Fuck The Symbols” I’m with you. And as an article for the general public, I’d probably endorse the “Fuck the Symbols” section as well.
In particular:
it’s usually worth at least thinking about how to do it—because the process of thinking about it forces you to recognize that the Symbol does not necessarily give the thing, and consider what’s actually needed.
To the extent this is advocacy, however, it seems worth noting that I think the highly engaged LW crowd is already often pretty good about this (so I'd be more excited about this being read by new LWers). In fact, in my experience, the highly-engaged LW crowd's bias is already too far toward "fuck the symbols".
There's a lot of information that can be gained by examining the symbols. For example, I think EA's efforts toward global development are highly stunted by a lack of close engagement with many existing efforts to do good. Working at a soup kitchen is probably not the best use of a poverty-focused EA's time. But learning about UN programs, the various development sectors and associated interventions, and the status and shortcomings of existing M&E very likely is (for those who haven't done so). Doing so revealed to me a myriad of interventions that I'd expect to be higher impact than those endorsed by GiveWell. The symbols often contain valuable information.
The symbols can also be useful. Ivy League MBAs probably have an easier time raising money for certain types of businesses than do others.
So ‘fuck the symbols’ just feels much too strong to me, and in fact in the opposite direction I’d advocate, for the particular audience reading this.
The tone of strong desirability for progress on WBE here surprised me. The author seems to treat progress on WBE as highly desirable, a perspective I expect most on LW do not endorse.
The lack of progress here may be a quite good thing.
An article published today on Reuters and elsewhere reads, “Israeli survey finds 3rd Pfizer vaccine dose has similar side effects to 2nd.” Buried within this article is the following:
About 0.4% said they suffered from difficulty breathing, and 1% said they sought medical treatment due to one or more side effect.
This seemed quite bad to me and like a worrisome result. I sought information on how many sought medical treatment after the second shot. I could not find this information, but I did find:
only 51 of some 650,000 people to have received the Pfizer shot sought medical attention for symptoms suffered
from a December 2020 article on Israeli vaccination. Comparing the 1% to 51/650000 ≈ 0.008%, I found that the current frequency of side effects requiring medical attention was roughly 128x the level found after dose 1. This seemed like a bad sign.
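As a quick sanity check, the comparison above in a few lines:

```python
# Sanity check of the side-effect ratio discussed above.
dose3_medical_rate = 0.01          # 1% sought medical treatment (Reuters survey)
dose1_medical_rate = 51 / 650_000  # ~0.008% sought medical attention (Dec 2020 article)

ratio = dose3_medical_rate / dose1_medical_rate
print(round(ratio))  # ≈ 127, i.e. roughly the ~128x stated above
```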
I then sought out more information about side effects post dose 2 in Israel, which I did not find. Instead, I looked at the CDC's Advisory Committee on Immunization Practices' Interim Recommendation for Moderna, and found the following:
The frequency of serious adverse events** observed was low in both the vaccine (1.0%) and placebo (1.0%) recipients
** Serious adverse events are defined as any untoward medical occurrence that results in death, is life-threatening, requires inpatient hospitalization or prolongation of existing hospitalization, or results in persistent disability/incapacity.
I can't believe that this was 1%! That seems surprisingly high (for either group). I expect the outside-of-trial rate has not been nearly that high.
This 1% matches the current Israeli data, and with a more restrictive definition, so the Israeli data no longer seems particularly worrisome in comparison, though I may dig in to this further. In general, I feel somewhat confused by the situation.
Sources: Reuters article from today—https://www.reuters.com/business/healthcare-pharmaceuticals/israeli-survey-finds-3rd-pfizer-vaccine-dose-has-similar-side-effects-2nd-2021-08-08/
Article from December 2020 - https://www.timesofisrael.com/1-in-1000-israelis-report-mild-side-effects-from-vaccine/
CDC’s Advisory Committee on Immunization Practices’ Interim Recommendation for Moderna: https://www.cdc.gov/mmwr/volumes/69/wr/mm695152e1.htm
EDIT: This article's statistics contrast with Reuters', and show data very similar to the 1st shot: https://www.timesofisrael.com/of-600000-israelis-who-received-3rd-dose-fewer-than-50-reported-side-effects/
Bringing over the outcome of a lot of recent discussion I’ve had on Facebook and some research I’ve done regarding the Narwall Mask:
- I believe there's currently a lot of uncertainty as to the effectiveness of the Narwall, with multiple meaningful reasons for there to be uncertainty. A lot of effectiveness outcomes would not surprise me. I do not believe it has been well-tested or well-analyzed, at least compared to masks that meet NIOSH standards.
- I think there's enough information out there to statistically estimate its effectiveness with some reasonable degree of confidence, but it would take me another 3-8 hours (on top of my existing research) to do so. Considering a P100 is just ~$30 for me, I've just switched to that + glasses when relevant for now instead. I think others should do the same if they can achieve good fit with a P100 (the Microcovid authors seem to think this can often be achieved): https://www.microcovid.org/paper/14-research-sources#masks
- I think there's a 75% chance that, after estimating its effectiveness, I'd find it to be meaningfully less effective than a P100 (e.g. less than 98.5% on the relevant filtration). I think there's a 50% chance I'd find it to be approximately equal to an N95 mask or worse.
Sharing this here because some LWers wear it, and I think there's some value in sounding a warning about the mask potentially not being as effective as most people likely anticipate.
But even if I’m wrong about that, that is, as I said, none of the FDA’s damn business. The FDA’s damn business is whether the booster shots are safe and effective or not.
Is this defined somewhere? I see the FDA and CDC doing this frequently, so I've assumed that part of their medical mandate is indeed to consider questions such as global supply. It is an odd separation of powers, with ambiguous overlap, where different groups decide on donation of vaccines, even across different types of vaccine (e.g. the CDC seems to have donated HPV vaccines, indirectly, in the past, and now the White House seems to be managing COVID vaccine supply and donation targets?). From what I know, responsibility for these decisions is inefficiently assigned.
I recently had what I thought was an inspired idea: a Google Maps for safety. This hypothetical product would:
Route you in a way that maximizes safety, and/or
Route you in a way that maximizes your safety and time-efficiency trade-off, according to your own input of the valuation of your time and your orientation toward safety
First, I wanted to validate that such a tradeoff between safety and efficiency exists. Initial results seemed to validate my prior:
The WHO says crashes increase 2.5% for every 1 km/h increase in speed.
The Insurance Institute for Highway Safety (IIHS) reports fatalities increase by 8.5% when there is a 5mph increase in speed limit on highways, and 2.8% for the same speed limit increase on other roads.
The National Safety Council (NSC) cites speed as a factor in 26% of crashes.
Despite these figures, I felt none of these, on its own, provided sufficient information to analyze the scale of safety gains to be had. The WHO source was outdated and without context (although there was a link to follow for more information that I didn't see at the time), the IIHS merely addressed increases in speed limits for two types of roads, rather than the actual changes in speed that result or the relative safety of the two road types, and the NSC provided a merely binary result.
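For intuition on the scale the WHO figure implies (my own illustration, assuming the 2.5%-per-km/h effect compounds multiplicatively; the WHO source may model it differently):

```python
# If crash risk rises 2.5% per 1 km/h of speed, and the effect compounds,
# a 10 km/h increase implies roughly 28% more crashes.
relative_risk = 1.025 ** 10
print(round(relative_risk, 2))  # ≈ 1.28
```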
So I went searching for more data.
And I discovered that the US National Highway Traffic Safety Administration (NHTSA) releases a shocking amount of data on every fatal car crash. There's useful data, such as what type of road the crash happened on, the nature of the collision, information on injuries and fatalities, whether alcohol was involved, etc.
(There's also a surprising amount of information that I expect might make some people uncomfortable. For every crash, this data includes the VIN of the vehicles involved and each driver's height, weight, age, gender, whether they owned the vehicle, and their driving and criminal history. It also includes the exact time, date, and location of the crash.)
I used the former (useful) information for analysis on this question. Given the initial data found, I figured that one way to approximate the available gains and tradeoffs was to analyze safety-gained from turning on the “Avoid Highways” setting on Google Maps.
After some experimentation and reading others' thoughts, it became clear that this setting avoids interstates (I-5, I-10, I-15, etc.) but not other types of highways. I used NHTSA data to calculate the number of deaths occurring on interstates vs. on other roads, and found that the Federal Highway Administration provides data on the number of miles driven in the US per year by type of road. Using these two sources of data, I calculated the number of miles driven per fatality on interstates vs. on all other roads (for 2019):
Interstates: ~180 million miles / fatality
All Other Roads: ~104 million miles / fatality
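For comparison with how road-safety statistics are usually reported, the two figures above can be converted into fatalities per 100 million vehicle-miles traveled (VMT):

```python
# Convert miles-per-fatality (from the analysis above) into the standard
# "fatalities per 100 million VMT" metric.
interstate_mpf = 180e6   # ~180 million miles per fatality on interstates
other_mpf = 104e6        # ~104 million miles per fatality on all other roads

def per_100m_vmt(miles_per_fatality):
    return 100e6 / miles_per_fatality

print(round(per_100m_vmt(interstate_mpf), 2))  # ≈ 0.56 fatalities per 100M VMT
print(round(per_100m_vmt(other_mpf), 2))       # ≈ 0.96 fatalities per 100M VMT
```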
It turns out that interstates appear to be (at least on this metric) safer than non-interstates! This was surprising to me, given the earlier cited results that pointed to speed being dangerous.
I decided that I'd do more validation of this result if it was surprising to most people, but wouldn't perform more validation otherwise. Asking around (in Effective Altruism Polls, the EA Corner Discord, and the LessWrong Slack), it looks like this result is not surprising to most.
So first of all, good job community, on seemingly being calibrated. Second, I followed my earlier plan and did not look further into this result given that it was aligned with most people’s priors. And finally, I do think this makes the expected value of a Google Maps for safety significantly lower than my prior.
Assuming this result would hold through further validation, there are still ways that a Google Maps for safety could be beneficial. A few examples of this:
Seeing if there are other road-type routing rules that would provide safer outcomes.
Using more specific data, such as crash reports by road, to identify particularly dangerous roads / intersections and avoid them.
There seem to be some behavioral-economics-like results in road safety that could be leveraged during route design. For example, roads with narrower lanes are apparently safer than roads with wide lanes, presumably because narrower lanes cause people to drive more slowly, and the safety gain from the lower speed outweighs any increase in accident rate.
Digging further into data on factors that contribute to crashes (alcohol, weather, distraction, evening, etc.) could reveal patterns that provide clues as to the safer route by situation.
I think this could be a really cool app to have, and I’d support its development if someone were to take it on, but it seems like a big project. I was sad and surprised to find that the potential quick win of turning on the “avoid highways” option is seemingly not a win at all (although there exist confounders and further validation would be beneficial).
The Wait But Why article "Life is a Picture, But You Live in a Pixel" makes this same point and is what caused me to start explicitly evaluating jobs this way years ago.
A good read: https://waitbutwhy.com/2013/11/life-is-picture-but-you-live-in-pixel.html
- Cryopreservation causes lots of damage, always. What would this show?
- Brain biopsies, especially by cryo staff, sound dangerous.
- Is there any indication that cryo companies would comply with this? What would the associated costs be?
Epistemic status: just speculation, from a not very concrete memory, written hastily on mobile after a quick skim of the post.
My guess is that these results should be taken with a large grain of salt, but if I’m wrong, I’d be interested in hearing more about why.
Specifically, I think the “alignment researcher” population and “org leader” populations here are probably a far departure from what people envision when they hear these terms. I also expect other populations reported on to have a directionally similar skew to what I speculate below.
An anecdote for why I expect that (some aspects may be off):
I started the survey, based on the description that it'd be decently short. I found it long and involved, and it asked various questions (marked as required) that I really wasn't interested in answering (nor interested in the results of). IIRC, the question phrasing was also lacking in various ways. I accordingly abandoned it, while seeing there was still a long way to go to completion.
One additional factor for my abandoning it was that I couldn’t imagine it drawing a useful response population anyway; the sample mentioned above is a significant surprise to me (even with my skepticism around the makeup of that population). Beyond the reasons I already described, I felt that it being done by a for-profit org that is a newcomer and probably largely unknown would dissuade a lot of people from responding (and/or providing fully candid answers to some questions).
All in all, I expect that the respondent population skews heavily toward those who place a lower value on their time and are less involved. I expect this to generally be a more junior group, often not fully employed in these roles, with, e.g., the average age and funding level of the orgs being led particularly low (and some of the orgs being more informal).
That’s a very legitimate and useful population to survey; I just think it also isn’t at all what people typically think of when hearing these terms.
I could be wrong about all of this! But my guess is it’s directionally useful for understanding this post.
FWIW, I do not think that Alcor > CI represents a consensus opinion; when I investigated this question ~1 year ago, it seemed likely to me that there was little difference other than cost (CI wins) and financial sustainability (Alcor wins).
I personally don’t believe most other differences are meaningful (especially e.g. profusion quality), although I’m not an expert on many aspects of this.
Beyond eventual self-funding, there are other reasons to potentially consider a term policy:
- Even if you do not expect to self-fund, if your financial assets will increase in the future and are low right now, the much lower term cost may be worthwhile. I pay $10/month for my term coverage, and I would not have opted in to the ~$100/month average you project elsewhere.
- If you expect technological progress to greatly accelerate during your lifetime (e.g. short AGI timelines, or curing all disease), you may be primarily interested in coverage for the next ~20 years vs. after that time.
Would anyone like the domain alignai.org ? Otherwise I’ll probably let it expire (bought for a previous org, which doesn’t want it).
EDIT (7/27/23) After very preliminary research, I now think “telling people to ride in SUVs or vans instead of sedans” may turn out to be worthwhile.
As I’m working on derisking research, I’m particularly aware of what I think of as “whales”… risks or opportunities that are much larger in scale than most other things I’ll likely investigate.
There are some things that I consider to be widely-known whales, such as diet and exercise.
There are others that I consider to be more neglected, and also less certain to be large scale (based on my priors). Air quality is the best example of this sort of whale, though 3-8 other potential risks or interventions are on my mind as candidates for this, and I won’t be surprised to discover a couple whales that did not seem to be so prior to investigation.
I thought that road safety and driving was a widely-known whale. Based on a preliminary investigation (more on what this means), I now tentatively think it is not.
This preliminary analysis yielded an expected ~17 days of lost life as a result of driving for an average 30 year old in the US over the next 10 years.
I'm not sure how many of these 17 days an intervention could capture. I suspect most likely readers of what I'd write already grab the low-hanging fruit of e.g. not driving while impaired and wearing a seatbelt. So it does not seem probable that I would discover an intervention that alleviated even 30% (~5 days) of the risk. Furthermore, I suspect most interventions in this space could have large inconvenience or time costs, further reducing the expected gain from my research in this space.
While this analysis does neglect loss of QALDs due to injury, which I don’t know the scale of, I predict they are unlikely to greatly affect this conclusion.
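As a sanity check, a Fermi sketch with round, assumed inputs (my own numbers, not necessarily those used in the preliminary analysis) lands in the same ballpark as the ~17 days figure:

```python
# Rough reproduction of the ~17 days estimate with assumed round inputs:
# ~36,000 US motor-vehicle deaths per year over a ~330M population, and
# ~50 years of remaining life expectancy for a 30-year-old.
annual_fatality_risk = 36_000 / 330e6
p_death_10yr = 1 - (1 - annual_fatality_risk) ** 10
remaining_life_days = 50 * 365

expected_days_lost = p_death_10yr * remaining_life_days
print(round(expected_days_lost))  # ≈ 20, the same ballpark as ~17 days
```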
The 10 year timeframe may seem odd to some. But if we assume that self-driving cars of a certain ability level will greatly increase the safety of vehicle travel, which I personally believe, then 10 years may be even longer than the relevant window for investigation. Metaculus predicts L3 autonomous vehicles by the end of 2022, L4 autonomous vehicles by the end of 2024, and L5 autonomous vehicles by mid 2031. It’s not entirely clear to me at which of these stages most of the safety benefits are likely to occur, nor how long widespread use will take after these are first available, but it does seem to me as though the dangers of car travel, at least for most people who are likely to read my content, will not persist long into the future.
I have some context for effect sizes I think I’m likely to find with various interventions. I have preliminary estimates for interventions affecting air quality & nuclear risk, and more certain estimates for interventions on smoke detectors and HPV vaccination. With that context, road safety does not seem to particularly differentiate itself from much else I expect to investigate. With this discovery that road safety does not seem to be a ‘whale’, I tentatively think I will not further investigate it in the near future.
This is a follow-up to https://www.lesswrong.com/posts/RRoCQGNLrz5vuGQYW/josh-jacobson-s-shortform?commentId=pZN32PZQuBMHtM8aS , where I noted that I found the following sentence in an article about an Israeli study on 3rd shot boosters:
About 0.4% said they suffered from difficulty breathing, and 1% said they sought medical treatment due to one or more side effect.
worrisome, and how I reconciled it.
When I posted that, I reached out to Maayan Hoffman, one of the authors of the original Israeli article, with these observations. She found these interesting enough that she reached out to Ran Balicer, the head of the study (Head of Research at Clalit Health), with my observations, and then she forwarded his response to me:
We used ACTIVE screening for AE—we surveyed 22% of the vaccinees. [The other report cited] (https://www.timesofisrael.com/of-600000-israelis-who-received-3rd-dose-fewer-than-50-reported-side-effects/) [includes] PASSIVE reports of AE that the vaccinees choose to share with the reporting system. These are complementary systems. Just like in the US and other countries. Both are important. … What we did is quite unprecedented. In terms of timing (same day—proactive calls—data gathering—analysis—informing the public). On 4500! People − 22% of all those with 7d experience after the 3rd shot. Even in Covid—I don’t think anyone has achieved anything like this. A clear message for the public to get vaccinated.
My thoughts:
- There's still something uncomfortable to me about the 0.4% having difficulty breathing. Based on what I cited previously from the Moderna study, and this additional context of active monitoring, the 1% seeking medical attention seems notable but not a big deal (after all, it matches placebo in the Moderna trial). It was still an update vs. my expectations when originally seeing it.
- I think this makes me mildly more hesitant than before about the booster shot, but I still strongly believe the booster shot is worthwhile (in isolation, i.e. not considering global fungibility). Also, it's not at all clear that this result is unique to booster recipients vs. earlier vaccine reactions.
A dialogue between myself and Ruby that may be of interest (shared with permission):
Ruby: A question: why do you set single-car crashes to zero?
My response: It seemed you were interested in something like “if you’re a safe person, how safe is driving”, and I thought single-car accidents may be particular indicators of being ‘unsafe’ in some ways. I’d be happy to add calculations that include single-car accidents as well.
Ruby: Maybe, but there are reasons why a safe driver might be in a single-car crash too:
- hit a pothole
- lost traction in bad weather (rain, snow)
- swerved out of the way of another car (is that a 1-car or 2-car crash?) or of a pedestrian/animal/whatever
- general car malfunction (blown tire, steering, brakes)
My response: Yeah I think what constitutes a ‘safe’ driver is pretty unknown, and I wasn’t ultimately sure what adjustment to make. A perfectly safe driver, for instance, could arguably prevent each of these examples. Additionally, it’s likely an oversimplification to remove all of a single driver’s share of distracted and drowsy driving crashes, as there’s likely some percentage of those that are unavoidable.
It’s interesting that you cite last year as evidence of your trading going well, at a 13.5% gain, while the S&P 500 (SPY) total return for 2021 was 28.7%. Can you elaborate on your perspective given that the market performed so well in general?
I’ve left relevant comments on a number of the sections, but I think it’s worth strongly emphasizing that you can have a much different experience than this sequence outlines! And having this different experience can be a very reasonable choice to make.
As someone financially constrained, who has high uncertainty on his finances and the state of technology 20+ years from now:
I pursued term life insurance; it was fast, easy and cheap. I pay ~$10 / month for my cryo coverage, with the rate locked in for the next 20 years. All three providers I moved forward with were compatible with cryo, around the same price, and easy to work with. The policy I settled on is with Haven Life. I expect every insurance policy is compatible with the Cryonics Institute; they work with you to find a solution, and there are many. See this comment for why term life insurance can be a good choice: https://www.lesswrong.com/posts/NPDSB3WEEAb8Swuyc/4-1-types-of-life-insurance?commentId=5sXoDYZzRr2AcafeF
I went with CI, and paid the lifetime membership fee. A post in this sequence estimates that cost as equivalent to $2 / month. If I accept that, my total financial outlay is $12 / month for cryo coverage for the next 20 years; this is much cheaper (although also potentially less feature-rich) than the over $100 / month this sequence provides guidance to obtaining.
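Spelling out the arithmetic over the 20-year term:

```python
# 20-year cost comparison implied by the figures above.
my_monthly = 12      # $10/month term policy + ~$2/month-equivalent CI lifetime fee
guide_monthly = 100  # the ~$100/month this sequence's guidance implies
years = 20

print(my_monthly * 12 * years)     # 2880  -> ~$2,900 total
print(guide_monthly * 12 * years)  # 24000 -> ~$24,000 total
```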
Going with CI can be a very reasonable decision. Not only can it be significantly more affordable, but I personally don't believe there are meaningful differences in cryopreservation quality (it's all very bad and will require approximately equally advanced technology to reanimate). Furthermore, if you have short timelines, financial sustainability is less likely to differentiate the two (it's more likely both last for 30 years than for 500 years).
Many of the “optional additional steps” were a built-in part of the CI sign-up process, in my case.
Additionally, there are many more cryopreservation options and optional next steps you can potentially take. CI informs you of some of those (Alcor may as well) and there’s a lot of unique information shared in this FB group: https://www.facebook.com/groups/cryonicists/