Some lists that people have made of products that they use & recommend:
Sam Bowman, 2017
Sam Bowman, 2019
Robert Wiblin, 2019
Arden Koehler, 2019
Rosie Campbell, 2019
The Time Timer Audible Countdown Timer.
This is the timer that I like to use when working, e.g. if I decide “alright, I’m going to spend the next half hour working on this thing.” It is a visual timer, where the fraction of the circle that is red tells you what fraction of an hour is left. Ignore its bizarre name: its best feature is that it is completely inaudible.

Features that I like:
- it counts down silently, without any ticking
- I can (and do) set it to end silently, without any alarm sound
- it is easy to tell at a glance about how much time is left
- it is quick & straightforward to set the timer, without any button pressing
- it is a physical object rather than a program on a computing device

Features that it lacks which some people might miss:
- you can’t choose a nice sound for the alarm; either it’s silent or there’s the one kinda annoying alarm sound
- it is not a program on your computing device, but rather a separate object you need to have with you
- it can’t be set to more than an hour
- it can’t be set precisely
The economic argument seems wrong in the “Burning coca leaves won’t win the war” section.
The total amount of a good that consumers buy must be less than or equal to the amount that is produced (and not destroyed). So if enough of the crop gets destroyed, then less of it will get consumed. And that will happen regardless of whether the suppliers are in a competitive market, are part of a monopsony, or threaten people with guns.
I framed this in terms of quantities rather than prices because the argument seems more straightforward this way, and because reducing the quantity sold seems more directly related to what anti-drug folks care about than raising the price. Besides, the street price for US consumers would presumably go up if the availability went down, since the people who sell drugs to consumers would be able to make more profit by raising their prices.
If there are problems with the economic argument in the post, that doesn’t necessarily mean the conclusion is wrong. “Burning lots of coca crops will have little to no effect on the price or quantity of cocaine in the US” does seem plausible, mainly because producers can just grow a lot more coca leaves than they need. Producers can predict in advance that lots of their crop might get destroyed (or their product lost in transit or similar), and growing coca leaves is not that expensive relative to their operation, so they can add a lot of slack by growing more than they need. (This doesn’t depend on monopsony or violence.)
One obviously mistaken model that I got a lot of use out of during a stretch of Feb-Mar is the one where the cumulative number of coronavirus infections in a region doubles every n days (for some globally fixed, unknown value of n).
This model has ridiculous implications if you extend it forward for a few months, as well as various other flaws. I was aware of those ridiculous implications and some of those other flaws, and used it anyways for several days before trying to find less flawed models.
I’m glad that I did, since it helped me have a better grasp of the situation and be more prepared for what was coming. And I don’t think it would’ve made much difference at the time if I’d learned more about SEIR models and so on.
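As a rough sketch of why the model’s implications get ridiculous at long horizons (the starting count and doubling time here are made-up illustrative numbers, not figures from the comment):

```python
# A deliberately simple model: cumulative infections double every n days.
# (Known to be wrong at long horizons, as noted above.)
def projected_infections(initial, doubling_days, days_ahead):
    return initial * 2 ** (days_ahead / doubling_days)

# Starting from a hypothetical 100,000 infections with a 4-day doubling time,
# extending the model ~3 months ahead overshoots the world population:
print(projected_infections(1e5, 4, 90))  # ~5.9e11, far more than ~7.8e9 people
```

Over a span of days to a few weeks, though, the same formula gives usable ballpark projections, which is roughly how the model earned its keep in Feb-Mar.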
It’s unclear how examples like this are supposed to fit with the One Mistake Rule or the exceptions in the last paragraph.
This seems important.
Another feature of competitive markets is that “not betting” is always available as a safe default option. Maybe that means waiting to bet until some unknown future date when your models are good enough, maybe it means never betting in that market. In many other contexts (like responding to covid-19) there is no safe default option.
In the broader rationality/EA community there was also a Siderea post on Jan 30 and an 80K podcast on Feb 3 (along with a followup podcast on Feb 14).
These two, plus Matthew Barnett’s late Jan EA Forum post (which you linked), are the three examples I recall which look most like early visible public alarms from the rationality/EA community.
Other writing was less visible (e.g., on Twitter, Facebook, or Metaculus), less alarm-like (discussions of some aspect of what was happening rather than a call to attention), or later (like the putanumonit Seeing the Smoke post on Feb 27).
I think this post is giving the stock market too much credit.
I’d date the start of the stock market fall as February 24 rather than February 20. The S&P close on Feb 20 & Feb 21 was roughly the same as it had been over the previous couple weeks, and higher than the close on Feb 7, 5, 4, or 3. The first notable dip happened on February 24th; that was the first day that set a low for the month of Feb 2020 (and Feb 25 was the first day that set a low for calendar year 2020).
Also, that was just the start of the crash. The stock market continued falling sharply and erratically for a couple more weeks, and didn’t get within 10% of its current level until March 12th (2.5 weeks after it started its fall on Feb 24).
This is now my favorite way to read HPMOR. I love the Star Wars feel.
I think Scott linked to Pueyo’s essay as an illustration of the ideas, not as the source from which the smart people got the ideas.
Which means that this post’s attempt to track & evaluate the information flows is working off of an inaccurate map of how information has flowed.
Keep in mind that the trend in the number of confirmed cases only provides hints about the trend in new infections. The number of confirmed cases is highly dependent on the amount of testing, and increases in testing capacity will tend to lead to more confirmed cases. Also, there is a substantial delay between when a person is infected and when they test positive, typically somewhere in the range of 1-2 weeks (with the length of the delay also depending on the testing regime).
I think that’s right. Although the data still can tell us something after we get into that ambiguous range where it’s hard to distinguish increasing covid and decreasing flu.
One nice thing about this pattern is that it provides some evidence that the anti-covid interventions are reducing the spread of fever-inducing diseases. And the size of the drop in total fevers tells us something about how well they’re working on the whole, even if it doesn’t tell us the precise trend in covid cases.
Another thing that might be possible is to find other sources of data on the actual prevalence of flu, and use that to come up with a better “baseline” which reflects actual current conditions rather than an estimate of the trendline in the counterfactual world where there was no coronavirus pandemic.
A third thing is that 0 is a lower bound on the number of non-covid fevers, so the trend in total fevers is an upper bound on the number of covid cases.
This third thing already tells us something about Seattle (King County). Their peak in excess fevers happened March 9 at 1.76 scale points (observed minus expected), and the March 22 data show the total fevers at 2.77 scale points. As an upper bound, if those are all covid fevers, that is 1.6x as many new daily cases on March 22 compared to March 9. That’s 13 days, and not even a full doubling in the number of daily new fevers. Which suggests that suppression there is either working or coming very close to working (even though the number of confirmed cases has kept curving upward, at least through March 21).
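One way to sanity-check those numbers (the implied-doubling-time step at the end is my own extra step, not a figure from the data source):

```python
import math

# King County fever data cited above (in Kinsa's "scale points")
peak_excess = 1.76   # Mar 9: observed minus expected fevers
later_total = 2.77   # Mar 22: total fevers (an upper bound on covid fevers)
days = 13            # Mar 9 to Mar 22

ratio = later_total / peak_excess
print(round(ratio, 2))  # ~1.57, i.e. roughly 1.6x

# Worst-case implied doubling time, if all Mar 22 fevers were covid:
doubling_time = days * math.log(2) / math.log(ratio)
print(round(doubling_time, 1))  # ~19.9 days, far slower than early uncontrolled spread
```

Even under the worst-case assumption, a ~20-day doubling time is consistent with suppression working or nearly working.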
If you look at the time series for King County (Seattle area), it shows a spike peaking on March 9 with the upward trend beginning sometime around Feb 28 - Mar 2.
I think the pattern of a spike and then flattening & maybe decline (which has happened at different times in different regions) reflects a drop in the number of influenza cases, as people’s anti-covid precautions also prevent flu transmission. So the baseline estimate of how many new fevers there would be if there wasn’t a coronavirus pandemic doesn’t actually represent the number of non-covid fevers, because there are fewer non-covid fevers than there would’ve been without this pandemic.
Elizabeth’s comment also describes this.
Kinsa, a company that sells smart thermometers, has a dashboard that shows which regions of the US have an unusually high number of fevers. They have previously used these methods to track regional flu trends in the US. (FitBit has done something similar.)
I wrote a post here describing my attempt to turn their data into a rough estimate of the total number of coronavirus infections in the United States. Something similar could be done for smaller regions.
I agree that a lot could be done with those sorts of data.
One company that already is making some use of a similar dataset is Kinsa, who sells smart thermometers. They started a few years ago, tracking trends in the flu in the US based on the temperature readings of the people using their thermometers (along with location, age, and gender). Now they have a coronavirus tracking website up. It looks like the biggest useful thing that they’ve been able to do so far with their data is to quickly identify hotspots—parts of the country where there has been a spike in the number of people with a fever. That used to be a sign of a local flu outbreak, now it’s a sign of a local coronavirus outbreak. From the NYTimes:
Just last Saturday, Kinsa’s data indicated an unusual rise in fevers in South Florida, even though it was not known to be a Covid-19 epicenter. Within days, testing showed that South Florida had indeed become an epicenter.
Companies like Fitbit could make a similar pivot, looking to see if they can find atypical trends in their data in the Seattle area Feb 28 - Mar 9, the Miami area Mar 2-19, etc. And they might be able to take the extra step of identifying new indicators that help identify individuals who may have coronavirus (unlike Kinsa, for whom high body temperature was already a known indicator).
There are potentially a bunch more useful things that could be done with all of these datasets, if more researchers had access to them. For example, it might be possible to get much more accurate estimates of the number of people who have been infected with coronavirus. I may make another post about this soon.
Has there been research from other similarish diseases breaking down the household secondary attack rate by relevant variables? It seems like there could be large differences between:
romantic partners who sleep in the same bed vs. housemates who sleep in different rooms
circumstances where the household has heightened concerns and is taking precautions vs. unsuspecting households
situations where people are removed from the household shortly after they’re infected vs. households where people continue to live after infection
Group houses are mostly in the safer of the two possibilities for the first 2 of these 3.
I was looking at this paper (for other reasons) and saw that it estimated a mean serial interval of 6.3 days in Shenzhen while there was aggressive testing, contact tracing, and isolating. They report a mean serial interval of 3.6 days among patients who were infected by someone who was isolated within 2 days of symptom onset, and 8.1 days among patients who were infected by someone who wasn’t isolated until 3+ days after symptom onset, for an overall average of 6.3 days in their population. They also found R = 0.4, i.e., an average of 0.4 known transmissions from each infected person.
This paper looks at cases which were confirmed in Shenzhen (Guangdong, China) Jan 14 - Feb 12, which is while coronavirus was being brought under control there (by the end of the study the cases had fallen to less than 1⁄3 of their peak). I suspect that they qualify for point 1, a place with an unusually good testing regime.
The paper reports that “Cases detected through symptom-based surveillance were confirmed on average 5.5 days (95% CI 5.0, 5.9) after symptom onset (Figure 3, Table S2); compared to 3.2 days (95% CI 2.6,3.7) in those detected by contact-based surveillance”, and also that the median incubation period was 4.8 days from infection to symptom onset (in the smaller sample where both of those dates were known).
Adding 5.5+4.8, that implies that an average of 10.3 days passed between when a person became infected and when they tested positive for cases detected based on symptoms, and 8.0 days for those detected by contact tracing. Since the paper reports that 77% of cases were detected through symptom-based surveillance, that gives an overall average of 9.8 days. (And this is only for the cases that were detected; it’s not adjusting at all for people who were infected but never got a positive test.)
That means that in places where testing is as good as it was in Shenzhen, then the number of positive tests is telling us about the number of infections 9.8 days ago. If the number of cases in that region is doubling every 4 days, then that’s 2.4 doublings, so the number of confirmed cases would only be 18% of the actual number of cases due to the delay in testing (again, without factoring in people who never got tested). (With a 3 day doubling period it would be 10%, with a 5 day doubling period 26%.)
So in places that don’t have a good testing regime, the fraction of infections showing up as confirmed cases would be significantly smaller than that.
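The arithmetic above can be reproduced in a few lines (all input figures are the ones reported in the Shenzhen paper as quoted above):

```python
# Average delay from infection to positive test, Shenzhen figures:
symptom_delay = 5.5       # days from symptom onset to confirmation (symptom-based surveillance)
contact_delay = 3.2       # days from symptom onset to confirmation (contact-based surveillance)
incubation = 4.8          # median days from infection to symptom onset
frac_symptom_based = 0.77 # share of cases detected via symptom-based surveillance

avg_delay = (frac_symptom_based * (symptom_delay + incubation)
             + (1 - frac_symptom_based) * (contact_delay + incubation))
print(round(avg_delay, 1))  # 9.8

# Fraction of actual infections visible in confirmed counts,
# if cases double every d days during the testing delay:
for d in (3, 4, 5):
    fraction = 2 ** (-avg_delay / d)
    print(d, round(fraction, 2))  # 3 -> 0.1, 4 -> 0.18, 5 -> 0.26
```

This still ignores infections that never get tested at all, so the true detected fraction would be lower.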
Yeah, I agree that contact tracing & testing/quarantining contacts is good, and that presymptomatic transmission is possible.
It looked to me like you were claiming that the hypothesis “stopping all symptomatic transmission is sufficient to prevent the number of COVID-19 cases from curving upwards” has been tested by some countries’ measures and found to be false, and I am questioning that apparent assertion.
I notice that the estimates of serial interval (almost?) all come from places that had pretty aggressive & successful containment measures in place, such as identifying & isolating potential carriers (including people who show symptoms, traced contacts, and high-risk travelers). That would tend to shorten the serial interval, since people who are identified early in their infection lose the opportunity to transmit during the later portion of their illness.
Are there estimates of what R was for these populations? If it’s a lot less than the 2-3 that other studies have found that would be some evidence that a lot of later-stage transmissions were prevented.