I think it is almost certain you would need to do this repeatedly:
You only need one person with long lasting COVID infection to potentially restart the pandemic.
Animals can have COVID, so all susceptible animals would need to be likewise isolated (or culled).
You would need to do this in every country that you trade with and every country that they trade with (if we’re trying to prevent damage to the economy). The proposal is even harder to implement in some countries.
I guess the question is how long a respite this would give you before having to repeat it.
Say we go back completely to normal after the firebreak. Doubling times with the English strain were a little over a week under the fairly strict December measures. With no measures, say it speeds up to doubling every 4 days. This might be optimistic given how fast the original strain spread early in the pandemic.
If we have 10 cases that we’ve failed to eradicate then we get to 10,000,000 cases in about 11 weeks. So we have to repeat this 4 times a year?
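A quick sanity check on that arithmetic (a Python sketch; the 10 seed cases and 4-day doubling time are the assumptions above):

```python
import math

# Assumptions from the comment above: 10 residual cases after the
# firebreak, doubling every 4 days with no measures in place.
initial_cases = 10
target_cases = 10_000_000
doubling_time_days = 4

doublings = math.log2(target_cases / initial_cases)  # ~19.9 doublings
days = doublings * doubling_time_days                # ~80 days
weeks = days / 7                                     # ~11.4 weeks
repeats_per_year = 52 / weeks                        # ~4.6 firebreaks/year

print(round(weeks, 1), round(repeats_per_year, 1))
```

So strictly it comes out closer to 4–5 repeats a year under these assumptions.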
10 months ago there was the Coronavirus: Justified Practical Advice thread.
This resulted in myself (and many many others) buying a pulse oximeter.
Interesting to hear that these are now being provided to people in the UK who have COVID, are not in hospital, and are in a high-risk category.
I note that there was some discussion on LW about how useful they were likely to be as people would probably notice difficulty in breathing which usually comes with low oxygen levels. It turns out that with COVID oxygen levels can get low without people noticing—this was mentioned later on LW (April).
What Shorty missed was probably the difficulty of dealing with the energetic neutrons being created and the associated radiation, and then the associated maintenance costs etc. and therefore price-competitiveness. I chose nuclear fusion purely because it was the most salient example of a project-that-always-misses-its-deadlines.
(I did my university placement year in nuclear fusion research but still don’t feel like I properly understand it! I’m pretty sure you’re right though about temperature, pressure and control.)
In theory a steelman Shorty could have thought of all of these things but in practice it’s hard to think of everything. I find myself in the weird position of agreeing with you but arguing in the opposite direction.
For a random large project X, which is more likely to be true?

1. Project X took longer than expert estimates because of a failure to account for Y.
2. Project X was delivered approximately on time.
In general I suspect that it is the former (1). In that case the burden of evidence is on Shorty to show why project X is outside of the reference class of typical-large-projects and maybe in some subclass where accurate predictions of timelines are more achievable.
Maybe what is required is to justify TAI as being in that subclass.
I think this is essentially the argument the OP is making in Analysis Part 1?
I notice in the above I’ve probably gone beyond the original argument—the OP was arguing specifically against using the fact that natural systems have such properties to say that they’re required. I’m talking about something more general—systems generally have more complexity than we realize. I think this is importantly different.
It may be the case that Longs’ argument about brains having such properties is based on an intuition from the broader argument. I think that the OP is essentially correct in saying that adding examples from the human brain into the argument does little to make such an argument stronger (Analysis part 2).
(1) Although there is also the question of how much later counts as a failure of prediction. I guess Shorty is arguing for TAI in the next 20 years, Longs is arguing 50-100 years?
Flying machines are one example but can we choose other examples which would teach the opposite lesson?
Nuclear Fusion Power Generation
Longs: The only way we know sustained nuclear fusion can be achieved is in stars. If we are confined to things much smaller than the sun, then sustaining nuclear fusion to produce power will be difficult and there are many unknown unknowns.
Shorty: The key parameters are temperature and pressure and then controlling the plasma. A Tokamak design should be sufficient to achieve this—if we lose control it just means we need stronger / better magnets.
Given how fast we cross orders of magnitude these days, that means we are in the era of the Wright brothers.
I think this assumes the conclusion—it assumes that we know enough about intelligence to know what the key variables are and how effective they can be at compensating for other variables. Da Vinci could have argued how much more efficient his new designs were getting or how much better his new wings were but none of his designs could have worked no matter how much better he made them.
I don’t disagree with you in general but I think the effect of Longs’ argument should be to stretch out the probability distribution.
Also, the vaccine takes ~10 days to start having an effect, plus say there is a ~7-day delay from infection to test. 17 days ago Israel had vaccinated 6%, so we wouldn’t expect to see much effect in the case numbers yet.
On increased infectiousness of the UK strain, analysis of contact tracing data in the UK gives 30-50% more infectious (although looking at the data 30%-45% with central estimate of 35% is probably a better summary—see pages 14-16).
Here’s the data from Denmark, suggesting 59% additional infectiousness.
If I’m understanding the link correctly, the 0.59 refers to the UK rate; the Denmark rate is 0.45. I’m also not sure whether the “0.45” and “0.59” are percentage increases or absolute increases in R—I think probably the latter (although if R=1 for the old strain, as seems to be approximately true in Denmark, then these are the same thing).
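To illustrate why the distinction matters, a toy sketch (the 0.8 baseline is a hypothetical value for illustration, not from the Danish data):

```python
def new_R_absolute(old_R, delta):
    # Reading "0.59" as an absolute increase in R
    return old_R + delta

def new_R_relative(old_R, pct):
    # Reading "0.59" as a 59% relative increase
    return old_R * (1 + pct)

# With old-strain R = 1 the two readings coincide:
print(round(new_R_absolute(1.0, 0.59), 2), round(new_R_relative(1.0, 0.59), 2))  # 1.59 1.59

# With a hypothetical old-strain R = 0.8 they diverge:
print(round(new_R_absolute(0.8, 0.59), 2))  # 1.39
print(round(new_R_relative(0.8, 0.59), 2))  # 1.27
```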
The paper he cites gives 72% increased infectiousness, but with a wide confidence interval:
“The observed development in the occurrence of cluster B.1.1.7 in Denmark corresponds to an infection rate that is 72% (95% CI: [37, 115]%) higher than the average of other virus variants circulating in Denmark.” (Google Translate)
I suspect that this may come down via regression towards the mean—region-specific early data will be biased towards regions where the new variant is growing fastest.
Am I being naive in thinking that most of the 50x comes from manufacturing the vaccine? In 1947 they had 650k vaccines ready to go, then got 7 pharma companies across the country to work round the clock to produce the rest. They were aiming to vaccinate 6.35 million; we are aiming to vaccinate 328 million just in the US (57x more doses needing to be manufactured). We’d expect to have more total capacity today of course, but we have fewer companies doing the manufacturing.
I guess the mRNA vaccines are also more difficult to manufacture. Based on the cost, the Pfizer/BioNTech vaccine is 5x more expensive than the Oxford/AstraZeneca viral vector vaccine. The latter is planning on producing more than twice as many doses in 2021 (I don’t know how the size and number of facilities compare, but Pfizer are the bigger company).
Say we were giving every dose procured by the US evenly spread across the country—how many would we be doing a day in NYC?
I have tried something similar but not with money (I find my kids aren’t very motivated by money—not sure why). In our case the losing party usually has to formally acknowledge the victor with some silly phrase—“Dad is an amazing human / genius” or “Mark is a pro and I’m a noob”. This doesn’t allow for different odds (maybe I could tailor different phrases to achieve this?) although I will sometimes offer it without them being held to anything if I am sufficiently confident.
I do think there is some risk with this approach that the child will have a bad time just to get the money.
I was worried about this too but similarly haven’t actually experienced it—I don’t think my kids have the willpower / concentration to keep this up for long enough!
Yes, that was my conclusion too.
I get the impression that the US response is best modelled by how much action individuals choose to take, based on how scared they feel / how fed up they are with COVID restrictions.
In the UK I think people’s response is generally more directly linked to the government’s rules and guidance (with a fair bit of going slightly beyond the rules and a little bit of completely ignoring them).
In the latter case things can be put in place before the 60 day delay (for instance Scotland didn’t have many cases of the new strain but took drastic action despite that because they knew it would grow quickly). In the former case I think your description here is a good model of the response—we could slow it down by reacting early but we probably won’t.
Is there a reason for 1.3 per day? This seems very fast (like the beginning of the outbreak before we were taking any containment measures).
If the old strain is R=1 then we’d expect the new strain to be, say, R=1.65. This would mean (given a generation time of 6.5 days) that the 1.3 would change to 1.08.
Each 2 days in your analysis would be equivalent to about a week in the 1.08 model.
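The conversion, as a sketch (R=1.65 and the 6.5-day generation time are the assumptions above):

```python
# Daily growth factor from R and generation time: growth = R ** (1 / T).
R_new = 1.65            # new strain, assuming the old strain sits at R = 1
generation_time = 6.5   # days

daily_growth = R_new ** (1 / generation_time)
print(round(daily_growth, 2))  # 1.08

# Two days at 1.3/day vs about a week at ~1.08/day:
print(round(1.3 ** 2, 2))           # 1.69
print(round(daily_growth ** 7, 2))  # 1.71
```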
One other possibility which has been discussed is sub-orbital inter-continental travel. I think SpaceX have mentioned this in the past but not for a few years.
I think it would be really helpful if you could operationalise these possibilities into Elicit predictions to give an idea of how likely you think each one is.
For people not in England it probably helps to say that restrictions were loosened somewhat at the beginning of December (following 3 weeks of stricter lockdown which ended on the 3rd IIRC) and are being tightened significantly today (26th). The worst-affected locations were already tightened on the 20th.
The November lockdown was probably marginally stronger than the one now in place. Comparing end-of-November drop rates to whatever happens over the next few weeks will probably be a good indicator, as the December growth rates are confounded by varying levels of lockdown and are a mixture of the two strains.
Ha, I don’t know how many times I have read that in the last couple of days and completely failed to notice!
The Berry-Esseen theorem uses Kolmogorov-Smirnov distance to measure similarity to a Gaussian—what’s the maximum difference between the CDFs of the two distributions across all values of x?
As this measure is based on absolute rather than fractional difference, it doesn’t really care about the tails, and so skew is the main thing stopping this measure from approaching Gaussian. In this case the theorem says the error reduces with the square root of n.
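As a rough numerical illustration of that root-n rate (a pure-Python sketch using sums of n iid Exponential(1) variables, whose CDF has a closed form):

```python
import math

def erlang_cdf(x, n):
    # CDF of the sum of n iid Exponential(1) variables (Erlang distribution)
    if x <= 0:
        return 0.0
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(n))

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def kolmogorov_distance(n, points=2001):
    # max |F_n(standardized) - Phi| over a grid; the sum has mean n, std sqrt(n)
    zs = (-5 + 10 * i / (points - 1) for i in range(points))
    return max(abs(erlang_cdf(n + z * math.sqrt(n), n) - normal_cdf(z)) for z in zs)

for n in (1, 4, 16, 64):
    print(n, round(kolmogorov_distance(n), 4))
```

Quadrupling n roughly halves the Kolmogorov-Smirnov distance here, consistent with the root-n rate.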
From other comments it seems skew isn’t the best predictor of how quickly the kurtosis approaches that of a Gaussian; rather, the kurtosis (and variance) of the initial function(s) is a better predictor, and skew only affects it inasmuch as skew and kurtosis/variance are correlated.
So my understanding then would be that initial skew tells you how fast you will approach the skew of a Gaussian (i.e. 0) and initial kurtosis tells you how fast you approach the kurtosis of a Gaussian (i.e. 3)?
Using my calibrated eyeball it looks like each time you convolve a function with itself the kurtosis moves half of the distance to 3. If this is true (or close to true) and if there is a similar rule for skew then that would seem super useful.
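That eyeballed rule is in fact exact, because cumulants add under convolution: the excess kurtosis of an n-fold self-convolution is the original excess kurtosis divided by n, and each self-convolution doubles n. (Skew behaves similarly but shrinks by a factor of 1/√2 per self-convolution.) A minimal sketch:

```python
def kurtosis_after_convolutions(initial_kurtosis, convolutions):
    # Each self-convolution doubles the effective number n of summed copies,
    # and excess kurtosis (kurtosis - 3) scales as 1/n.
    excess = initial_kurtosis - 3.0
    n = 2 ** convolutions
    return 3.0 + excess / n

# Exponential(1) has kurtosis 9 (excess 6):
print([kurtosis_after_convolutions(9.0, c) for c in range(4)])
# [9.0, 6.0, 4.5, 3.75] -- halving the distance to 3 each time
```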
I do have some experience with distributions where kurtosis is very important. In one example I initially modelled against a normal distribution but found, as more data became available, that it was better to replace that with a logistic distribution with thicker tails. This can be very important for analysing safety-critical components, where the tail of the distribution is key.
The graph showing kurtosis vs convolutions for the 5 distributions could be interpreted as showing that distributions with higher initial kurtosis take longer to tend towards normal. Can you elaborate on why initial skew is a better indicator than initial kurtosis?
The skew vs kurtosis graph suggests that there’s possibly a sweet spot for skew of about 0.25 which enables faster approach to normality than 0. I guess this isn’t real but it adds to my confusion above.
I think there was some talk after last year about adding an “endorse nomination” button so that not everyone had to write their own comment to provide a nomination if they just agreed with what someone else had already written. Is this available / planned?
Multicore, please leave a comment on the post so I can upvote you for winning!
I didn’t win, but our clique was probably instrumental to Multicore’s victory, so I’ll be content with that.
“Well,” thought the antelope, as its spirit floated above the scene, “at least that lion is getting plenty of sustenance from my corpse.” :)
I feel like Measure has good reason to feel at least a little smug for having predicted that something like this might happen:
How surprised would you be if someone managed to bypass the code checking and defect from the group?
For the record, I do agree that the presence of the clique made for an interesting contest!