Mortality rate estimation, at this stage, is very hard. The relevant problems:
Severe cases are more likely to be detected than minor cases (but we don’t know how much more likely)
Nearly all of the quantitative data comes from China, which is underreporting both cases and deaths in an unknown ratio
Cases which end in death resolve faster than cases which end in recovery
On top of which:
No one knows the morbidity rate at all
There are unconfirmed rumors that recovery may not confer lasting immunity (this is a priori unlikely but would make the situation much worse)
The prior is a sample from a distribution which contained SARS, MERS, and a large but unknown number of variants of the common cold
The timeline for a cure could be anywhere from a month (chloroquine or remdesivir works) to never
The practical upshot of all of which is, the confidence intervals are all wide enough to drive a truck through.
Disruption of learning mechanisms by excessive variety and by separation between nutrients and flavor.
Endocrine disruption from adulterants and contaminants (a class including but not limited to BPA and PFOA).
I suspect that, thirty years from now with the benefit of hindsight, we will look at air travel the way we now look at tetraethyl lead. Not just because of nCoV, but also because of disease burdens we’ve failed to attribute to infections, in much the same way we failed to attribute crime to lead.
Over the past century, there have been two big changes in infectious disease. The first is that we’ve wiped out or drastically reduced most of the diseases that cause severe, attributable death and disability. The second is that we’ve connected the world with high-speed transport links, so that the subtle, minor diseases can spread further.
I strongly suspect that a significant portion of unattributed and subclinical illnesses are caused by infections that counterfactually would not have happened if air travel were rare or nonexistent. I think this is very likely for autoimmune conditions, which are mostly unattributed, are known to sometimes be caused by infections, and have risen greatly over time. I think this is somewhat likely for chronic fatigue and depression, including subclinical varieties that are extremely widespread. I think this is plausible for obesity, where it is approximately #3 of my hypotheses.
Or, put another way: the “hygiene hypothesis” is the opposite of true.
Comments from Facebook crossposts of this question:
Ryan C: The likeliest kind of disability would be pulmonary fibrosis or some other chronic lung condition from the trauma there.
William E: I don’t have any answers for you, but I’ve had the exact same thought myself. 😞 We will learn more in the coming months from the very first cases in Wuhan, as they recover full or partial function. I’m *personally* most worried about long term lung damage in people in our age cohort, as deaths have been so heavily concentrated in the elderly.
While this seems accurate in these cases, I’m not sure how far this model generalizes. In domains where teaching mostly means debugging, having encountered and overcome a sufficiently wide variety of problems may be important. But there are also domains where people start out blank, rather than starting out with a broken version of the skill; in those cases, it may be that only the most skilled people know what the skill even looks like. I expect programming, for example, to fall in this category.
User acquisition costs are another frame for approximately the same heuristic. If software has ads in an expected place, and is selling data you expect them to sell, then you can model that as part of the cost. If, after accounting for all the costs, it looks like the software’s creator is spending more on user acquisition than they should be getting back, it implies that there’s another revenue stream you aren’t seeing, and the fact that it’s hidden from you implies that you probably wouldn’t approve of it.
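The heuristic above reduces to simple per-user arithmetic. A minimal sketch, with hypothetical numbers chosen purely for illustration:

```python
# Hypothetical numbers for illustration: if a free app's visible per-user
# revenue falls short of what it plausibly spends to acquire each user,
# the gap suggests a hidden revenue stream you wouldn't approve of.
def revenue_gap(cost_per_install, ad_revenue_per_user, data_sale_per_user):
    """Return how much per-user revenue is unaccounted for (positive = hidden)."""
    visible_revenue = ad_revenue_per_user + data_sale_per_user
    return cost_per_install - visible_revenue

# An app paying $3.00 per install but visibly earning only $1.50/user
# leaves a $1.50/user gap that some unseen stream must be filling.
gap = revenue_gap(cost_per_install=3.00,
                  ad_revenue_per_user=1.00,
                  data_sale_per_user=0.50)
```

The function names and dollar figures are assumptions, not data about any real app; the point is only that a persistent positive gap demands explanation.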
You’re basically risking the loss of money you’d have in a hypothetical future you wouldn’t want to live in anyway, and betting said money to maximize your utility in the future you do want to live in.
Actually, this is backwards; by investing in companies that are worth more in worlds you like and worth less in worlds you don’t, you’re increasing variance, but variance is bad (when investing at scale, you generally pay money to reduce variance and are paid money to accept variance).
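The claim that variance is costly can be made concrete with a standard risk-aversion sketch. Assuming log utility (my choice of example, not anything stated above), a bet with the same expected wealth but more spread has strictly lower expected utility:

```python
import math

# Two bets with the same expected wealth ($100), one riskier.
# Under a concave utility function (log, here), the mean-preserving
# spread lowers expected utility -- which is why investors are paid
# to accept variance rather than seeking it out.
def expected_log_utility(outcomes):
    """outcomes: list of (probability, wealth) pairs."""
    return sum(p * math.log(w) for p, w in outcomes)

safe  = [(1.0, 100.0)]                # $100 for sure
risky = [(0.5, 50.0), (0.5, 150.0)]   # same $100 mean, higher variance

u_safe = expected_log_utility(safe)
u_risky = expected_log_utility(risky)
```

Here `u_risky < u_safe`, so correlating your portfolio with the worlds you like is something you pay for, not something that pays you.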
Some software costs money. Some software is free. Some software is free, with an upsell that you might or might not pay for. And some software has a negative price: not only do you not pay for it, but some third party is paid to try to get you to install it, often on a per-install basis. Common examples include:
Unrelated software that comes bundled with software you’re installing, which you have to notice and opt out of
Software advertised in banner ads and search engine result pages
CDs added to the packages of non-software products
This category of software is frequently harmful, but I’ve never seen it called out by the economic definition. For laypeople, about 30% of computer security is recognizing the telltale signs of this category of software, and refusing to install it.
Both. Uncontroversially, I think, though there is some room to quibble about the exact ratio of causality direction.
In the case of insulin, dosage is too complicated and illegible for insurance to restrict people to the amount they’re using without significant slack. This is good, because running out of insulin is much deadlier much faster than any other commonly-used medication.
The list is now shuffled (as a tiebreak after sorting by your own vote). The shuffle is done once per user, so each user should see the posts in a random order, but it’ll be the same order each time you revisit it. This change went live around the 13th.
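A minimal sketch of the scheme described above (not the site's actual code; the seeding choice is my assumption): derive a deterministic per-user seed, shuffle with it, then do a stable sort by the user's own vote so the shuffle survives as a tiebreak.

```python
import hashlib
import random

def order_posts(posts, user_id):
    """posts: list of (post_id, own_vote) pairs; higher vote sorts first.

    The shuffle is seeded from the user id, so each user gets a random
    but stable order across visits; the stable sort by vote then keeps
    the shuffled order as a tiebreak among equal votes.
    """
    seed = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = posts[:]
    rng.shuffle(shuffled)                         # deterministic per user
    return sorted(shuffled, key=lambda p: -p[1])  # stable sort: votes first

posts = [("a", 1), ("b", 0), ("c", 0), ("d", 1)]
ordering = order_posts(posts, "alice")
# Same user, same order every time:
assert ordering == order_posts(posts, "alice")
```

Different users hash to different seeds and so (almost always) see different orders, which is the stated goal.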
The two of us could sign a contract where I pay you $100 and you agree not to disclose what you ate for breakfast this morning, and agree not to disclose the existence of the contract.
The relevant difference between this and an NDA is that this has the restriction on speech coming from a statute, rather than a contract between nongovernmental entities.
In practice I think this is unlikely to matter much for most people. If you’re applying for a job, and the job asks for your resume, they’re not going to go poking around dusty corners of the web looking to see if you had some other version with different contents.
Actually, I expect this will be discovered with nearly 100% reliability by ordinary due diligence on hires. Bankruptcies are necessarily very public and there are APIs for finding out whether someone has declared bankruptcy, so you just check whether each candidate has declared bankruptcy, and if so, you take the resume-URL they gave you and check archive.org’s snapshot of that URL from just before the bankruptcy.
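The archive.org half of that check can use the Internet Archive's public Wayback "available" API. A hedged sketch (endpoint and response shape per that API's documentation; treat the details as assumptions, and note this builds the request and parses the response without performing the network call):

```python
import json
from urllib.parse import urlencode

# Wayback Machine availability endpoint: given a URL and a timestamp,
# it returns the archived snapshot closest to that date, if any.
WAYBACK_API = "https://archive.org/wayback/available"

def snapshot_request_url(resume_url, date_yyyymmdd):
    """Build the API request for the snapshot closest to the given date."""
    return WAYBACK_API + "?" + urlencode({"url": resume_url,
                                          "timestamp": date_yyyymmdd})

def closest_snapshot(api_response_json):
    """Extract the archived-snapshot URL from the API's JSON, or None."""
    data = json.loads(api_response_json)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Abbreviated example of the API's documented response shape:
sample = ('{"archived_snapshots": {"closest": {"available": true, '
          '"url": "https://web.archive.org/web/20190101000000/'
          'http://example.com/resume", "timestamp": "20190101000000"}}}')
```

Fetching `snapshot_request_url(candidate_resume_url, bankruptcy_date)` and running the body through `closest_snapshot` yields the pre-bankruptcy resume, if one was archived.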
This doesn’t work, legally or practically speaking, because it’s trying to restrict speech-acts between parties that both want the information to be shared. You can’t legally stop people from truthfully disclosing that they have a repossessed degree, because of the First Amendment. You can’t practically stop people from truthfully disclosing that they have a repossessed degree because they will have left many archived traces of that information (for example, copies of their resume in the Internet Archive), they have an incentive to leave those traces in place, and removing those traces is too difficult and involves too many third parties to be a legal requirement.
But I think that my disagreement with this first class of alarmist is not very fundamental; we can probably agree on a few things, such as:
1. In principle, the kind of intelligence needed for AGI is a solved problem, all that we are doing now is trying to optimize for various cases.
2. The increase in computational resources is enough to get us closer and closer to AGI even without any more research effort being allocated to the subject.
This is definitely not something you will find agreement on. Thinking that this is something that alarmists would agree with you on suggests you are using a different definition of AGI than they are, and may have other significant misunderstandings of what they’re saying.
“Sealioning” is attempting to participate in “reasoned discourse” in a way that is insensitive to the appropriateness of the setting and to the buy-in of the other party. (Importantly, not “costs” of reasoned discourse; they are polite in some ways, like “oh sure, we can take an hour break for breakfast”.) People who have especially low buy-in to reasoned discourse use the word to paint the person asking for clarification as the oppressor, and themselves the victim. Importantly, they view attempting to have reasoned discourse as oppression. Thus it blends “not tracking buy-in” and “caring about reasoning over feelings” in a way that makes them challenging to unblend.
The part of sealioning that’s about setting can’t really apply to comments on LW. In the comic that originated the term, a sealion intrudes on a private conversation, follows them around and trespasses in their house; but LessWrong frontpage is a public space for public dialogue, so a LessWrong comment can’t have that problem no matter what it is.
So, conversational dynamics are worth talking about, and I do think there’s something in this space worth reifying with a term, preferably in a more abstract setting.
There was a mention of moderation regarding the term sealioning, so I’m addressing that. (We’re not yet addressing the thread-as-a-whole, but may do so later).
In general, it’s important to be able to give names to things. I looked into how the term sealioning seems to be defined and used on the internet-as-a-whole. It seems to have a lot of baggage, including (if used to refer to comments on LessWrong) false connotations about what sort of place LessWrong is and what behavior is appropriate on LessWrong. However, this baggage was not common knowledge. I see little reason to think those connotations were known or intended by Duncan. So, this looks to me like a good-faith proposal of terminology, but the terminology itself seems bad.
I fixed it.
Moderator hat on.
In general, I don’t think we’re going to have a moderator response time of ~4 hours (which is about how long Duncan’s comment had been up when you wrote yours). However, seeing a call for moderator action, we are going to be reviewing this thread and discussing what if anything to do here.
I’ve spent the last few hours catching up on the comments here. While Vaniver and Habryka have been participating in this thread and are site moderators, this seems like a case where moderation decisions should be made by people with more distance.
I reason (as is standard) that the only real way that my machine would be compromised is if someone has physical access; and if that’s the case there’s absolutely nothing you can do about it.
This is incorrect. The main ways computers get compromised are as part of broadly-targeted attacks using open ports, trojanized downloads, and vulnerabilities in the web browser, email client and other network-facing software. For physical-access attacks, the main one is that the computer gets physically stolen, powered off in the process, and never returned, in which case having encrypted the hard disk matters a lot.