An error I’ve made more than once is to go through my existing hypotheses’ likelihoods for the new evidence and update accordingly, while managing not to notice that the likelihoods are low for all of the hypotheses, which suggests the true answer might be outside my hypothesis space.
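As a toy illustration (numbers made up): the normalised posterior below looks perfectly reasonable, and the only hint that something is wrong is the tiny marginal likelihood, which is easy never to look at.

```python
# Hypothetical likelihoods P(evidence | hypothesis); every hypothesis fits badly.
likelihoods = {"H1": 0.02, "H2": 0.01, "H3": 0.015}
priors = {"H1": 1 / 3, "H2": 1 / 3, "H3": 1 / 3}

unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
marginal = sum(unnormalised.values())  # P(evidence), given this hypothesis space
posteriors = {h: p / marginal for h, p in unnormalised.items()}

print(posteriors)  # {'H1': 0.44..., 'H2': 0.22..., 'H3': 0.33...} -- looks like a sensible update
print(marginal)    # 0.015 -- the red flag: the evidence is unlikely under *every* hypothesis
```

If P(evidence) comes out tiny under every hypothesis you’ve listed, that itself is evidence for “something I haven’t thought of”.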
The description of the Kelly criterion here seems like it is for the specific case where the house odds (as it were) are 1:1?
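For what it’s worth, the general Kelly fraction for a bet paying net odds b:1 with win probability p is

$$f^* = \frac{bp - (1 - p)}{b} = p - \frac{1 - p}{b},$$

which only reduces to the simple $f^* = 2p - 1$ form at even odds (b = 1).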
The Daily Beast article has some information about how other NYTimes employees are against de-anonymising Scott.
Yes, I’m meaning something along the lines of the actions suggested in the original comment but am doing a rubbish job at explaining this properly. Violence in particular was a poor choice of words and I have changed it to force in the grandparent comment.
All I was really wanting to say was that escalation isn’t the only solution and is usually a bad idea.
One example I’ve experienced is reading scientific papers. I have had the experience of thinking “why haven’t they presented this sub-result in this intuitive way?”. Sometimes this is just incompetence, but at other times it leads me to find that the result in question goes against the hypothesis of the paper and has been included only in the footnotes/supplemental material.
The claim I was arguing against was that there’s no point trying to petition because force is the only solution, a claim which is covered in some depth in that piece. Currently there is a clash of norms but no force has been used. My feelings will change somewhat if they do publish.
I’m In Favor of Niceness, Community and Civilization.
The dispute here, then, is whether doxing is a concept like murder (with intent built into the definition) or homicide (which is defined solely by the nature of the act and its consequences).
I feel like we’re still talking past each other a bit here. I don’t dispute that doxxing can mean any revealing of information about someone; it could be used even when no foreseeable damage is implied and the person just wanted to remain private. The strict definition is not the question.
The non-central fallacy is when a word with negative affect is used to describe something to which the word technically applies but which shouldn’t have that negative affect associated with it. Martin Luther King fits the definition of a criminal, but the negative affect of the word criminal (the reasons why crimes are bad) shouldn’t apply to him.
The problem I have with using “dox” here is that some portion of the word’s negative affect doesn’t (or at least might not) apply in this case. An alternative phrasing would be “reveal Scott’s true identity” or, to be snappier, “unmask Scott”, which are more neutral. dontdoxscottalexander.com’s title is Don’t De-Anonymize Scott Alexander, which I think is better than my suggestions.
I think it’s hard to argue that a central example of doxxing doesn’t involve intent to cause harm. The central example in most people’s minds would be something like the hit list of abortion providers, or Anonymous. Wikipedia has a list of examples of doxxing; a rough count suggests ~13/15 involve providing information about someone ideologically opposed to the doxxer (confirming intent is more difficult). The non-centrality here isn’t as extreme as it is in, say, “Martin Luther King was a criminal”, but it is there.
On the relevance of the distinction, yes, I do think it is important. I would support different responses to the NYT depending on whether I thought they were acting out of a desire to endanger/silence Scott or were following a journalistic norm in a way I considered wrong.
Is dox the right word here? I guess this fits inside the definition, but it feels kinda non-central to me. A typical example would include some intent to do harm; considering a different principle more important feels importantly different.
Not that this is much consolation to Scott: I do think the NYT is wrong to reveal Scott’s identity (and have written in to say so), I just think doxxing is the wrong way to describe it.
You can generalise this for other required accuracies. If instead of 25% we use a, then the optimal guess is (100% + a) of the current life, which is correct 2a/(100% + a) of the time.
If we use an alternative optimisation criterion, where we compare any two prediction methods and see which is closer to the correct answer most often over the life of the bathing machine, then 200% (i.e. the halfway rule) is best.
So which rule of thumb you use depends on what you’re looking to achieve—a guess which will be fairly good for as much of the lifetime as possible or a guess which is better for most of the lifetime, even if sometimes it’s way off.
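A quick sanity-check simulation of both criteria (the 1.25 and 2.0 multipliers and the 25% threshold come from the discussion above; the rest is just scaffolding):

```python
import random

random.seed(0)
SAMPLES = 100_000
ACCURACY = 0.25  # "correct" means the guessed total is within 25% of the true total

def fraction_correct(multiplier):
    """How often guessing `multiplier * current_age` for the total lifetime
    lands within ACCURACY of the truth, with the observation time uniform
    over a lifetime of 1."""
    hits = 0
    for _ in range(SAMPLES):
        t = random.random()  # current age; true total lifetime = 1
        if abs(multiplier * t - 1.0) <= ACCURACY:
            hits += 1
    return hits / SAMPLES

def fraction_closer(multiplier_a, multiplier_b):
    """How often rule A's guess is strictly closer to the truth than rule B's."""
    wins = 0
    for _ in range(SAMPLES):
        t = random.random()
        if abs(multiplier_a * t - 1.0) < abs(multiplier_b * t - 1.0):
            wins += 1
    return wins / SAMPLES

print(fraction_correct(1.25))       # ~0.40, matching 2a/(100% + a) with a = 25%
print(fraction_correct(2.00))       # ~0.25: the halfway rule is "correct" less often
print(fraction_closer(2.00, 1.25))  # ~0.62: but it is the closer guess most of the time
```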
I was always a little underwhelmed by the argument that elections grant a government legitimacy—it feels like it assumes the conclusion.
It occurred to me that a stronger argument is that elections are a form of common knowledge building, which helps to avoid insurrections.
The key distinction from my previous way of thinking is that it isn’t the choice offered by elections as such which is important but that everyone knows how people feel on average. Obviously a normal election achieves both the choice and the knowledge, but you could have knowledge without choice.
For example, suppose people don’t vote on a new government but just indicate anonymously whether they would support an uprising (not necessarily violent) against the current one. Based on the result, the government can choose to step down or not. This gives common knowledge without a choice.
I suspect this isn’t an original thought and seems kinda obvious now that I think about it—just a way of looking at it that I hadn’t considered before.
I have edited the original comment to more fully reflect my position.
I’m not confident in 1% as an upper limit (especially in an overrun healthcare system), but I do think that comment gives good back-of-the-envelope estimates (as requested). Later in that thread CBG also acknowledges it may be higher than 1% in some places and conditions.
Detail in this case is useful as it shows multiple sources and back-of-the-envelope calculations. I’m not really assessing CBG (except trusting that he isn’t picking and choosing his arguments); rather, I’m assessing his back-of-the-envelope calculation and where errors are likely to creep in, which is exactly what the great-grandparent mentioned was preferred.
If “Greg Cochran says 1.2%” is the counter-argument then I don’t really know what to say, except: how likely is it that he’s wrong this time, and by what factor might he be off? What’s his confidence interval? If someone can provide his working then at least that’s something I can assess. It seems he is looking specifically at places with high infection rates and more stretched healthcare systems.
Anyhow, you repudiated this. When I pushed you on it, you came up with the number 1.4%.
The naive central estimate of a single back-of-the-envelope calculation, where virus prevalence in Lombardy was estimated from one small town a month earlier, isn’t something I’d put much weight on. If pushed for an interquartile range based only on this calculation I would say 0.5% < IFR < 3.5%. The point of that calculation wasn’t to get an accurate answer but to show that a 0.2% population fatality rate doesn’t imply that the IFR is massive, and that 3,000,000 US coronavirus deaths this year is still highly unlikely.
The most detailed treatment I’ve seen on this is this from a couple of months ago.
EDIT: To clarify per discussion below, I do think there’s a fair chance that, given a lack of sufficient ventilators, the IFR may be >1%.
I’m running at 100% thinking my past selves are assholes. That implies that my current self is probably an asshole by the standards of my future selves. A future self knows which dimensions need to change, and in which directions, to improve; that isn’t straightforward for a current self.
With this in mind, I both think my past selves were assholes and maintain some sympathy for them, on the assumption that not being an asshole is incredibly difficult and I am still failing in ways that I don’t even know about yet.
What are your estimates for how many nodes / causal relationships you would need to investigate to figure out one blueprint?
I was going to write an answer but this sums up my thought process perfectly.
Halloween as a counterexample? (Or possibly the exception which proves the rule?)
This is a math error...
Good point, thanks.
Lombardy had a population fatality rate of 0.2%
I don’t think this really means anything without knowing the fraction infected. Robbio’s antibody testing a month ago showed 13-14% infected, so naively this gives a 1.4% IFR. Possibly some sampling bias, though. On the other hand, this is a small town, and presumably larger towns/cities would expect higher rates.
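Spelling out the naive division, using Lombardy’s 0.2% population fatality rate and Robbio’s ~14% infected:

$$\text{IFR} \approx \frac{0.2\%}{14\%} \approx 1.4\%$$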
I’m willing to accept that the IFR might push a bit over 1%, but that doesn’t overcome the fact that, to get to 3M deaths, a massive outbreak would need to happen across the whole US without significant action being taken to minimise the impact.