And the correct reaction (and the study’s own conclusion) is that the sample is too small to say much of anything.
(Also, the “something else” was “conventional treatment”, not another antiviral.)
I find the ‘backfired through distrust’/‘damaged their own credibility’ claim plausible; it agrees with my prejudices, and I think I see evidence of similar things happening elsewhere. But the article doesn’t contain evidence that it happened in this case, and even though it’s a priori likely and worth pointing out, the claim that it did happen should come with evidence. (This is a nitpick, but I think it’s an important nitpick in the spirit of sharing likelihood ratios, not posterior beliefs.)
if there’s a domain where the model gives two incompatible predictions, then as soon as that’s noticed it has to be rectified in some way.
What do you mean by “rectified”, and are you sure you mean “rectified” rather than, say, “flagged for attention”? (A bounded approximate Bayesian approaches consistency by trying to be accurate, but doesn’t try to be consistent. I believe ‘immediately update your model somehow when you notice an inconsistency’ is a bad policy for a human [and part of a weak-man version of rationalism that harms people who try to follow it], and I don’t think this belief is opposed to “rationalism”, which should only require not indefinitely tolerating inconsistency.)
We found that viable virus could be detected… up to 4 hours on copper…
Here’s a study using a different coronavirus.
Brasses containing at least 70% copper were very effective at inactivating HuCoV-229E (Fig. 2A), and the rate of inactivation was directly proportional to the percentage of copper. Approximately 10³ PFU in a simulated wet-droplet contamination (20 µl per cm²) was inactivated in less than 60 min. Analysis of the early contact time points revealed a lag in inactivation of approximately 10 min followed by very rapid loss of infectivity (Fig. 2B).
That paper only looks at bacteria and does not knowably carry over to viruses.
I don’t see you as having come close to establishing, beyond the (I claim weak) argument from the single-word framing, that the actual amount or parts of structure or framing that Dragon Army has inherited from militaries are optimized for attacking the outgroup to a degree that makes worrying justified.
Doesn’t work in incognito mode either. There appears to be an issue with lesserwrong.com when accessed over HTTPS — over HTTP it sends back a reasonable-looking 301 redirect, but on port 443 the TCP connection just hangs.
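(For anyone who wants to reproduce the diagnosis without hitting the real site: here’s a minimal local simulation of the observed failure mode. The “silent server” below is hypothetical, it just completes the TCP handshake and then never sends a byte, which is what makes an HTTPS client appear to hang until its timeout fires.)

```python
import socket
import threading

def start_silent_server():
    """Listen on an ephemeral local port; accept connections but never respond.

    This mimics the observed behaviour on port 443: the TCP handshake
    succeeds, but no application-layer data ever comes back.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        try:
            conn.recv(4096)              # read the request, then go silent
            threading.Event().wait(2)    # hold the connection open, send nothing
        finally:
            conn.close()
            srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def probe(port, timeout=1.0):
    """Return 'hang' if the server accepts but never answers, else 'response'."""
    with socket.create_connection(("127.0.0.1", port), timeout=timeout) as s:
        s.settimeout(timeout)
        s.sendall(b"GET / HTTP/1.0\r\nHost: example\r\n\r\n")
        try:
            data = s.recv(1024)
            return "response" if data else "hang"
        except socket.timeout:
            return "hang"

port = start_silent_server()
print(probe(port))  # the connect succeeds, but the read times out -> 'hang'
```

The point of the probe is to distinguish “connection refused” (nothing listening) from “accepted but silent” (something is listening but broken above the TCP layer), which is what the lesserwrong.com symptom looked like.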
Similar meta: none of the links to lesserwrong.com currently work due to, well, being to lesserwrong.com rather than lesswrong.com.
Further-semi-aside: “common knowledge that we will coordinate to resist abusers” is actively bad and dangerous to victims if it isn’t true. If we won’t coordinate to resist abusers, making that fact (/ a model of when we will or won’t) common knowledge is doing good in the short run by not creating a false sense of security, and in the long run by allowing the pattern to be deliberately changed.
This post may not have been quite correct Bayesianism (… though I don’t think I see any false statements in its body?), but regardless there are one or more steel versions of it that are important to say, including:
persistent abuse can harm people in ways that make them more volatile, less careful, more likely to say things that are false in some details, etc.; this needs to be corrected for if you want to reach accurate beliefs about what’s happened to someone
arguments are soldiers; if there are legitimate reasons (that people are responding to) to argue against someone or see them as dangerous, this is likely to bleed over to dismissing other things they say more than is justified, especially if there are other motivations to do so
the intelligent social web makes some people both more likely to be abused, and less likely to be believed
whether someone seems “off” depends to some extent on how the social web wants them to be perceived, independent of what they’re doing
seriously I don’t know how to communicate using words just how powerful (I claim) this class of effects is
there are all kinds of reasons that not believing claims about abuse is often just really convenient; this sounds obvious but I don’t see people accounting for it well; this motivation will take advantage of whatever rationalizations it can
IMO, the “legitimate influence” part of this comment is important and good enough to be a top-level post.
This is simply instrumentally wrong, at least for most people in most environments. Maybe people and an environment could be shaped so that this was a good strategy, but the shaping would actually have to be done and it’s not clear what the advantage would be.
My consistent experience of your comments is one of people giving [what I believe to be, believing that I understand what they’re saying] the actual best explanations they can, and you not understanding things that I believe to be comprehensible and continuing to ask for explanations and evidence that, on their model, they shouldn’t necessarily be able to provide.
(to be upfront, I may not be interested in explaining this further, due to limited time and investment + it seeming like a large tangent to this thread)
I don’t see how we know (or anything like know) that deep NNs with ‘sufficient training data’ would be sufficient for all problems. We’ve seen them be sufficient for many different problems and can expect them to be sufficient for many more, but all?
A tangential note on third-party technical contributions to LW (if that’s a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating.
Other possible implications of this scenario have been discussed on LW before.
This shouldn’t lead to rejection of the mainstream position, exactly, but rejection of the evidential value of mainstream belief, and reversion to your prior belief / agnosticism about the object-level question.
Solving that problem seems to require some flavor of Paul’s “indirect normativity”, but that’s broken and might be unfixable as I’ve discussed with you before.
Do you have a link to this discussion?
Upvoted, but weighing in the other direction: Average Joe also updates on things he shouldn’t, like marketing. I expect the doctor to have moved forward some in resistance to BS (though in practice, not as much as he would if he were consistently applying his education).