Relatedly, in the scenario (in some utterly absurd counterfactual world entirely unlike the real world) where agents sometimes misrepresent the evidence in a direction that favours their actual beliefs, it seems like the policy described here might well do better than the policy of updating fully on all evidence you’re presented with.
Given the limitations of our ability to investigate others’ honesty, it’s possible that the only options are factionalism or naivety and that the latter produces worse results than factionalism; e.g., if we happen to start with more people favouring (A,A,A) than (B,B,B) then rejecting “distrust those who disagree” may end up with everyone in the (A,A,A) corner, which is probably worse than having factions in all eight corners if the reality is (B,B,B).
As Zack says, what we want is a degree of trust that matches agents’ trustworthiness. But that may be extremely hard to obtain, and if all agents are somewhat untrustworthy (but some happen to be right so that their untrustworthiness does little harm) then having trust matching trustworthiness may produce exactly the sort of factional results reported here.
So I think the most interesting question is: Are there strategies that, even when agents may be untrustworthy in their reporting of the evidence, manage to converge to the truth over a reasonable portion of the space of untrustworthiness and actual-evidence-strength? My guesses: (1) yes, there kinda are, and the price they pay instead of factionalism is slowness; (2) if there is enough untrustworthiness relative to the actual strength of evidence then no strategy will give good results; (3) there are plenty of questions in the real world for which #2 holds. But I’m not terribly confident about any of those guesses.
My personal theory is that not all talk is signalling, but almost all talk about signalling is. (It signals “I am smart, sophisticated, not easily fooled, and willing to face uncomfortable realities; I see below the carefully groomed surface of things to the ugliness beneath.”)
In the particularly prominent case of Literal Robin Hanson, it seems possibly significant that the uncomfortable realities he uncovers are generally much more uncomfortable for one of the two major political factions in the US than for the other, and that the “other” one is the one responsible for a lot of his funding over the years (though I think he may no longer be affiliated with the Mercatus Center).
(Only possibly significant, and I do actually mean that. Obviously things that are politically convenient for the person saying them can also be true.)
A probability measure is a measure μ (on a σ-algebra 𝒜 on a set A) such that μ(A)=1.
A measure on a σ-algebra 𝒜 is a function μ:𝒜→R with properties like “if X∩Y=∅ then μ(X∪Y)=μ(X)+μ(Y)” etc.; the idea is that the elements of 𝒜 are the subsets of A that are well-enough behaved to be “measurable” and then if X is such a subset then μ(X) says how big X is.
A σ-algebra on a set A is a set 𝒜 of subsets of A that (1) includes A itself, (2) whenever it includes a set X also includes its complement A−X, and (3) whenever it includes each of countably many sets Xi also includes their union.
And now probability theory is the study of probability measures. (So the measure-theoretic definition of “probability” would be “anything satisfying the formal properties of a probability measure”, just as the mathematician’s definition of “vector” is “anything lying in a vector space”.)
“Bayesian” probability theory doesn’t disagree with any of that; it just says that one useful application for (mostly the more elementary bits of) the theory of probability measures is to reasoning under uncertainty, where it’s useful to quantify an agent’s beliefs as a probability measure. Here A is the set of ways the world could be; 𝒜 is something like the set of sets of ways the world could be that can be described by propositions the agent understands, or the smallest σ-algebra containing all of those; μ, more commonly denoted P or Pr or ℙ or something of the sort, gives you for any such set of ways the world could be a number quantifying how likely the agent thinks it is that the actual state of the world is in that set.
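For a finite sample space these definitions can be checked mechanically. Here’s a minimal sketch in Python (the three-element world and the weights are invented purely for illustration), using the power set of A as the σ-algebra:

```python
from itertools import chain, combinations

# Finite sample space: three ways the world could be.
A = frozenset({"sunny", "rainy", "snowy"})

def power_set(s):
    """All subsets of s. For a finite set this is always a sigma-algebra."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

sigma_algebra = power_set(A)

# A probability measure built from point masses that sum to 1.
weights = {"sunny": 0.5, "rainy": 0.3, "snowy": 0.2}
def mu(X):
    return sum(weights[x] for x in X)

# Check the defining properties from the text.
assert abs(mu(A) - 1.0) < 1e-12                    # mu(A) = 1
for X in sigma_algebra:
    assert A - X in sigma_algebra                  # closed under complement
    for Y in sigma_algebra:
        assert X | Y in sigma_algebra              # closed under union
        if not (X & Y):                            # disjoint => additive
            assert abs(mu(X | Y) - (mu(X) + mu(Y))) < 1e-12
```

For a finite A the power set always works as the σ-algebra; the σ-algebra machinery only starts earning its keep on infinite spaces, where not every subset can be measurable.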
You can work with probability measures even if you think that it’s inappropriate to use them to quantify the beliefs of would-be rational agents. I guess that’s PP’s position?
I assume it means an image used in the training process by which the robots learned to recognize things.
You don’t need anyone’s forgiveness. But it turns out that quantifying degrees of belief is useful sometimes, and that representing them as numbers from 0 to 1 that behave like probabilities is a good way to do that. (There are theorems that kinda-sorta say it’s the only way to do that, if you want various nice-sounding things to be true, but how much you care about those nice-sounding things is up to you.) So you may be missing out on some useful thinking tools.
In the space of four frames, today’s SMBC comic touches on unfriendly AI, fun theory, and overfitting in machine learning.
No one is erasing other people’s comments here unless they’re outright abusive.
As for the “irreducible complexity” argument, you may notice that it has convinced approximately zero percent of actual biologists. You may find “they’re all brainwashed atheists so completely under Satan’s thumb that they can’t form rational opinions” a more convincing explanation for that than “the argument is actually not very strong”, but I don’t agree.
(I agree with the biologists; I think Behe’s argument is bad. But I don’t think arguing about that argument is particularly on topic here.)
Also, no one is claiming that Francis Bacon is anyone’s moral foundation. I think you may not have been reading what Scott wrote very carefully.
It’s true that lawyers aren’t required to take every client who comes along, but I think generally the legal profession strongly encourages them to be willing to take unattractive cases. For instance, the ABA Model Code of Professional Responsibility has various things to say, of which I’ve excerpted the bits that seem to me most important (on both sides of the question):
A lawyer is under no obligation to act as adviser or advocate for every person who may wish to become his client; but in furtherance of the objective of the bar to make legal services fully available, a lawyer should not lightly decline proffered employment. The fulfillment of this objective requires acceptance by a lawyer of his share of tendered employment which may be unattractive both to him and the bar generally.
When a lawyer is appointed by a court or requested by a bar association to undertake representation of a person unable to obtain counsel, whether for financial or other reasons, he should not seek to be excused from undertaking the representation except for compelling reasons. Compelling reasons do not include such factors as the repugnance of the subject matter of the proceeding, the identity or position of a person involved in the case, the belief of the lawyer that the defendant in a criminal proceeding is guilty, or the belief of the lawyer regarding the merits of the civil case.
So they don’t quite say that lawyers should never refuse to represent clients just because they think they’re guilty. But they do say that lawyers should be willing to take “unattractive” cases, and that if a court assigns a lawyer to represent someone who can’t afford to pay for his own lawyer then that lawyer shouldn’t refuse just because they think the client is guilty.
So my earlier statement goes too far, but I think it’s more right than wrong: in general lawyers aren’t supposed to refuse to defend you just because they think you’re probably guilty. Even though they are allowed to refuse to defend you.
What do you mean by “knowing that both ways are legit”? Only one way is legit: when someone comes to you needing defence and willing to pay your fees, you defend them.
(I think the actual system is a little different: a lawyer isn’t expected to defend their client if they’re sure the client is guilty; in that case they would ask them to find another lawyer, or something. But that isn’t because those clients don’t deserve defending, it’s because they deserve defending better than someone who’s sure they’re guilty is likely to manage.)
It’s worth noting that what someone puts on a sign doesn’t necessarily indicate what they really care most about, especially if what’s on the sign is more socially acceptable than what they really care about. So I don’t think the findings here are inconsistent with what annacaffeina says. (They are probably, albeit not very strong, evidence against what she says, though. Only “probably” because it could be that the other sign-slogans—especially the more generically-political ones—are evidence about the sort of person waving the sign; maybe some of those slogans are more characteristic of wealthier people who want the service industry serving them again than of poorer service-industry employees who want to be at work again.)
Nature article giving some evidence for aerosol transmission. More specifically, what it gives evidence of is that in some circumstances you can find aerosolized SARS-CoV-2 where there are infected people, which doesn’t seem very surprising. It doesn’t say anything about how effectively that causes infection, or about the relative importance of this mode of transmission compared with others. It also has some discussion of the sizes of aerosol particles and how they got that way, and of what circumstances make it more likely for there to be non-negligible amounts of SARS-CoV-2 in the air.
The least obvious things there, to me: Toilets are pretty bad (lots of people, each there for a while, small space). In hospitals, one source of SARS-CoV-2 in the air (in smaller aerosol particles—do these stay around longer?) may be from PPE after it’s been taken off. In the public areas they looked at, only the most densely used ones had substantial amounts of SARS-CoV-2 in the air. [EDITED to add:] “Least obvious” does not mean “very not-obvious”; most of these are pretty unsurprising. I don’t know that I’d have guessed the thing about discarded PPE, though.
I think you mean occasional weirdly out-of-place fucking cuss words.
Oops! The perils of hand-transcribing URLs.
I think I may have been unclear; I wasn’t saying “let’s not use blue light for anything” or anything like that, just giving a bit of context. It shouldn’t be so surprising that blue light can kill some microorganisms, given that it can harm humans too.
It’s not going to kill us. But high-intensity blue light will, in the long term, damage your retinas. See e.g. http://photobiology.info/Rozanowska.html .
[EDITED to fix a typo in the URL; sorry about that]
Short-wavelength blue light is harmful to humans too. In particular, it’s harmful to the eyes. I would be highly unsurprised if it were slightly carcinogenic but I don’t know anything about that.
I enjoyed reading it on r/HPMOR but I confess I think it should have stayed there.
Right. Though the paper by Davies et al that Christian found suggests that at least some paper masks may not be so wretched at keeping out tiny particles.
Davies et al is encouraging as regards the benefits of surgical masks. Still, letting 10% in is a lot worse than letting 5% in, and the fact that the Wikipedia page about N95 masks says “Collection efficiency of surgical mask filters can range from less than 10% to nearly 90% for different manufacturers’ masks when measured using the test parameters for NIOSH certification” suggests that maybe Davies et al got lucky in which surgical masks they tested.
I’m not sure you’re right about the advantages of N95 masks over surgical masks. (Note: at present the question says ”… the prime advantage of surgical masks over N95 masks …” but I assume that’s just a slip.)
N95 masks have finer filters, which keep out smaller particles than surgical masks’ filters can. If you tape a surgical mask to your face in a way that seals it perfectly, then while you may be doing a better job of keeping out the particles the mask can block, you’re still not doing much about the smaller ones.
N95 masks are notoriously tricky to fit well, but so far as I know no one tapes those to their faces. Whatever the reasons for that, many of them probably apply to surgical masks too, and with less payoff, because however good the fit, a surgical mask still won’t keep out all the smaller particles. I don’t know those reasons for sure, but I guess they include the following, all of which seem to apply to surgical masks:
Taping a mask to your face is harder than it may sound. There isn’t that much available surface between nose and eyes to tape to.
Your face is flexible and moves around as you talk, blink, smile, etc. Tape can peel off. Especially if you have facial hair, wrinkles, damage from earlier mask-unpeelings, etc., rather than a perfect smooth surface to tape to.
Surgical masks are also flexible and often have folds extending to their edges, making it difficult to seal them effectively using tape.
They also have straps. It seems to me that any way of taping a mask on is going to leave a “tunnel” along the straps. You can tape the straps down but they’re inevitably going to move around in ways that tend to enlarge that tunnel.
Peeling tape off your face is painful and may do damage, especially if you are doing it repeatedly and especially if the tape is extra-sticky so as not to peel off while you’re wearing the mask.
The slow and awkward peeling-off process keeps the mask, whose outer surface might be covered in virus particles or whatever, close to your face for longer while you’re removing it.
None of this means that taping down a surgical mask won’t provide any benefit. My guess is that it does. But I suspect the benefit is small enough, and the pain and inconvenience large enough, that most people won’t consider it a good tradeoff.
As to whether that’s right in any given case, I don’t know. It would be interesting to have some actual numbers on this, but my guess is that no one’s done the studies.
Are you now working as a lawyer?
I think I can still remember a lot of what I learned when studying mathematics at university. But (to whatever extent it’s actually true; I haven’t tested myself) that may be because after studying mathematics at university I did a PhD in mathematics, and then some mathematics research, and I now work in the allegedly-real world as a mathematician (though in practice that work uses very very little of what I studied at university). If I’d finished my mathematics degree and then gone off to become a painter or bricklayer or chef or something, I would probably now remember a lot less of the mathematics.
(The comparison may be unfair; my impression is that learning law and learning mathematics are quite different, and that one way they’re different is that learning law involves a much larger proportion of memorizing brute facts that one couldn’t deduce from other things one knows. Whereas in principle a sufficiently hyperintelligent being wouldn’t need to learn anything in mathematics other than definitions, terminology and notation; the actual theorems could all be deduced from first principles. It seems plausible to me that the first kind of knowledge decays faster when not in active use than the second.)