This is what it looks like on my laptop; I’m using dark mode and set side comments to “show all”. You can see the two little comment icons in the lower right. The higher one has the number 3 on it, which seems to indicate the number of side comments on that paragraph. But that number is incredibly small and faint; without zooming in, about all I can tell is that it’s a one-digit number. I’d say that if you’re going to put a number there, it should be bigger.
Discussed above, ‘how could we have prevented this?’
Broken link here.
For (b), when I’ve seen the prisoner’s dilemma presented, defecting always benefits the defector and hurts the other person (e.g. reducing the defector’s prison sentence by 1 year, while adding 2 years to the other’s) regardless of the other person’s choice; thus “you have an incentive to defect no matter what”. (In fact, usually it’s presented such that the effects of defecting are always the same down to the exact numbers—in TurnTrout’s article, the example yields (+1, −3) in all cases, although the formalized version with P, R, S, T doesn’t state a requirement that T − R = P − S.) So defection being enabled by the other’s cooperation is not an element of the normal prisoner’s dilemma.
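To make that concrete, here’s a minimal sketch in Python (the numbers are my own invention, chosen so that defecting always yields the same (+1, −3) effect; TurnTrout’s actual figures may differ):

```python
# A minimal prisoner's dilemma sketch. Payoffs are invented for illustration,
# chosen so defecting always has the same (+1, -3) effect on (me, them).
T, R, P, S = 3, 2, 0, -1  # temptation > reward > punishment > sucker's payoff

def payoff(me_cooperates, them_cooperates):
    """Row player's payoff, given both players' choices."""
    if me_cooperates and them_cooperates:
        return R
    if me_cooperates and not them_cooperates:
        return S
    if not me_cooperates and them_cooperates:
        return T
    return P

# Defection strictly dominates: whatever the other player does, switching
# from cooperate to defect gains me T - R = P - S = +1 here, and costs
# them S - R = P - T = -3 here.
for them in (True, False):
    assert payoff(False, them) > payoff(True, them)
print("gain from defecting vs. a cooperator:", T - R)  # +1
print("gain from defecting vs. a defector:  ", P - S)  # +1
```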
One could say that defection is bad regardless, but that my defection being enabled by your desirable prosocial behavior (which should be rewarded, at least with praise and social capital) adds insult to injury. Then we could say the insult is what makes it rude.
Regarding TurnTrout’s post: I would characterize prisoner’s dilemma cases as “everyone should know that everyone will always be tempted to steal/etc., and therefore we should all be vigilant for that and expect defection to be punished”.
Stag hunt is different—if you’re hunting stag and I know it, then I actually have an incentive to hunt stag too; if I wimp out and don’t do it, that’s because I expected you to wimp out too. In practice, the failures with stag hunt games are down to communication and organization problems—“I didn’t realize hunting stag was an option, didn’t know we were playing this game”, or possibly “I disagree about the payoff, I think the stag will kill us even if we all join the hunt”. Therefore I think it’s inappropriate to use the same word “defect” for wimping out of a stag hunt, implying that people should be punished for it—I do not want e.g. the drunk friend who says it would be awesome if we all did some crazy thing to feel entitled to punish the sober friends who think it’s stupid. (Glancing at the comments, I see jimmy makes a similar point about stag hunt.)
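For contrast, here’s the same kind of sketch for a stag hunt (payoffs again my own invention). Unlike in the prisoner’s dilemma, there is no dominant strategy: hunting stag is the best response to a stag hunter.

```python
# A minimal stag hunt sketch (payoffs invented for illustration).
stag_hunt = {  # row player's payoff, given (my_move, their_move)
    ("stag", "stag"): 4,  # we both join the hunt and share the stag
    ("stag", "hare"): 0,  # I hunt stag alone and get nothing
    ("hare", "stag"): 3,  # a hare is a sure thing, whatever they do
    ("hare", "hare"): 3,
}

def best_response(their_move):
    return max(("stag", "hare"),
               key=lambda my_move: stag_hunt[(my_move, their_move)])

# No dominant strategy: if you're hunting stag and I know it, my best
# response is to hunt stag too; "wimping out" only pays if I expect
# you to wimp out as well.
assert best_response("stag") == "stag"
assert best_response("hare") == "hare"
```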
As for the game of chicken… the real-world game involves two people agreeing to do something insanely risky and stupid, which is its own punishment as far as I’m concerned. I’m trying to think of relevant real-life examples. Wiki mentions brinkmanship in military or almost-military conflicts between states, which isn’t very relevant to individuals… and also the Hawk-Dove game, for which the closest modern analogue is “being mugged”, but I would figure muggers usually choose targets who look like they couldn’t fight back very well. I guess “aggressive driving on the highway” would fit. It is a form of defection—trying to get an advantage at the other’s expense—and it is technically enabled by them not defecting as well. But I feel like my main emotional reaction is “you’re crazy if you even try that”; also it is an inherently self-limiting problem, because if the proportion of insanely aggressive drivers gets high enough, they’ll kill each other off frequently enough to prevent it getting higher.
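And a sketch of chicken / Hawk-Dove (payoffs once more my own invention), showing the structure I described: aggression pays only against a non-aggressor, and mutual aggression is the worst outcome for everyone.

```python
# A minimal chicken / Hawk-Dove sketch (payoffs invented for illustration).
chicken = {  # row player's payoff, given (my_move, their_move)
    ("swerve", "swerve"): 0,
    ("swerve", "straight"): -1,     # I back down and lose face
    ("straight", "swerve"): 1,      # advantage at the other's expense
    ("straight", "straight"): -10,  # crash: the worst outcome for both
}

def best_response(their_move):
    return max(("swerve", "straight"),
               key=lambda my_move: chicken[(my_move, their_move)])

# "Defecting" (going straight) is enabled by the other player not
# reciprocating: it pays only against someone who swerves.
assert best_response("swerve") == "straight"
assert best_response("straight") == "swerve"
```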
I might indeed consider it “rude” if aggressive drivers’ behavior made it so that, to protect against it, everyone else had to change their behavior, or change the rules, in a way that sucked for everyone. (I tend to disagree with the authorities when they lower speed limits, add speed bumps, and stuff—they seem to err harder in the direction of alleged “safety” than I would—but there are probably interventions that I would agree with if I saw the data.) So X might be “letting everyone drive on this road without stop signs in certain places”, and Y would be “driving on that road, too fast to avoid a crash if someone makes a turn at those places”.
I think the thing I’m getting at is “prosocial behavior should be rewarded, at the very least with social capital; it being met with defection is bad, but it being met with defection enabled by the prosocial behavior is maximally set up to disincentivize the prosocial behavior, and this is especially bad and deserves more focused disapprobation”. And I still suspect “rude” may be the right word.
I think actual racists would consider it rude. “They should know they’re not welcome here, how dare they show their face, don’t they know no one wants to see them”, etc. So, yes, there may be dispute about whether something counts as rude or not. Though also, racist shop owners used to post signs saying what races were allowed (when this was legal), in which case thing X wouldn’t really be allowing thing Y.
I think there is also an assumption that the parties involved have an at least tentatively cooperative relationship. I would figure that the racist’s opinion of the other party is more like “I hate your guts and hope you all die”, which probably puts it outside that.
Let’s check that. If I imagine that the racist store owner and the black guy secretly had a great friendship, that they spent many hours together and helped each other privately and hold each other in high esteem… and if the store owner still doesn’t want the black guy appearing in his store (let’s say because he believes—maybe because they’ve told him so—that his biggest customers will all leave if they see him, and he’d have to abandon his business), and if the black guy knows all this, and chooses to appear anyway… I’m not 100% sure I’d call it rude—anyone can think their cause is so important that making a political statement is justified no matter how much it displeases people, and even if it accomplishes nothing, I think it might not be rude if the person truly believes it’s the right thing to do—but I could imagine the black guy agreeing it would be rude.
I once came up with a game-theoretic definition: “If someone can do a thing X, which is nice and beneficial and should be encouraged, but which also makes it possible for you to do a thing Y, which hurts them and makes them regret doing X, then it is rude for you to do Y.” Some thoughts:
This captures situations like “X = them letting you into their house, Y = you getting dirt everywhere / breaking dishes / being a nuisance”, and “X = them being willing to converse with you, Y = you insulting them”.
Intentionality may be part of the proper definition—it seems like being “unintentionally rude” is a thing, and that in such cases it wouldn’t be right to leave off the “unintentionally”.
If you do a rude thing and apologize profusely and make it up to them, it seems that may make the whole interaction no longer rude, and I have a feeling that the key determinant of success there is “making it up to them well enough that they no longer regret doing X”. I do feel like I’ve captured a real piece of the psychology there.
I’m not sure how far to take it, and whether rudeness is the right word; but I do feel like I’ve stumbled on a concept worth having and possibly worth refining. Anyway, I mention it in case it’s useful.
Suppose that some new technology is useful but creates bad effects, and we say that the bad effects are the “problem” caused by the intelligence used to create that technology. Then suppose that further intelligence-driven technological development doesn’t give us any way to directly “fix” the problem, but it does give us a better alternative, which has the same uses and no bad effects (or less bad ones), and people just drop the first technology in favor of the second. Does that count as “solving” with intelligence?
Suppose that there is no second technology, but that, upon further analysis, the bad effects of the first technology turn out worse than initially believed, and then people decide the first technology isn’t worth the downsides and drop it. Does that count as “solving” with intelligence?
Suppose that there is no second technology—yet, anyway. There might be in the future, who knows. Will it ever be practical to say that intelligence can’t solve a problem? (The main one that comes to mind is “heat death of the universe”, due to the second law of thermodynamics, but that problem wasn’t created by intelligence.)
My best answer at the moment is, “Problem: people using their intelligence to figure out how to benefit themselves at the expense of others in net-negative ways”. Intelligence yields approaches to addressing many forms of this problem, but plenty of them are far from what I’d call solved.
(American here, talking about American universities) I had resented the mandatory attendance and homework in high school, and some people told me that college would be better in that regard, but others told me that this varied, and that it was up to individual professors whether to mandate attendance. This was one of the reasons I didn’t try to enter college.
Incidentally, how about this other lab leak of the 1918 flu virus? Perhaps our safety protocols are not so great?
I think far more relevant is that SARS 1 leaked from a lab four times.
Trying to make the headline is
Probably drop “is” or replace with “be”.
The classic example fitting the title, which I learned from a Martin Gardner article (I think he cited it from some 1800s person), is: “Hypothesis: No man is 100 feet tall. Evidence: You see a man who is 99 feet tall. Technically, that evidence does fit the hypothesis, but probably after seeing that evidence you would become much less confident in the hypothesis.”
Well, basically, it can all be interpreted as having multiple competing theories (e.g. “The tallest human ever was slightly under 9 feet, humans generally follow a certain distribution, your heart probably wouldn’t even be able to circulate blood that far, etc.” vs “Something very, very weird and new that breaks my understanding is happening”) and the evidence can be considered in a Bayesian way.
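To illustrate, here’s a toy calculation (every number is an assumption I made up, not from Gardner): under the ordinary-heights theory a 99-foot man is essentially impossible, so seeing one shifts nearly all the probability mass to the “something weird is happening” theory, which in turn drags confidence in “no man is 100 feet tall” way down.

```python
# A toy Bayesian version of the 99-foot-man example (every number here
# is an invented assumption, for illustration only).
prior_normal = 0.999999  # "ordinary human-height model" (tallest ever ~9 ft)
prior_weird = 0.000001   # "something that breaks my understanding is happening"
like_normal = 1e-30      # P(see a 99-ft man | normal model): essentially zero
like_weird = 1e-3        # P(see a 99-ft man | weird model): merely surprising

evidence = prior_normal * like_normal + prior_weird * like_weird
post_weird = prior_weird * like_weird / evidence
print(f"P(weird | 99-ft man) = {post_weird:.6f}")  # ~1.0

# Assumed: the normal model rules out a 100-ft man; the weird model doesn't.
p_no_100ft_normal = 1.0
p_no_100ft_weird = 0.5
p_no_100ft = (1 - post_weird) * p_no_100ft_normal + post_weird * p_no_100ft_weird
print(f"P(no man is 100 ft tall) = {p_no_100ft:.2f}")  # drops from ~1.0 to ~0.5
```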
Seems reminiscent of On the Impossibility of Supersized Machines.
If we see precursors to deception (e.g. non-myopia, self-awareness, etc.) but suspiciously don’t see deception itself, that’s evidence for deception.
Stated like this, it seems to run afoul of the law of conservation of expected evidence. If you see precursors to deception, is it then the case that both (a) seeing deception is evidence for deception and (b) not seeing deception is also evidence for deception? I don’t think so.
The direct patch is “If you see precursors to deception, then you should expect that there is deception, and further evidence should not change your belief on this”—which does seem to be a first approximation to your position.
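For reference, here’s a toy numerical check of the conservation law (the probabilities are made-up assumptions): the prior must equal the posterior averaged over the possible observations, so if seeing deception would raise your credence in deception, not seeing it must lower it.

```python
# A toy numerical check of conservation of expected evidence
# (all probabilities here are made-up assumptions).
p_h = 0.3              # prior P(deception), given precursors
p_e_given_h = 0.8      # P(observe deception | deception)
p_e_given_not_h = 0.1  # P(observe deception | no deception): false positive

p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
post_if_seen = p_h * p_e_given_h / p_e
post_if_not_seen = p_h * (1 - p_e_given_h) / (1 - p_e)

# The prior equals the posterior averaged over possible observations,
# so the two posteriors straddle the prior: one above it, one below it.
expected_posterior = p_e * post_if_seen + (1 - p_e) * post_if_not_seen
assert abs(expected_posterior - p_h) < 1e-9
print(post_if_seen, post_if_not_seen)  # ~0.77 and ~0.09, around the prior 0.3
```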
Typo in title (“waiver”)? Or an ocean-related pun? Edit: Or the actual verb “waver”… perhaps saying that the Biden administration wavered for a few days before deciding to issue the waiver. The text does contain a typo about “broad wavers”.
users on LessWrong with more than 300 karma (1,178 of them) are not the kind to push buttons for one reason or another. Granted many of them would not have been checking the site
Do you have website access stats that would let you compute precisely how many of them did in fact load the homepage during Petrov Day?
If you haven’t seen it, there’s a thread here with links to Sarah Constantin’s postmortem and Zvi’s semi-postmortem, plus another comment from each of them.
I’ll excerpt Zvi’s comment from that thread:
Most start-ups fail. Failing at a start-up doesn’t even mean that you, personally are bad at start-ups. If anything the SV-style wisdom is that it means you have experience and showed you will give it your all, and should try again! You don’t blow your credibility by taking investor money, having a team that gives it their all for several years, and coming up short.
Does this [the result that people perceive others becoming less stable, much more often than themselves becoming less stable] represent people having a more optimistic view of themselves than they do of others? Or is this people correctly doing aggregation, since 10% of people becoming less stable makes people overall less stable and larger groups have less variance?
The explanation that immediately presented itself to me: Sensationalism bias. If one person goes crazy and does something crazy, then that’s a fascinating story that gets passed around to a wide audience. If a hundred people get somewhat better at maintaining their lives, that is not a fascinating story and doesn’t get passed around (except perhaps by those who study unemployment rates or other systematically aggregated data). I don’t know how good people are at correcting for this, but I’d guess many people are not good at it—20% would be enough to explain the difference in poll results.
It’s an interesting pattern, and the examples are fun to think about. I can produce explanations for a few categories. A very general statement, covering most of your examples, is, “To a naive person, people who don’t share your goals or your background may react to your policies in ways you didn’t expect or even imagine”, which is downstream of the typical mind fallacy and other common mistakes.
A subcategory is “If people really want to do a thing (buy some good or service at market price; know an interesting fact; avoid doing something unpleasant; etc.), then any attempted suppression creates strong incentives to work around it, which often has fascinating results”. As shminux mentions, perverse incentives can happen, and the Streisand effect actually goes somewhat beyond it (there I would say your intervention hits a preexisting immune system—people try to avoid being tricked, and either instinct or habit tells them that someone trying to hide a fact from you is a sign they’re trying to trick you).
Another very general explanation: “If a complex system is already somewhat optimized in one direction, then there are a lot more ways to make it worse than to make it better, and therefore our priors on a naive attempt to ‘improve’ it should be pretty bad.” Chesterton’s fence is related.
That said, I would definitely not call this “enantiodromia” a “principle”—to me that implies it’s a scientific or mathematical fact, and furthermore suggests that there’s some mysterious force that specifically causes it. No: There are mundane explanations for everything, and there are heuristics you can learn (and should learn, especially if your career has any chance of affecting serious policy decisions), probably most importantly “Having thought of your proposal, try imagining the ways the different involved parties might react, and imagine ways it could go wrong”. There are forces opposing the proposals, but they’re different in each case (the people who just want good lawyers; the people who abuse celibacy norms; the virus researchers), and there is no mysterious force causing all of them (the thing that seems most like it is the “negative prior on naive policy proposals”).
It reminds me of the concept of “synchronicity”, the idea that you see more coincidences than causality or chance seem able to explain. As far as I can tell, the explanation for that is simply (a) sometimes the coincidence is explained by facts you’re unaware of (e.g. three friends happen to mention a specific unusual topic during a week, and you don’t know there was an article recently published on that topic, and all of them read it, or perhaps talked to someone who read it); (b) selective memory and reporting: a thousand things happen to you, most of which are common non-coincidences, and you remember only the freak coincidence.
If “synchronicity” is a fact, it’s a fact about your imperfect mind—that you don’t know the true causal connection and your memory is biased—not a fact about the world; I don’t think there’s some mysterious force deliberately throwing coincidences at you. But it seems that the guy who came up with the concept, Carl Jung, was (uncharitable explanation alert) so epistemically arrogant that he believed it was a mysterious fact about the world rather than about his imperfect mind.
Am I being uncharitable? Judge for yourself:
Synchronicity (German: Synchronizität) is a concept first introduced by analytical psychologist Carl G. Jung “to describe circumstances that appear meaningfully related yet lack a causal connection.” In contemporary research, synchronicity experiences refer to one’s subjective experience that coincidences between events in one’s mind and the outside world may be causally unrelated to each other yet have some other unknown connection. [...]
Jung developed the theory of synchronicity as a hypothetical noncausal principle serving as the intersubjective or philosophically objective connection between these seemingly meaningful coincidences. Mainstream science generally regards that any such hypothetical principle either does not exist or would not fall within the bounds of science. [...]
Jung used the concept of synchronicity in arguing for the existence of the paranormal. This idea was similarly explored by writer Arthur Koestler in his 1972 work The Roots of Coincidence and was also taken up by the New Age movement.
For this “enantiodromia”, I would again say: It’s a fact about the minds of those who make policies and were too unimaginative to think how they might go wrong, too incompetent to figure out how they would go wrong, or too careless to even try; not a fact about the world that there’s some mysterious force that enjoys dramatic irony and tries to make your policies go wrong. Only some epistemically arrogant person would think it’s a fact about the world… hey, guess what, it’s Carl Jung again! I seriously did make the connection to “synchronicity” before I looked up the Wikipedia on enantiodromia. Behold:
Enantiodromia (Ancient Greek: ἐνάντιος, romanized: enantios – opposite and δρόμος, dromos – running course) is a principle introduced in the West by psychiatrist Carl Jung. In Psychological Types, Jung defines enantiodromia as “the emergence of the unconscious opposite in the course of time.” It is similar to the principle of equilibrium in the natural world, in that any extreme is opposed by the system in order to restore balance. When things get to their extreme, they turn into their opposite. Jung adds that “this characteristic phenomenon practically always occurs when an extreme, one-sided tendency dominates conscious life; in time an equally powerful counterposition is built up which first inhibits the conscious performance and subsequently breaks through the conscious control.”
However, in Jungian terms, a thing psychically transmogrifies into its shadow opposite, in the repression of psychic forces that are thereby cathected into something powerful and threatening.
Right. Well. I will just say that “people figuring out that a bounty on dead cobras incentivized them to breed cobras” does not require any invocation of unconscious opposites or psychic transmogrification to explain.
I see one advantage to talking about enantiodromia as a “principle”: diplomacy. If people care about status, it may be easier to tell a would-be reformer “Have you considered that your proposal might hit this mysterious abstract principle?” than “Have you considered that you may be too naive, unimaginative, and/or incompetent to see how your proposal will go wrong?”. (Note that I mean “incompetent” for the task, not compared to average; there may be policy failures that even the most competent person on Earth couldn’t foresee.) Indeed, as I see these references to earlier discussions of the “principle” throughout history, I suspect some of them are cases where people expected to be punished if they openly criticized policy, so they talked about an abstraction instead; I even suspect something similar might be upstream of this thread. (I doubt it would actually work as a tactic, to persuade those who would respond badly to regular criticism; but perhaps people tried it.)
But there are other ways to address the politeness aspect. You can say things like “For any important proposal, we should try to imagine how it could go wrong”, mention perverse incentives if they seem relevant, mention the fun historical examples of screwups, and so on. I don’t think I’m an expert at politeness, but I think it’s a solvable problem. In any case, Less Wrong is not a place where people would talk about a supernatural force because it’s easier to be polite that way.
See also a deliberately malicious example here:
I’m quite in favor of choosing a better name. I’ll start by proposing a long name that I think captures the essence: “Iterated felt-sense introspection”? Then maybe one could drop the “iterated” part for brevity. Other thoughts?