Thinking that evolution is smart on the timescales we care about is probably a worse heuristic, though. Evolution can’t look ahead, which is fine when it’s possible to construct useful intermediate adaptations, but poses a serious problem when there are no useful intermediates. In the case of infosec, it’s as all-or-nothing as it gets. A single mistake exposes the whole system to attack by adversaries. In this case, the attack could destroy the mind of the person using their neural connection.
Consider it from this perspective: a single deleterious mutation to the part of the genome encoding the security system opens the person up to someone else poisoning their mind in serious and sudden ways: consider literal toxins, including the wide variety of organophosphates and other chemicals that inhibit acetylcholinesterase and cause seizures (this is how many pesticides work), but also consider memetic attacks that can cause the person to act against their own interests (yes, language also permits these attacks, but much less efficiently than being able to directly update someone’s beliefs/memories/heuristics/thoughts, which is entirely possible once you open a direct, physical connection to someone’s brain from the outside of their skull—eyes are bad enough, from this perspective!).
A secure system would not only have to be secure for the individual it evolved in, but also be robust to the variety of mutations it will encounter in that individual’s descendants. And the stage in between wherein some individuals have secure neural communication while others can have their minds ravaged by adversaries (or unwitting friends) would prevent any widespread adoption of the genes involved.
Over millions upon millions of years, it’s possible that evolution could devise an ingenious system that gets around all of this, but my guess is that direct neural communication would only noticeably help language-bearing humans, who have existed for only ~100K years. Simpler organisms can just exchange chemicals or other simple signals. I don’t think 100K years is nearly enough time to evolve a robust-to-mutations security system for a process that can directly update the contents of someone’s mind.
I’m not sure what “statistically immoral” means nor have I ever heard the term, which makes me doubt it’s common speech (googling it does not bring up any uses of the phrase).
I think we’re using the term “historical circumstances” differently; I simply mean what’s happened in the past. Isn’t the base rate purely a function of the records of white/black convictions? If so, then the fact that the rates are not the same is the reason that we run into this fairness problem. I agree that this problem can apply in other settings, but in the case where the base rate is a function of history, is it not accurate to say that the cause of the conundrum is historical circumstances? An alternative history with equal, or essentially equal, rates of convictions would not suffer from this problem, right?
I think what people mean when they say things like “machines are biased because they learn from history and history is biased” is precisely this scenario: historically, conviction rates are not equal between racial groups and so any algorithm that learns to predict convictions based on historical data will inevitably suffer from the same inequality (or suffer from some other issue by trying to fix this one, as your analysis has shown).
Didn’t you just show that “machines are biased because they learn from history and history is biased” is indeed the case? The base rates differ because of historical circumstances.
Going along with this, our world doesn’t appear to be the result of each individual making “random” choices in this way. If every good decision was accompanied by an alternate world with the corresponding bad decision, you’d expect to see people do very unexpected things all the time. e.g., this model predicts that each time I stop at a red light, there is some alternate me that just blows right through it. Why aren’t there way more car crashes if this is how it works?
For #1, I’m not sure I agree that not everyone in the room knows. I’ve seen introductions like this at conferences dedicated entirely to proteins where it is assumed, rightly or not, that everyone knows the basics. It’s more that not everyone will have the information cached as readily as the specialists. So I agree that sometimes it is more accurate to say “As I’m sure most of you know” but many times, you really are confident that everyone knows, just not necessarily on the tip of their tongue. It serves as a reminder, not actually new knowledge.
I suppose you could argue: since everyone is constantly forgetting little things here and there, even specialists forget some basics some of the time and so, at any given time, when a sufficiently large number of people is considered, it is very likely that at least one person cannot recall some basic fact X. Thus, any phrase like “everybody knows X” is almost certainly false in a big enough room.
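The arithmetic behind “almost certainly false in a big enough room” is just the complement of everyone recalling the fact. A quick illustrative sketch (the 1% per-person lapse rate is a made-up number, not from the comment):

```python
def prob_someone_cannot_recall(p_lapse, n_people):
    """Chance that at least one of n_people independently fails to
    recall fact X, given a per-person lapse probability p_lapse."""
    return 1 - (1 - p_lapse) ** n_people

# Even a 1% per-person lapse rate makes "everybody knows X" likely false
# in a 200-person conference room:
print(round(prob_someone_cannot_recall(0.01, 200), 3))  # ~0.866
```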
With this definition of knowledge, I would agree with you that the phrase should be “as most of you know” or something similarly qualified. But I find this definition of knowledge sort of awkward and unintuitive. There is always some amount of prompting, some kind of cue, some latency required to access my knowledge. I think “remembers after 30 seconds of context” still counts as knowledge, for most practical purposes, especially for things outside my wheelhouse. Perhaps the most accurate phrase would be something like “As everyone has learned but not necessarily kept fresh in their minds...”
For #2, I should have clarified: this was an abbreviated reference to a situation in an apartment complex I lived in in which management regularly reminded everybody that bears would wreak havoc if trash were left out, and people regularly left trash out, to the delight of the bears. So I think in that scenario, everybody involved really did know, they just didn’t care enough.
Echoing the other replies so far, I can think of other practical explanations for saying “everybody knows...” that don’t fall into your classification.
1) Everybody knows that presenting a fact X to someone who finds X obvious can sometimes give them the impression that you think they’re stupid/uninformed/out-of-touch. For instance, the sentence you just read. For another instance, the first few slides of a scientific talk often present basic facts of the field, e.g. “Proteins comprise one or more chains of amino acids, of which there are 20 natural types.” Everybody who’s a professional biologist/biochemist/bioinformatician/etc. knows this. If you present this information as being even a little bit novel, you look ridiculous. So a common thing to do is to preface such basic statements of fact with “As is well known / As everybody knows / As I’m sure you know / etc.”
No bad faith at all! Just a clarification that your statements are meant to help newcomers or outsiders who may not remember such facts as readily as people who work with them every day.
2) I find myself saying “but everybody knows...” to myself or the person I’m talking to when trying to understand puzzling behavior of others. For example, “everybody knows that if trash bags are left outside the dumpster, bears will come and tear everything up, so why do people keep leaving them there?” In this context, the “everybody knows” clause isn’t meant as a literal truth but as a seemingly reasonable hypothesis in tension with concrete evidence to the contrary. If everybody has been told, repeatedly, that trash is to be put in the dumpster and not next to it, why do they act like they don’t know this? Obviously there is no real mystery here: people do know, they just don’t care enough to put in the effort.
But especially in more complex situations, it often helps to lay out a bunch of reasonable hypotheses and then think about why they might not hold. “Everybody knows …” is a very common type of reasonable hypothesis and so discussion of this sort will often involve good faith uses of the phrase. Put another way: not all statements that look like facts are meant as facts and in particular, many statements are made expressly for the purpose of tearing them down as an exercise in reasoning (essentially, thinking out loud). But if you’re not aware of this dynamic, and it’s done too implicitly, it might seem like people are speaking in bad faith.
I guess what I’m trying to say in general is: “this statement of fact is too obviously false to be a mistake” has two possible implications: one, as you say, is that the statement was made in bad faith. The other, though, is that it’s not a statement of fact. It’s a statement intended to do something more so than to say something.
Of course, even such basic facts aren’t strictly true. There are more than 20 natural amino acids if you include all known species, but, as everybody knows, everybody excludes selenocysteine and pyrrolysine from the canonical list.
The alternative is to exclude these first few slides altogether, but this often makes for a too-abrupt start, and the non-specialists are more likely to get lost partway through without those initial reminders of what’s what.
Simplified examples from my own experience of participating in or witnessing this kind of disagreement:
Poverty reduction: Alice says “extreme poverty is rapidly falling” and Bob replies “$2/day is not enough to live on!” Alice and Bob talked past each other for a while until realizing that these statements are not in conflict; the conflict concerns the significance of making enough money to no longer be considered in “extreme poverty.” The resolution came from recognizing that extreme poverty reduction is important, but that even 0% extreme poverty does not imply that we have solved starvation, homelessness, etc. That is, Alice thought Bob was denying how fast and impressively extreme poverty is being reduced, which he was not, and Bob thought Alice believed approaching 0% extreme poverty was sufficient, when she in fact did not.
Medical progress: Alice says “we don’t understand depression” and Bob replies “yes we do, look at all the anti-depression medications out there.” Alice and Bob talked past each other for a while, with the discussion getting increasingly angry, until it was realized that Alice’s position was “you don’t fully understand a problem until you can reliably fix it” and Bob’s position was “you partially understand a problem when you can sometimes fix it”. These are entirely compatible positions and Alice and Bob didn’t actually disagree on the facts at all!
Free markets: Alice says “free markets are an essential part of our economy” and Bob replies “no they’re not because there are very few free markets in our economy and none of the important industries can be considered to exist within one.” The resolution to this one is sort of embarrassing because it’s so simple and yet took so long to arrive at: Alice’s implicit definition of a free market was “a market free from government interference” while Bob’s implicit definition was “a market with symmetric information and minimal barriers to entry.” Again, while it sounds blindingly obvious why Alice and Bob were talking past each other when phrased like this, it took at least half an hour of discussion among ~6 people to come to this realization.
Folk beliefs vs science: Alice says “the average modern-day Westerner does not have a more scientific understanding of the world than the average modern-day non-Westerner who harbors ‘traditional’/‘folk’/‘pseudoscientific’ beliefs” and Bob replies “how can you argue that germ theory is no more scientific than the theory that you’re sick because a demon has inhabited you?” After much confusing back and forth, it turns out Alice is using the term ‘scientific’ to denote the practices associated with science while Bob is using the term to denote the knowledge associated with science. The average person inculcated in Western society indeed has more practical knowledge about how diseases work and spread than the average person inculcated in their local, traditional beliefs, but both people are almost entirely ignorant of why they believe what they believe and could not reproduce the knowledge if needed, e.g. the average person does not know the biological differences between a virus and a bacterium even though they are aware that antibiotics work on bacteria but not viruses. Once the distinction was made between “science as a process” and “science as the fruits of that process” Alice and Bob realized they actually agreed.
I think the above are somewhat “trivial” or “basic” examples in that the resolution came down to clearly defining terms: once Alice and Bob understood what each was claiming, the disagreement dissolved. Some less trivial ones for which the resolution was not just the result of clarifying nebulous/ambiguous terms:
AI rights: Alice says “An AGI should be given the same rights as any human” and Bob replies “computer programs are not sentient.” After much deliberation, it turns out Alice’s ethics are based on reducing suffering, where the particular identity and context surrounding the suffering don’t really matter, while Bob’s are based on protecting human-like life, with the moral value of entities rapidly decreasing as an inverse function of human-like-ness. Digging deeper, for Alice, any complex system might be sentient and the possibility of a sentient being suffering is particularly concerning when that being is traditionally not considered to have any moral value worth protecting. For Bob, sentience can’t possibly exist outside of a biological organism and so efforts into ensuring that computer programs aren’t deleted while running are a distraction that seems orthogonal to ethics. So while the ultimate question of “should we give rights to sentient programs?” was not resolved, a great amount of confusion was reduced when Alice and Bob realized they disagree about a matter of fact—can digital computers create sentience?—and not so much about how to ethically address suffering once the matter of who is suffering has been agreed on (Actually, it isn’t so much a “matter of fact” since further discussion revealed substantial metaphysical disagreements between Alice and Bob, but at least the source of the disagreements was discovered).
Government regulation: Alice says “the rise of the internet makes it insane to not abolish the FDA” and Bob replies “A lack of drug regulation would result in countless deaths.” Alice and Bob angrily, vociferously disagree with each other, unfortunately ending the discussion with a screaming match. Later discussion reveals that Alice believes drug companies can and will regulate themselves in the absence of the FDA and that 1) for decades now, essentially no major corporation has deliberately hurt their customers to make more profit and that 2) the constant communication enabled by the internet will educate customers on which of the few bad apples to avoid. Bob believes drug companies cannot and will not regulate themselves in the absence of the FDA and that 1) there is a long history of corporations hurting their customers to make more profit and that 2) the internet will promote just as much misinformation as information and will thus not alleviate this problem. Again, the object-level disagreement—should we abolish the FDA given the internet?—was not resolved, but the reason for that became utterly obvious: Alice and Bob have *very* different sets of facts about corporate behavior and the nature of the internet.
How to do science: Alice says “you should publish as many papers as possible during your PhD” and Bob replies “paper count is not a good metric for a scientist’s impact.” It turns out that Alice was giving career advice to Carol in her particular situation while Bob was speaking about things in general. In Carol’s particular, bespoke case, it may have been true that she needed to publish as many papers as possible during her PhD in order to have a successful career even though Alice was aware this would create a tragedy-of-the-commons scenario if everyone were to take this advice. Bob didn’t realize Alice was giving career advice instead of her prescriptive opinion on the matter (like Bob was giving).
Role playing: Alice says “I’m going to play this DnD campaign as a species of creature that can’t communicate with most other species” and Bob replies “but then you won’t be able to chat to your fellow party members or share information with them or strategize together effectively.” Some awkwardness ensued until it became clear that Alice *wanted* to be unable to communicate with the rest of the party due to anxiety-related concerns. Actually, realizing this didn’t really reduce the awkwardness, since it was an awkward situation anyway, but Alice and Bob definitely talked past each other until the difference in assumptions was revealed. Had Bob realized what Alice’s concerns were to begin with, he probably would not have initiated the conversation: he didn’t have a problem with a silent character but simply wanted to ensure Alice understood the consequences, and the discussion revealed that she did.
Language prescriptivism: Alice says “that’s grammatically incorrect” and Bob replies “there is no ‘correct’ or ‘incorrect’ grammar—language is socially constructed!” Alice and Bob proceed to have an extraordinarily unproductive discussion until Alice points out that while she doesn’t know exactly how it’s decided what is correct and incorrect in English, there *must* be some authority that decides, and that’s the authority she follows. While Alice and Bob did not come to an agreement per se, it became clear that what they really disagreed about was whether or not the English language has a definitive authority, not whether or not one should follow the authority assuming it exists.
I’m going to stop here so the post isn’t too long, but I very much enjoyed thinking about these circumstances and identifying the “A->B” vs “X->Y” pattern. So much time and emotional energy was wasted that could have been saved had Alice and Bob first established exactly what they were talking about.
One notable aspect in my experience with this is that exhaustion is not exclusively a function of the decision’s complexity. I can experience exhaustion when deciding what to eat for dinner, for instance, even though I’ve made similar decisions literally thousands of times before, the answer is always obvious (cook stuff I have at home or order from a restaurant I like—what else is there?), and the stakes are low (“had I given it more thought, I would have realized I was more in the mood for soup than a sandwich” is not exactly a harrowing loss).
Another aspect to note is that decisions that end up exhausting me usually entail doing work I don’t want to do. I never get exhausted when deciding where to hike, for instance, because no matter what I know I will enjoy myself, even if one spot requires a long drive, or inconvenient preparations, or whatever. One possibility is that part of me recognizes that the correct decision will inevitably cause me to do work I don’t want to do. Actually deciding sets whatever work I have to do into motion while “deliberating” endlessly lets me put it off, which might end up feeling internally like the decision is hard to make. A motivated mind is great at coming up with bogus reasons for why an obvious decision is not so obvious.
A key insight for me was recognizing that my reluctance to do work is pretty directly inversely proportional to what I expect the value of its product to be, biased towards short-term gains unless I explicitly visualize the long-term consequences. If I realize that the best decision for dinner is to cook, and that reminds me that I need to do dishes and chop vegetables and clean the stove, etc. etc., then I have a hard time “deciding” that cooking is the way to go because it implies that in the short term, I will be less happy than I am currently. If I think about the scenario where I procrastinate and don’t cook, and focus on how hungry I will be and how unpleasant that feeling is, then my exhaustion often fades and the decision becomes clearer.
Thanks for the spot check! I had heard this number (~4 hours per day) as well and I now have much less confidence in it. That most of the cited studies focus on memorization / rote learning seriously limits their generality.
Anecdotally, I have observed soft limits for the amount of “good work” I can do per day. In particular, I can do good work for several hours in a day but—somewhat mysteriously—I find it more difficult to do even a couple hours of good work the next day. I say “mysteriously” because sometimes the lethargy manifests itself in odd ways but the end result is always less productivity. My folk theory-ish explanation is that I have some amount of “good work” resources that only gradually replenish, but I have no idea what the actual mechanism might be and my understanding is that ego depletion has not survived the replication crisis, so I’m not very confident in this.
While a true Bayesian’s estimate already includes the probability distributions of future experiments, in practice I don’t think it’s easy for us humans to do that. For instance, I know based on past experience that a documentary on X will not incorporate as much nuance and depth as an academic book on X. I *should* immediately reduce the strength of any update to my beliefs on X upon watching a documentary given that I know this, but it’s hard to do in practice until I actually read the book that provides the nuance.
In a context like that, I definitely have experienced the feeling of “I am pretty sure that I will believe X less confidently upon further research, but right now I can’t help but feel very confident in X.”
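The “true Bayesian” point above is sometimes called conservation of expected evidence: your current estimate should already equal the average of your possible future estimates, weighted by how likely each one is. A minimal sketch with a beta-binomial coin model (an illustrative example of mine, not from the comment):

```python
from fractions import Fraction

# A Beta(a, b) prior on a coin's heads-probability; Fraction gives exact arithmetic.
a, b = Fraction(2), Fraction(2)
prior_mean = a / (a + b)

# Posterior mean after one observed flip (standard beta-binomial update).
post_if_heads = (a + 1) / (a + b + 1)
post_if_tails = a / (a + b + 1)

# Probability of heads under the prior predictive distribution.
p_heads = a / (a + b)

# The prior already "prices in" every possible future observation:
expected_posterior = p_heads * post_if_heads + (1 - p_heads) * post_if_tails
assert expected_posterior == prior_mean
```

So if you can predict now that the book will push your estimate down, a true Bayesian would fold that in immediately; the difficulty described above is that humans can rarely do this folding-in before actually encountering the evidence.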
Other than both being pictographic, I’m not sure emoticons and reactions are that related. Emoticons are either objects (neither here nor there for our purposes) or facial/bodily expressions. Reactions are emotional or high-level responses to information.
You can’t really express the thumbs-up reaction with a facial expression emoticon. You can use a smiley face or something similar, but thumbs-up means approval, not happiness. If someone says “I’ll be five minutes late—start without me” I don’t want to express happiness at this, but I do want to acknowledge it and (if this is the case) say it’s OK. A thumbs-up does this wonderfully: by definition, it means I have acknowledged the message, and it signals approval rather than disapproval, but nothing else. You can’t really do that with emoticons.
I think there are lots of situations in which reactions can do things emoticons can’t, and I’ve found that I notice nice opportunities for reactions more when I’m in an environment in which they’re readily available.
(Just an attempt at an answer)
Both an explanation and a prediction seek to minimize the loss of information, but the information under concern differs between the two.
For an explanation, the goal is to make it as human understandable as possible, which is to say, minimize the loss of information resulting from an expert human predicting relevant phenomena.
For a prediction, the goal is to make it as machine understandable as possible, which is to say, minimize the loss of information resulting from a machine predicting relevant phenomena.
The reason there isn’t a crisp distinction between the two is because there isn’t a crisp distinction between a human and a machine. If humans had much larger working memories and more reliable calculation abilities, then explanations and predictions would look more similar: both could involve lots of detail. But since humans have limited memory and ability to calculate, explanations look more “narrative” than predictions (or from the other perspective, predictions look more “technical” than explanations).
Note that before computers and automation, machine memory and calculation wasn’t always better than the human equivalent, which would have elided the distinction between explanation and prediction in a way that could never happen today. e.g., if all you have to work with is a compass and straightedge, then any geometric prediction is also going to look like an explanation because we humans grok the compass and straightedge in a way we’ll never, without modifications anyway, grok the more technical predictions modern geometry can make. The exceptions that prove the rule are very long geometric methods/proofs, which strain human memory and so feel more like predictions than methods/proofs that can be summarized in a picture.
As machines get more sophisticated, the distinction will grow larger, as we’ve already seen in debates about whether automated proofs with 10^8 steps are “really proofs”—this gets at the idea that if the steps are no longer grokable by humans, then it’s just a prediction and not an explanation, and we seem to want proofs to be both.
I think what he’s saying is that the existence of noise in computing hardware means that any computation done on this hardware must be (essentially) invariant to this noise, which leads the methods away from the precise, all-or-nothing logic of discrete math and into the fuzzier, smoother logic of probability distributions and the real line. This makes me think of analog computing, which is often done in environments with high noise and can indeed produce computations that are mostly invariant to it.
But, of course, analog computing is a niche field dwarfed by digital computing, making this prediction of von Neumann’s comically wrong: the solution people went with wasn’t to re-imagine all computations in a noise-invariant way, it was to improve the hardware to the point that the noise becomes negligible. But I don’t want to sound harsh here at all. The prediction was so wrong only because, at least as far as I know, there was no reasonable way to predict the effect that transistors would have on computing in the ’50s since they were not invented until around then. It seems reasonable from that vantage point to expect creative improvements in mathematical methods before a several-orders-of-magnitude improvement in hardware accuracy.
The pre/post conflation reminds me of Terence Tao’s discussion of math pre/post proofs (https://terrytao.wordpress.com/career-advice/theres-more-to-mathematics-than-rigour-and-proofs/), which I’ve found to be a helpful guide in my journeys through math. I’m not surprised the distinction occurs more widely than in just math, but this post has encouraged me to keep the concept on hand in contexts outside of math.
I also enjoyed the discussion about how various religions are all getting at the same concepts through different lenses/frameworks. As an atheist, I have no interest in, say, Christianity per se; I enjoy learning about the historical, psychological, and sociological components in the same way I enjoy learning about many aspects of humanity, but I’m not really interested in things like grace or transubstantiation or exegesis because it all falls under the label “false” or “irrelevant”. Having said that, I’m also very much aware that many Christian thinkers have insights that are relevant even for people who don’t share their belief in God. But I can’t get myself to slog through writing that is mostly false/irrelevant just to glean some nuggets of wisdom.
It would be excellent to find a book that synthesizes all of the most insightful aspects of the major religions, strips them of their cultural/theological labels into something more generic, and presents the stuff that’s been “replicated” (in the sense of multiple religions all coming to the same conclusion modulo cultural/theological labels). Do you know of a book that does this? Is Integral Spirituality a good example? It seems like it’s in the right ballpark, or at least would reference many books that are.
I’m a little confused about how the burden of proof ended up as it is in this discussion. I think most people intuitively understand that blackmail is a bad thing. That they are not able to articulate a rigorous, general argument for why seems like a much higher bar than we expect for other things.
Consider murder. Murder should be illegal, obviously (I hope?! Not sure there is much to discuss if we disagree on that). But it’s not trivial to construct a rigorous, general argument for why. Any demonstrated harm can be countered with another hypothetical in which some convoluted chain of events following the murder creates a net benefit.
“Oh, it’s always bad to shoot a stranger in the street? What if you have an uncanny ability to identify serial killers and recognize some stranger as one who’s about to kill again and you can prevent even more deaths by shooting them on sight?”
“You think killing your unfaithful spouse should be illegal? What about if the knowledge that they’re continuing to see other people causes you such great psychological harm that killing the spouse is actually less bad, huh?”
And you don’t really have to go to silly extremes like that to generate counterexamples; most claims that “murder should be illegal” implicitly except killing for self-defense, during wartime or for national security reasons, euthanasia, late-term abortion for medical reasons, and the death penalty itself. I’m not saying everyone has the same list of exceptions, but I’ve never met anyone who rejects all of those simultaneously and claims that literally all murder, no matter the context or consequences, should be illegal. That’s not what people mean when they say murder should be illegal.
How is the blackmail situation any different? It’s trivial to come up with endless exceptions in which the blackmail is more like whistleblowing and provides a net benefit. I have two responses to that. First off, even if blackmail were legalized, the government would never allow routine whistleblowing. There are always a million exceptions when it comes to how three letter organizations and their information are treated legally. But more importantly, why does the possibility of black swan whistleblowing scenarios make the law worse to have than not? There are plenty of exceptions to the general rule that murder is bad and yet we still have laws against murder, right?
Something I didn’t notice in the comments is how to handle the common situation that Bob is a one-hit wonder. Being a one-hit wonder is pretty difficult; most people are zero-hit wonders. Being a two-hit wonder is even more difficult, and very few people ever create many independent brilliant ideas / works / projects / etc.
Keeping that in mind, it seems like a bad idea to make a precedent of handing out epistemic tenure. Most people are not an ever-flowing font of brilliance and so the case that their one hit is indicative of many more is much less likely than the case that you’ve already witnessed the best thing they’ll do.
Just anecdotally, I can think of many public intellectuals who had one great idea, or bundle of ideas, and now spend most of their time spouting unrelated nonsense. And, troublingly, the only reason people take their nonsense seriously is that there is, at least implicitly, some notion of epistemic tenure attached to them. These people are a tremendous source of “intellectual noise”, so to speak, and I think discourse would improve if the Bobs out there had to demonstrate the validity of their ideas from as close to scratch as possible rather than getting an endless free pass.
My biggest hesitation with never handing out intellectual tenure is that it might make it harder for super geniuses to work as efficiently. Would von Neumann have accomplished what he did if he had to compete as if he were just another scientist over and over? But I think a lack of intellectual tenure would have to really reduce super genius efficiency for it to make up for all the noise it produces. There are just so many more one-hit wonders than (N>1)-hit wonders.
I would call this a good visual representation of technical debt. I like to think of it as chaining lots of independently reasonable low order approximations until their joint behavior becomes unreasonable.
It’s basically fine to let this abstraction be a little leaky, and it’s basically reasonable to let that edge case be handled clumsily, and it’s basically acceptable to assume the user won’t ever give this pathological input, etc., until the number of “basically reasonable” assumptions N becomes large enough that 0.99^N ends up less than 0.5 (or some other unacceptably low probability of success). And even with a base as high as 0.99, the N that breaks 50% is only ~70!
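The arithmetic is easy to sanity-check. Here's a minimal Python sketch (the function name is my own) that finds the smallest number of independently "basically reasonable" assumptions, each holding with probability 0.99, needed to drag the joint probability of success below a given threshold:

```python
import math

def assumptions_to_break(p_each: float, threshold: float = 0.5) -> int:
    """Smallest N such that p_each ** N falls below threshold,
    treating the N assumptions as independent."""
    return math.ceil(math.log(threshold) / math.log(p_each))

# With p_each = 0.99, the joint success probability dips below 50%
# at N = 69 -- the "~70" figure above.
n = assumptions_to_break(0.99)
```

The same function shows how sensitive the breaking point is to the base: at 0.95 per assumption, it only takes 14 stacked assumptions to fall under 50%.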
The visual depiction of this as parts being stacked such that each additional part is placed in what looks to be a reasonable way but all the parts together look ridiculously fragile is excellent! It really emphasizes that this problem mode can only be understood with a global, rather than a local or incremental, view.
Alternative hypothesis: the internet encourages people who otherwise wouldn’t contribute to the general discourse to contribute to it. In the past, contributing meant writing some kind of article, or at least a letter to the editor, which 1) requires a basic level of literacy and intellectual capacity, and 2) provides a filter, removing the voices of those who can’t write something publishers consider worthy of publication (with higher-influence publications having, in general, stricter filters).
Anecdote in point: I have yet to see an internet comment that I couldn’t imagine one of my relatives writing (sorry, relatives, but a few of y’all have some truly dumb opinions!). But these relatives I have in mind wouldn’t have contributed to the general discourse before the internet was around, so if you don’t have That Uncle in your family you may not have been exposed to ideas that bad before seeing YouTube comments.
Last minute edit: I mean that I have yet to see an internet comment that I couldn’t imagine one of my relatives writing years and years ago, i.e. I expect that we would have seen 2018 level discourse in 2002 if That Uncle had posted as much in 2002 as in 2018.
I really like this framework! I’ve noticed that if someone makes a comment that assumes everyone in the group has CI, but I’m not sure if everyone does, I get a sense of awkwardness and feel the need to model two conversations: the one happening assuming everyone has CI, and the one happening assuming at least one person doesn’t. This has the unfortunate side effect of consuming most of my thought-bandwidth, which makes me boring and quiet even if I would have otherwise been engaged and talkative.
I think most political opinions are opinions about priorities of issues, not issues per se. I remember from years ago, before most states had started legalizing same sex marriage, a relative of mine expressing the sentiment “I’m not against legalizing gay marriage, I just don’t want to hear about the topic ever again.” I think this is the attitude that the (admittedly very obnoxious and frustrating) party guest is concerned about. If more people held the opinion of my relative then we’d be stuck in a bad equilibrium, with everyone agreeing that they would be OK with same sex marriage but no one bothering to put in the effort to legalize it.
It doesn’t matter if everyone agrees X is an issue if everyone also believes that solving the much more difficult Y should always take priority over solving X; this has the same consequences as a world in which no one believes X is an issue. Of course, that doesn’t mean you should go around yelling at people for not being obsessed with your favorite obsession. But I think “unconscious selfishness” and “mind viruses” are uncharitable explanations for what seems to be a reasonable concern: low-priority tasks often never get completed, so those who claim to support a cause, but only at low priority, are effectively not supporting it.
Having said that, I completely agree with your larger point about diversity—I would much prefer a world in which people can obsess over what they want to obsess over even when their obsessions and lack-of-obsessions are contrarian.