“G” fits my own understanding best: “Not Okay” is a generalized alarm state, and the ambiguity is a feature, not a bug.
(Generally) we have an expectation that things are supposed to be “Okay” so when they’re not, this conflict is uncomfortable and draws attention to the fact that “something is wrong!”. What exactly it takes to provoke this alarm into going off depends on the person/context/mindset because it depends on (what they realize) they haven’t already taken into account, and that’s kinda the point. For example, if you’re on a boat and notice that you’re on a collision course with a rock you might panic a bit and think “We have to change course!!!”, which is an example of “things not being okay”. However, the driver might already see the rock and be Okay because the “trajectory” he’s on includes turning away from the rock, so there’s no danger. And of course, other passengers may be in Okay Mode because they fail to see the rock, or because they kinda see the rock but are averse to being Not Okay and therefore try to ignore it as long as possible.
In that light, “Everything is Okay” is reassurance that the alarm can be dismissed. Maybe it’s because the driver already sees the rock. Maybe it’s because our “boat” is actually a hovercraft which will float right over the rock without issue. Maybe we actually will hit the rock, but there’s nothing we can do to avoid it, and the damages will be acceptable. Getting people back into Okay Mode is an exercise in getting people to believe that one of these is true, and you don’t necessarily have to specify which one if they trust you; if the details are important, that’s what the rest of the conversation is for.
The best way to get the benefits of ‘okay’ in avoiding giant stress balls, while still retaining the motivation to act and address problems or opportunities, is to “just” engage with the situation without holding back.
Okay, so we’re headed for a rock, now what? If that’s alarming then it’s alarming. Are we actually going to hit it if we simply dismiss the alarm and go back to autopilot? If so, would that be more costly than the cost of the stress needed to avert it? What can we actually do to stop it? Can we just talk to the driver? Is that likely to work?
If that’s likely to work and you’re on track to doing that, then “can we sanely go back to autopilot?” can evaluate as “yes” again and we can go back to Okay Mode—at least, until the driver doesn’t listen and we no longer expect our autopilot to handle the situation satisfactorily. You get to go back to Okay Mode as soon as you’ve taken the new information into account and gotten back on a track you’re willing to accept over the costs of stressing more.
“The Kensho thing”, as I see it, is the recognition that these alarms aren’t “fundamental truths” where the meaning resides. They’re momentary alarms that call for the redirection of one’s attention, and the ultimate place that everything resolves to after doing your homework and integrating all the information is back to a state which calls for no alarms. That’s why it’s not “nothing matters, everything is equally good” or “you’ll feel good no matter what once you’re enlightened”—it’s just “Things are okay; on a fundamental level alarms are not called for, behaviors are, and it’s my job to figure out which. If I’m not okay with things, that signals a problem with me in that I have not yet integrated all the information available and gotten back on my best-possible-track”. So when your friend dies or you realize that humanity is going to be obliterated, it’s not “Lol, that’s fine”, it’s room to keep not only a drive to do something about it, but also a drive to stare reality in the face as much as you can manage, to regulate how much you stare at painful truths so that you keep your responses productive, and a desire to up one’s ability to handle unpleasant conflict.
How should one react to those who are primarily optimizing for being in Okay Mode at the expense of other concerns
Fundamentally, it’s a problem of aversion to unpleasant conflict. Sometimes they won’t actually see the problem here so it can be complicated by their endorsement of avoidance, but even in those cases it’s probably most productive to ignore their own narratives and instead directly address the thing that’s causing them to want to avoid.
Shoving in their face more reasons to be Not Okay is likely to trigger more avoidance, so instead of arguing “Here’s how closing your eyes means you’re more likely to fail to avoid the rock, and therefore kill everyone. Can you imagine how unfun drowning will be?” (which I would expect to lead to more rationalizations/avoidance), I’d focus on helping them be comfortable. More “Yeah, it’s super unfun for things to be Not Okay, and I can’t blame you for not wanting to do it more than necessary”/“Yes, it’s super important to be able to regulate one’s own level of Okayness, since being an emotional wreck often makes things worse, and it’s good that you don’t fail in that way”.
Of course, you don’t want to just make them comfortable staying in Okay Mode because then there’s no motivation to switch, so when there’s a little more room to introduce unpleasant ideas without causing folding, you can place a little more emphasis on the ways in which their avoidance does fail them, and how completely avoiding stress isn’t ideal or consequence-free either.
It’s a bit of a balancing act, and more easily said than done. You have to be able to pull off sincerity when you reassure them that you get where they’re coming from and that their avoidance really is better than the failure they fear, all without “Not Okaying” at them by pushing “It’s Not Okay that you feel Okay!”. It’s a lot easier when you can be Okay that they’re in Okay Mode because they’re Not Okay with being Not Okay, partially just because externalizing one’s alarms as a flinch is rarely the most helpful way of doing things, but also because if you’re Okay you can “go first” and give them a proof of concept and reference example for what it looks like to stare at the uncomfortable thing (or uncomfortable things in general) and stay in Okay Mode. It helps them know “Hey, this is actually possible”, and feel like you might even be able to help them get closer to it.
or those who are using Okay as a weapon?
Again, I’d just completely disregard their narratives on this one. They’re implying that if you’re Not Okay, then it’s a “you problem”. So what? Make sure they’re wrong and demonstrate it.
“God, it’s just a little fib. Are you okay??”
“Not really. I think honesty about these kinds of things is actually extremely important, and I’m still trying to figure out where I went wrong expecting not to have that happen”
“Yeah, no, I’m fine. I just want to make sure that these people know your history when deciding how much to trust you”.
“Content moderation” is not always a bad thing, but you can’t jump directly from “Content moderation can be important” to “Banning Trump, on balance, will not be harmful”.
The important value behind freedom of association is not in conflict with the important value behind freedom of speech, and it’s possible to decline to associate with someone without it being a violation of the latter principle. If LW bans someone because they’re [perceived to be] a spammer that provides no value to the forum, then there’s no freedom of speech issue. If LW starts banning people for proposing ideas that are counter to the beliefs of the moderators because it’s easier to pretend you’re right if you don’t have to address challenging arguments, then that’s bad content moderation and LW would certainly suffer for it.
The question isn’t over whether “it’s possible for moderation to be good”, it’s whether the ban was motivated in part or full by an attempt to avoid having to deal with something that is more persuasive than Twitter would like it to be. If this is the case, then it does change the ultimate point.
What would you expect the world to look like if that weren’t at all part of the motivation?
What would you expect the world to look like if it were a bigger part of the motivation than Twitter et al would like to admit?
The world would be better if people treated more situations like the first set of problems, and fewer situations like the second set of problems. How to do that?
It sounds like the question is essentially “How to do hard mode?”.
On a small scale, it’s not super intimidating. Just do the right thing and take your spouse to the place you both like. Be someone who cares about finding good outcomes for both of you, and marry someone who sees it. There are real gains here, and with the annoyance you save yourself by not sacrificing for the sake of showing sacrifice, you can maintain motivation to sacrifice when the payoff is actually worth it—and to find opportunities to do so. When you can see that you don’t actually need to display that costly signal, it’s usually a pretty easy choice to make.
Forging a deeper and more efficient connection does require allowing potential for conflict so that you can distinguish yourself from the person who is only doing things for shallow/selfish reasons. Distinguish yourself by showing willingness to entertain such accusations, knowing that the truth will show through. Invite those conflicts when you have enough slack to turn it into play, and keep enough slack that you can. “Does this dress make my ass look fat?”—can you pull off “The *dress* doesn’t, no” and get a laugh, or are you stuck where there’s only one acceptable answer? If you can, demonstrate that it’s okay to suggest the “unthinkable” and keep poking until you can find the edge of the envelope. If not, or when you’ve reached the point where you can’t, then stop and ask why. Address the problem. Rinse and repeat with the next harder thing, as you become ready to.
On a larger scale, it gets a lot harder. You can no longer afford to just walk away from anyone who doesn’t already mostly get it, and you don’t have so much time and attention to work. There are things you can do, and I don’t want to suggest that it’s “not doable”. You can start to presuppose the framings that you’ve worked hard to create and justify in the past, using stories from past experience and social proof to support them in the cases where you’re challenged—which might be less than you think, since the ability to presuppose such things without preemptively flinching defensively can be powerful subcommunication. You can start to build social groups/communities/institutions to scale these principles, and spread to the extent that your extra ability to direct motivation towards good outcomes allows you to out-compete the alternatives.
I just don’t get the impression that there’s any “easy” answer. If you want people to donate to your political campaign even though you won’t play favorites like the other guy will, I think you genuinely have to be able to expect that your donors will be more personally rewarded by the larger total pie and the recognition of doing the right thing than they will be in the alternative where they donate to have someone fight to give them more of a smaller pie—and are perceived however you let that be perceived.
This answer is great because it takes the problem with the initial game (one person gets to update and the other doesn’t) and returns the symmetry by allowing both players to update. The end result shows who is better at Aumann updating and should get you closer to the real answer.
If you’d rather know who has the best private beliefs to start with, you can resolve the asymmetry in the other direction and make everyone commit to their numbers before hearing anyone else’s. This adds a slight bit of complexity if you can’t trust the competitors to be honest, but it’s easily solved by either paper/pencil or everyone texting their answer to the person who is going to keep their phone in their pocket and say their answer first.
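For anyone who wants the commitment step to be tamper-proof over text, a hash-based commit-and-reveal works; here’s a minimal sketch of one way to do it (the scheme and names are my own illustration, not something anyone above proposed):

```python
import hashlib
import secrets

def commit(answer: str) -> tuple[str, str]:
    """Return (commitment, salt). Share the commitment; keep the salt private."""
    salt = secrets.token_hex(16)  # random salt prevents guessing short answers
    return hashlib.sha256((salt + answer).encode()).hexdigest(), salt

def verify(commitment: str, answer: str, salt: str) -> bool:
    """Once everyone reveals, check each claimed answer against its commitment."""
    return hashlib.sha256((salt + answer).encode()).hexdigest() == commitment

# Each player texts their commitment before anyone says a number aloud.
c, salt = commit("0.7")            # e.g. a probability estimate
assert verify(c, "0.7", salt)      # honest reveal checks out
assert not verify(c, "0.9", salt)  # a quietly revised answer is caught
```

Everyone texts the hash first, then reveals answer and salt; changing your number after hearing someone else’s makes the check fail.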
The official recommendations are crazy low. Zvi’s recommendation here of 5000IU/day is the number I normally hear from smart people who have actually done their research.
The RCT showing vitamin D to help with covid used quite a bit. This converter from mg to IU suggests that the dose is at least somewhere around 20k on the first day and a total of 40k over the course of the week. The form they used (calcifediol) is also more potent, and if I’m understanding the following comment from the paper correctly, that means the actual number is closer to 200k/400k. (I’m a bit rushed on this, so it’s worth double checking here)
In addition, calcifediol is more potent when compared to oral vitamin D3. In subjects with a deficient state of vitamin D, administering physiological doses (up to 25 μg or 1000 IU daily), approximately 1 in 3 molecules of vitamin D appears as 25OHD; the efficacy of conversion is lower (about 1 in 10 molecules) when pharmacological doses of vitamin D/25OHD are used.
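To spell out the arithmetic (a sketch only; I’m assuming the standard 1 μg = 40 IU conversion and reading the trial’s schedule as 0.532 mg on day 1 plus 0.266 mg on days 3 and 7, so double-check these inputs against the paper):

```python
# Dose conversion sketch. Inputs are my reading of the trial and the
# quoted conversion efficiencies; verify before relying on them.
IU_PER_UG = 40                    # standard: 1 ug of vitamin D = 40 IU
UG_PER_MG = 1000

day1_mg = 0.532
week_mg = 0.532 + 0.266 + 0.266   # day 1 plus days 3 and 7

day1_iu = day1_mg * UG_PER_MG * IU_PER_UG   # ~21,000 IU
week_iu = week_mg * UG_PER_MG * IU_PER_UG   # ~43,000 IU

# Calcifediol already *is* 25OHD, while at pharmacological doses only
# ~1 in 10 D3 molecules gets converted (vs ~1 in 3 at physiological
# doses), suggesting a potency factor of roughly 10.
potency = 10
print(f"day 1: ~{day1_iu:,.0f} IU raw, ~{day1_iu * potency:,.0f} IU D3-equivalent")
print(f"week:  ~{week_iu:,.0f} IU raw, ~{week_iu * potency:,.0f} IU D3-equivalent")
```

That’s where the ~20k/40k raw numbers and the ~200k/400k potency-adjusted numbers come from.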
I’ve always been confused why the official recommendations for vitamin D are so darn low, but it seems that there might be an answer that is fairly straightforward (and not very flattering to those coming up with the recommended values). It looks like it might be a simple conflation between the “standard error of the mean” and the “standard deviation” of the population itself.
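To make the alleged conflation concrete, here’s a toy sketch (invented numbers, not the actual RDA data): the standard error of the mean shrinks as the study grows, while the population spread does not, so a “covers 97.5% of people” dose computed from the SEM covers far fewer than 97.5% of actual people.

```python
import random
import statistics

random.seed(0)

# Hypothetical dose (IU) each person needs to hit a target serum level.
population = [random.gauss(3000, 2000) for _ in range(10_000)]
sample = population[:500]                # the "study"

mean = statistics.mean(sample)
sd = statistics.stdev(sample)            # estimate of population spread
sem = sd / len(sample) ** 0.5            # uncertainty in the *mean* only

rda_from_sd = mean + 2 * sd      # ~7,000 IU: covers ~97.5% of people
rda_from_sem = mean + 2 * sem    # ~3,200 IU: only pins down the average person

covered = sum(need <= rda_from_sem for need in population) / len(population)
print(f"SD-based:  {rda_from_sd:,.0f} IU")
print(f"SEM-based: {rda_from_sem:,.0f} IU (covers only {covered:.0%} of people)")
```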
(If you’re worried about the difference being due to random chance, feel free to multiply the number of animals by a million.)
They vary from these patterns, but never enough that they are flying the same route on the same day at the same time at the same time of year. If you want to compare, you can group flights by cities or day or time or season, but not all of them.
The problem you’re using Simpson’s paradox to point at does not have this same property of “multiplying the size of the data set by arbitrarily large numbers doesn’t help”. If you can keep taking data until random chance is no issue, then they will end up having sufficient data in all the same subgroups, and you can just read the correct answer off the last million times they both flew in the same city/day/time/season simultaneously.
The problem you’re pointing at fundamentally boils down to not having enough data to force your conclusions, and therefore needing to make a judgment call about how important season is compared to time of day, so that you can determine when conditioning on more factors will help relevance more than it will hurt by adding more noise.
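A toy version of the contrast, with invented numbers: in a Simpson’s-paradox case, scaling up the data settles the question, because the subgroup comparisons stay readable.

```python
# Pilot A beats pilot B on every route, but loses in aggregate because
# A flies the hard route more often. Tuples: (route, flights, successes).
a = [("easy", 100, 95), ("hard", 900, 720)]
b = [("easy", 900, 810), ("hard", 100, 70)]

def overall(groups):
    return sum(s for _, _, s in groups) / sum(n for _, n, _ in groups)

for route in ("easy", "hard"):
    ra = next(s / n for r, n, s in a if r == route)
    rb = next(s / n for r, n, s in b if r == route)
    print(f"{route}: A={ra:.0%} vs B={rb:.0%}")        # A wins both subgroups
print(f"overall: A={overall(a):.0%} vs B={overall(b):.0%}")  # B wins overall

# Multiply every count by a million and the subgroup reads only get
# sharper -- unlike the flight case above, where no two flights ever
# share all of city, day, time, and season at once.
```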
Hypothetically, what would the right response be if you noticed that one of the main vaccine trials has really terrible blinding (e.g. participants are talking about how to tell whether you get the placebo in the waiting room)?
It seems like it would really mess up the data, probably resulting in the people who got the vaccine taking extra risk and leading the study to understate the effectiveness. Ideally, “tell the researchers” would be the obvious right answer, but are there perverse incentives at play that make the best response something else?
If I didn’t have people thanking me every week for doing these, it would be difficult to keep going.
Thanks Zvi. The effort is definitely appreciated.
There were 50 patients in the treatment group. None were admitted to the ICU. There were 26 patients in the control group. Half of them, 13 out of 26, were admitted to the ICU. So 13⁄26 vs. 0⁄50.
That’s not what the paper says:
Of 50 patients treated with calcifediol, one required admission to the ICU (2%),
The conclusions still hold, of course.
Adjusting in the other direction seems useful as well. If someone Strong Upvotes ten times less frequently than average I would want to see their strong upvote as worth somewhat more.
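For concreteness, one way such an adjustment could look (entirely my own sketch; the formula, the square-root damping, and the cap are invented, and nothing like this is claimed to exist on LW):

```python
def adjusted_strong_vote(base_weight: float,
                         user_rate: float,
                         avg_rate: float,
                         cap: float = 3.0) -> float:
    """Scale a strong vote up the more sparingly its owner uses them.

    Square-root damping keeps the boost modest, and the cap stops a
    once-a-year strong-voter from swinging threads single-handedly.
    """
    rarity = avg_rate / max(user_rate, 1e-9)  # >1 for sparing voters
    return base_weight * min(rarity ** 0.5, cap)

# Someone who strong-upvotes 10x less often than average:
print(adjusted_strong_vote(base_weight=8, user_rate=0.01, avg_rate=0.1))  # 24.0
```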
Voting based on current karma is a good thing.
Without that, a post that is unanimously barely worth upvoting will get an absurd number of upvotes, while another post which is recognized as earth-shatteringly important by 50% will fail to stand out. Voting based on current karma gives you a measure of the *magnitude* of people’s like for a comment as well as the direction, and you don’t want to throw that information out.
If everyone votes based on what they think the total karma should be, then a post’s karma reflects [a weighted average of opinions on what the post’s total karma should be] rather than [a weighted average of opinions on the post].
This isn’t true.
If people vote based on what the karma should be, the final value you get is the median of what people think the karma should be—i.e. a median of people’s opinion of the post. If you force people to ignore the current karma, you don’t actually get a weighted average of opinions on the post because there’s very little flexibility in how strongly you upvote a post. In order to get that magnitude signal back, you’d have to dilute your voting with dither, and while that will no doubt happen to some extent (people might be too lazy to upvote slightly-good posts, but will make sure to upvote great ones), you will get an overestimate of the value of slightly-good posts.
This is bad, because the great posts hold a disproportionate share of the value, and we very much want them to rise to the top and stand out above the rest.
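A quick simulation of the claim that target-voting converges on the median (a sketch under simplified assumptions: voters arrive at random, see the current karma, and nudge it one point toward what they think the total should be):

```python
import random
import statistics

def settle(targets, rounds=100_000, seed=0):
    """Equilibrium karma when each arriving voter nudges toward their target."""
    rng = random.Random(seed)
    karma = 0
    for _ in range(rounds):
        target = rng.choice(targets)
        karma += (target > karma) - (target < karma)  # +1, -1, or 0
    return karma

targets = [2, 3, 4, 5, 50]         # one voter thinks the post is great
print(settle(targets))              # hovers at 4...
print(statistics.median(targets))   # ...the median,
print(statistics.mean(targets))     # not the mean (12.8)
```

The outlier’s enthusiasm moves the result no more than a mild upvote would, which is exactly the magnitude information described above as being lost.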
You are very much in the minority if you want to abolish norms in general.
There’s a parallel here with the fifth amendment’s protection from self incrimination making it harder to enforce laws and laws being good on average. This isn’t paradoxical because the fifth amendment doesn’t make it equally difficult to enforce all laws. Actions that harm other people tend to have other ways of leaving evidence that can be used to convict. If you murder someone, the body is proof that someone has been harmed and the DNA in your van points towards you being the culprit. If you steal someone’s bike, you don’t have to confess in order to be caught with the stolen bike. On the other hand, things that stay in the privacy of your own home with consenting adults are *much* harder to acquire evidence for if you aren’t allowed to force people to testify against themselves. They’re also much less likely to be things that actually need to be sought out and punished.

If it were the case that one coherent agent were picking all the rules with good intent, then it wouldn’t make sense to create rules that make enforcement of other rules harder. There isn’t one coherent agent picking all the rules and intent isn’t always good, so it’s important to fight for meta rules that make it selectively hard to enforce any bad rules that get through.

You can try to argue that preventing blackmail isn’t selective *enough* (or that it selects in the wrong direction), but you can’t just equate blackmail with “norm enforcement [applied evenly across the board]”.
I actually don’t think this is a problem for the use case I have in mind. I’m not trying to solve the comparison problem. This work formalizes: “given a utility weighting, what is defection?”. I don’t make any claim as to what is “fair” / where that weighting should come from. I suppose in the EGTA example, you’d want to make sure eg reward functions are identical.
This strikes me as a particularly large limitation. If you don’t have any way of creating meaningful weightings of utility between agents then you can’t get anything meaningful out. If you’re allowed to play with that free parameter then you can simply say “I’m not a utility monster, this genuinely impacts me more than you [because I said so!]” and your actual outcomes aren’t constrained at all.
Defection doesn’t always have to do with the Pareto frontier—look at PD, for example. (C,C), (C,D), (D,C) are usually all Pareto optimal.
That’s why I talk about “in the larger game” and use scare quotes on “defection”. I think the word has too many different connotations and needs to be unpacked a bit.
The dictionary definition, for example, is:
A lack; a failure; especially, failure in the performance of duty or obligation.
n. The act of abandoning a person or a cause to which one is bound by allegiance or duty, or to which one has attached himself; a falling away; apostasy; backsliding.
This all fits what I was talking about, and the fact that the options in prisoners dilemma are traditionally labeled “Cooperate” and “Defect” doesn’t mean they fit the definition. It smuggles in these connotations when they do not necessarily apply.
The idea of using tit for tat to encourage cooperation requires determining what one’s “duty” is and what “failing” this duty is, and “doesn’t maximize total utility” does not actually work as a definition for this purpose because you still have to figure out how to do that scaling.
Using the Pareto frontier allows you to distinguish between cooperative and non-cooperative behavior without having to make assumptions/claims about whose preferences are more “valid”. This is really important for any real world application, because you don’t actually get those scalings on a silver platter, and therefore need a way to distinguish between “cooperative” and “selfishly destructive” behavior as separate from “trying to claim a higher weight to one’s own utility”.
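For concreteness, here’s the parent’s observation checked against the textbook PD payoffs (a sketch; the payoff numbers are the standard ones, not anything specified in this thread):

```python
# Standard prisoner's dilemma payoffs: (row player, column player).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def dominates(x, y):
    """x Pareto-dominates y: at least as good for both players, better for one."""
    return all(a >= b for a, b in zip(x, y)) and x != y

pareto = [moves for moves, p in payoffs.items()
          if not any(dominates(q, p) for q in payoffs.values())]
print(pareto)  # [('C', 'C'), ('C', 'D'), ('D', 'C')] -- only (D, D) is dominated
```

So the frontier doesn’t pick out a unique “cooperative” point; it rules out (D, D) and leaves open *where* on the frontier to trade, which is the bargaining question.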
As others have mentioned, there’s an interpersonal utility comparison problem. In general, it is hard to determine how to weight utility between people. If I want to trade with you but you’re not home, I can leave some amount of potatoes for you and take some amount of your milk. At what ratio of potatoes to milk am I “cooperating” with you, and at what level am I a thieving defector? If there’s a market down the street that allows us to trade things for money then it’s easy to do these comparisons and do Coasian payments as necessary to coordinate on maximizing the size of the pie. If we’re on a deserted island together it’s harder. Trying to drive a hard bargain and ask for more milk for my potatoes is a qualitatively different thing when there’s no agreed upon metric you can use to say that I’m trying to “take more than I give”.
Here is an interesting and hilarious experiment about how people play an iterated asymmetric prisoner’s dilemma. The reason it wasn’t more pure cooperation is that due to the asymmetry there was a disagreement between the players about what was “fair”. AA thought JW should let him hit “D” some fraction of the time to equalize the payouts, and JW thought that “C/C” was the right answer to coordinate towards. If you read their comments, it’s clear that AA thinks he’s cooperating in the larger game, and that his “D”s aren’t anti-social at all. He’s just trying to get a “fair” price for his potatoes, and he’s mistaken about what that is. JW, on the other hand, is explicitly trying to use his Ds to coax AA into cooperation. This conflict is better understood as a disagreement over where on the Pareto frontier (“at which price”) to trade than it is about whether it’s better to cooperate with each other or defect.

In real life problems, it’s usually not so obvious which options are properly thought of as “C” or “D”, and when trying to play “tit for tat with forgiveness” we have to be able to figure out what actually counts as a tit to tat. To do so, we need to look at the extent to which the person is trying to cooperate vs trying to get away with shirking their duty to cooperate. In this case, AA was trying to cooperate, and so if JW could have talked to him and explained why C/C was the right cooperative solution, he might have been able to save the lossy Ds. If AA had just said “I think I can get away with stealing more value by hitting D while he cooperates”, no amount of explaining what the right concept of cooperation looks like will fix that, so defecting as punishment is needed.

In general, the way to determine whether someone is “trying to cooperate” vs “trying to defect” is to look at how they see the payoff matrix, and figure out whether they’re putting in effort to stay on the Pareto frontier or to go below it. If their choice shows that they are being diligent to give you as much as possible without giving up more themselves, then they may be trying to drive a hard bargain, but at least you can tell that they’re trying to bargain. If their chosen move is conspicuously below (their perception of) the Pareto frontier, then you can know that they’re either not-even-trying, or they’re trying to make it clear that they’re willing to harm themselves in order to harm you too. In games like real life versions of “stag hunt”, you don’t want to punish people for not going stag hunting when it’s obvious that no one else is going either and they’re the one expending effort to rally people to coordinate in the first place. But when someone would have been capable of nearly assuring cooperation if they did their part and took an acceptable risk when it looked like it was going to work, then it makes sense to describe them as “defecting” when they’re the one that doesn’t show up to hunt the stag because they’re off chasing rabbits.

“Deliberately sub-Pareto move” I think is a pretty good description of the kind of “defection” that means you’re being tatted, and “negligently sub-Pareto” is a good description of the kind of tit to tat.
To the extent that the underlying structure doesn’t matter and can’t be used, I agree that technically non-random “noise” behaves similarly and that this can be a reasonable use of the term. My objection to the term “noise” as a description of conversational landmines isn’t just that they’re “technically not completely random”, but that the information content is actually important and relevant. In other words, it’s not noise, it’s signal.
The “landmines” are part of how their values are actually encoded. It’s part of the belief structure you’re looking to interact with in the first place. They’re just little pockets of care which haven’t yet been integrated in a smooth and stable way with everything else. Or to continue the metaphor, it’s not “scary dangerous explosives to try to avoid”, it’s “inherently interesting stores of unstable potential energy which can be mined for energetic fuel”. If someone is touchy around the subject you want to talk about, that is the interesting thing itself. What is in here that they haven’t even finished explaining to themselves, and why is it so important to them that they can’t even contain themselves if you try to blow past it?
It doesn’t even require a slow and cautious approach if you shift your focus appropriately. I’ve had good results starting a conversation with a complete stranger who was clearly insecure about her looks by telling her that she should make sure her makeup doesn’t come off, because she’s probably ugly if she’s that concerned about it. Not only did she not explode at me, she decided to throw the fuse away and give me a high bandwidth and low noise channel to share my perspective on her little dilemma, and then took my advice and did the thing her insecurity had been stopping her from doing.
The point is that you only run into problems with landmines as noise if you mistake landmines for noise. If your response to the potential of landmines is “Gah! Why does that unimportant noise have to get in the way of what I want to do!? I wonder if I can get away with ignoring them and marching straight ahead”, then yeah, you’ll probably get blowed up if you don’t hold back. On the other hand, if your response is closer to “Ooh! Interesting landmine you got here! What happens if I poke it? Does it go off, or does the ensuing self reflection cause it to just dissolve away?”, then you get to have engaging and worthwhile high bandwidth low noise conversations immediately, and you will more quickly get what you came for.
I think it’s worth making a distinction between “noise” and “low bandwidth channel”. Your first examples of “a literal noisy room” or “people getting distracted by shiny objects passing by” fit the idea of “noise” well. Your last two examples of “inferential distance” and “land mines” don’t, IMO.
“Noise” is when the useful information is getting crowded out by random information in the channel, but land mines aren’t random. If you tell someone their idea is stupid and then you can’t continue telling them why because they’re flipping out at you, that’s not a random occurrence. Even if such things aren’t trivially predictable in more subtle cases, it’s still a predictable possibility and you can generally feel out when such things are safe to say or when you must tread a bit more carefully.
The “trying to squeeze my ideas through a straw” metaphor seems much more fitting than “struggling to pick the signal out of the noise floor” metaphor, and I would focus instead on deliberately broadening the straw until you can just chuck whatever’s on your mind down that hallway without having to focus any of your attention on the limitations of the channel.
There’s a lot to say on this topic, but I think one of the more important bits is that you can often get the same sense of “low noise conversation” if you pivot from focusing on ideas which are too big for the straw to focusing on the straw itself, and how its limitations might be relaxed. This means giving up on trying to communicate the object level thing for a moment, but it wasn’t going to fit anyway so you just focus on what is impeding communication and work to efficiently communicate about *that*. This is essentially “forging relationships” so that you have the ability to communicate usefully in the future. Sometimes this can be time consuming, but sometimes knowing how to carry oneself with the right aura of respectability and emotional safety does wonders for the “inferential distance” and “conversational landmines” issues right off the bat.
When the problem is inferential distance, the question comes down to what extent it makes sense to trust someone to have something worth listening to over several inferences. If our reasonings differ several layers deep then offering superficial arguments and counterarguments is a waste of time because we both know that we can both do that without even being right. When we can recognize that our conversation partner might actually be right about even some background assumptions that we disagree on, then all of a sudden the idea of listening to them describe their world view and looking for ways that it could be true becomes a lot more compelling. Similarly, when you can credibly convey that you’ve thought things through and are likely to have something worth listening to, they will find themselves much more interested in listening to you intently with an expectation of learning something.
When the problem is “land mines”, the question becomes whether the topic is one where there’s too much sensitivity to allow for nonviolent communication and whether supercritical escalation to “violent” threats (in the NonViolent Communication sense) will necessarily displace invitations to cooperate. Some of the important questions here are “Am I okay enough to stay open and not lash out when they are violent at me?” and the same thing reflected towards the person you’re talking to. When you can realize “No, if they snap at me I’m not going to have an easy time absorbing that” you can know to pivot to something else (perhaps building the strength necessary for dealing with such things), but when you can notice that you can brush it off and respond only to the “invitation to cooperate” bit, then you have a great way of demonstrating for them that these things are actually safe to talk about because you’re not trying to hurt them, and it’s even safe to lash out unnecessarily before they recognize that it’s safe. Similarly, if you can sincerely and without hint of condescension ask the person whether they’re okay or whether they’d like you to back off a bit, often that space can be enough for them to decide “Actually, yeah. I can play this way. Now that I think about it, it’s clear that you’re not out to get me”.
There’s a lot more to be said about how to do these things exactly and how to balance between pushing on the straw to grow and relaxing so that it can rebuild, but the first point is that it can be done intentionally and systematically, and that doing so can save you from the frustration of inefficient communication and replace it with efficient communication on the topic of how to communicate efficiently over a wider channel that is more useful for everything you might want to communicate.
In general, if you’re careful to avoid giving unsolicited opinions you can avoid most of these problems even with rigid ideologues. You wouldn’t inform a random stranger that they’re ugly just because it’s true, and if you find yourself expressing or wishing to express ideas which people don’t want to hear from you, it’s worth reflecting on why that is and what you are looking to get out of saying it.
I think I get the general idea of the thing you and Vaniver are gesturing at, but not what you’re trying to say about it in particular. I think I’m less concerned though, because I don’t see inter agent value differences and the resulting conflict as some fundamental inextricable part of the system.
Perhaps it makes sense to talk about the individual level first. I saw a comment recently where the person making it was sorta mocking the idea of psychological “defense mechanisms”, because “*obviously* evolution wouldn’t select for those who ‘defend’ from threats by sticking their heads in the sand!”—as if the problem of wireheading were as simple as competition between a “gene for wireheading” and a gene against. Evolution is going to select for genes that make people flinch away from injuring themselves with hot stoves. It’s also going to select for people who cauterize their wounds when necessary to keep from bleeding out. Designing an organism that does *both* is not trivial. If sensitivity to pain is too low, you get careless burns. If it’s too high, you get refusal to cauterize. You need *some* mechanism to distinguish between effective flinches and harmful flinches, and a way to enact mostly the former. “Defense mechanisms” arise not out of mysterious propagation of fitness reducing genes, but rather the lack of solution to the hard problem of separating the effective flinches from the ineffective—and sometimes even the easiest solution to these ineffective flinches is hacked together out of more flinches, such as screaming and biting down on a stick when having a wound cauterized, or choosing to take pain killers.
The solution of “simply noticing that the pain from cauterizing a serious bleed isn’t a *bad* thing and therefore not flinching from it” isn’t trivial. It’s *doable*, and to be aspired to, but there’s no such thing as “a gene for wise decisions” that is already “hard coded in DNA”.
Similarly, society is incoherent and fragmented and flinches and cooperates imperfectly. You get petty criminals and cronyism and censorship of thought and expression, and all sorts of terrible stuff. This isn’t proof of some sort of “selection for shittiness” any more than it is to notice individual incoherence and the resulting dysfunction. It’s not that coherence is impossible or undesirable, just that you’re fighting entropy to get there, and succeeding takes work.
The desire to eat marshmallows succeeds more if it can cooperate and willingly lose for five minutes until the second marshmallow comes. The individual succeeds more if they are capable of giving back to others as a means to foster cooperation. Sometimes the system is so dysfunctional that saying “no thanks, I can wait” will get you taken advantage of, and so the individually winning thing is impulsive selfishness. Even then, the guy failing to follow through on promises of second marshmallows likely isn’t winning by disincentivizing cooperation with him, and it’s likely more of a “his desire to not feel pain is winning, so he bleeds” sort of situation. Sometimes the system really is so dysfunctional that not only is it winning to take the first marshmallow, it’s also winning to renege on your promises to give the second. But for every time someone wins by shrinking the total pie and taking a bigger piece, there’s an allocation of the more cooperative pie that would give this would-be-defector more pie while still having more for everyone else too. And whoever can find these alternatives can get themselves more pie.
I don’t see negative sum conflict between the individual and society as *inevitable*, just difficult to avoid. It’s negotiation that is inevitable, and done poorly it brings lossy conflict. When Vaniver talks about society saying “shut up and be a cog”, I see a couple things happening simultaneously to one degree or another. One is a dysfunctional society hurting themselves by wasting individual potential that they could be profiting from, and would love to if only they could see how and implement it. The other is a society functioning more or less as intended and using “shut up and be a cog” as a shit test to filter out the leaders who don’t have what it takes to say “nah, I think I’ll trust myself and win more”, and lead effectively. Just like the burning pain, it’s there for a reason and how to calibrate it so that it gets overridden at only and all the right times is a bit of an empirical balancing act. It’s not perfect as is, but neither is it without function. The incentive for everyone to improve this balancing is still there, and selection on the big scale is for coherence.
And as a result, I don’t really feel myself pulled into a conflict between “respect society’s stupid beliefs/rules” and “care about other people”. I see people as a combination of *wanting* me to pass their shit tests and show them a better replacement for their stupid beliefs/rules, being afraid and unsure of what to do if I succeed, and selfishly trying to shrink the size of the pie so that they can keep what they think will be the bigger piece. As a result, it makes me want to rise to the occasion and help people face new and more accurate beliefs, and also to create common knowledge of defection when it happens and rub their noses in it to make it clear that those who work to make the pie smaller will get less pie. Sometimes it’s more rewarding and higher leverage to run off and gain some momentum by creating and then expanding a small bubble where things actually *work*, but there’s no reason to go from “I can’t yet be effective in the broader community because I can’t yet break out of their ‘cog’ mold for me, so I’m going to focus on the smaller community where I can” to “fuck them all”. There’s still plenty of value in reengaging when capable, and pretending there isn’t is not the good, functional thing we’re striving for. It’s not like we can *actually* form a bubble and reject the outside world, because the outside world will still bring you pandemics and AI, and from even a selfish perspective there’s plenty of incentive to help things go well for everyone.
Whereas, if things are too forsaken, one loses the ability to communicate about the lion at all. There is no combination of sounds one can make that makes people think there is an actual lion across an actual river that will actually eat them if they cross the river.
Hm. This sounds like a challenge.
How about this:
Those “popular kids” who keep talking about fictitious “lions” on the other side of the river are actually losers. They try to pretend that they’re simply “the safe and responsible people” and pat themselves on the back over it, but really they’re just a bunch of cowards who wouldn’t know what to do if there were a lion, and so they can’t even look across the river and will just shame you for being “reckless” if you doubt the existence of lions that they “just know” are there. I hate having to say something that could lump me with these deplorable fools, and never before has there actually been a lion on the other side of the river, but this time there is. This time it’s real, and I’m not saying we can’t cross if need be, but if we’re going to cross we need to be armed and prepared.
I can see a couple potential failure modes. One is if “Those guys are just crying wolf, but I am legit saving you [and therefore am cool in the way they pretend they are]” itself becomes a cool kid thing to say. The other is that if your audience is motivated to see you as “one of them” to the point of being willing to ignore the evidence in front of them, they will do so despite you having credibly signaled that this is not true. Translating to actual issues I can think of, I think it would mostly actually work though.
It becomes harder if you think those guys are actually cool, but that shouldn’t really be a problem in practice. Either a) there actually has been a lion every single time it is claimed, in which case it’s kinda hard for “there’s a lion!” to indicate group membership because it’s simply true. Or b) they’ve actually been wrong, in which case you have something to distance yourself from.
If the truth is contentious, and even though there has always been a lion they’ve never believed you, then you have a bigger problem than simply having your assertions mistaken for group membership slogans; you simply aren’t trusted to be right. I’d still say there are things that can be done there, but it does become a different issue.
I described what happened to the other post here.
Thanks, I hadn’t seen the edit.
I’m having the same dilemma right now where my genuine comments are getting voted into the negative, and I’m starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up people’s time with content they think is low quality (yes yes, I know that that doesn’t mean it is low quality per se, but it is a close enough heuristic that I’m mostly willing to stick to it). But the downvotes are very clear, so while I’m disappointed that we couldn’t talk through this issue, I will no longer be eating up people’s time.
The only comments of yours that I see downvoted into the negative are the two prior conversations in this thread. Were there others that are now positive again?
While I generally support the idea that it’s better to stop posting than to continue to post things which will predictably be net-negative in karma, I don’t think that’s necessary here. There’s plenty of room on LW for things other than curated posts sharing novel insights, and I think working through one’s own curiosity can be good not just for the individual in question, but for any other lurkers who might have the same curiosities and for the community, as bringing people up to speed is an important part of helping them learn to interact best with the community.
I think the downvotes are about something else, which is a lot more easily fixable. While I’m sure they were genuine, some of your comments strike me as not particularly charitable. In order to hold a productive conversation, people have to be able to build from a common understanding. The more work you put in to understanding where the other person is coming from and how it can be a coherent and reasonable stance to hold, the less effort it takes for them to communicate something that is understood. At some point, if you don’t put enough effort in, you start to miss valid points which would have been easy for you to find, but which would be prohibitively difficult for them to word in a way that you wouldn’t miss.
As an example, you responded to Richard_Kenneway as if he thought you were lying, despite the fact that he explicitly stated that he was not imputing any dishonesty. I’m not sure whether you simply missed that part or whether you don’t believe him, but either way it is very hard to have a conversation with someone who doesn’t engage with points like this at least enough to say why they aren’t convinced. I think, with a little more effort put into understanding how your interlocutors might be making reasonable, charitable, and valid points, you will be able to avoid the downvotes in the future. That’s not to say that you have to believe that they’re being reasonable/charitable/etc, or that you have to act like you do, but it’s nice to at least put in some real effort to check and give them a chance to show when they are. Because the tendency for people to err on the side of “insufficiently charitable” is really really strong, and even when the uncharitable view is the correct one (not that common on LW), the best way to show it is often to be charitable and have it visibly not fit.
It’s a very common problem that comes up in conversation, especially when pushing into new territory. I wouldn’t sweat it.
I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I’m working around enough of it for this to still be useful.]
This is a really cool declaration. It doesn’t bleed through in any obvious way, but thanks for letting me know, and I’ll try to be cautious of what I say and how I say it. Lemme know if I’m bumping into anything or if there’s anything I could be doing differently to better accommodate.
I think you’re interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.
I’m not really sure what you mean here, but I can address what you say below. I’m not sure if it’s related?
“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms?
Depends on how you go about it and what type of risk you’re trying to avoid. When I first started playing with this stuff I taught someone how to “turn off” pain, and in her infinite wisdom she used this new ability to make it easier to be stubborn and run on a sprained ankle. There’s no foolproof solution to make this never happen (in my infinite wisdom I’ve done similar things even with the pain), but the way I go about it now is explicitly mindful of the risks and uses that to get more reliable results. With the swelling, for example, part of my indignant reaction was “it doesn’t have to swell up, I just won’t move it”.
When you’ve seen something happen with your own eyes multiple times, I think that’s beyond the level where you should be foolish for thinking that it might be possible. When you see that the thing that is stopping other people from doing it too is ignorance of the possibility rather than an objection that it shouldn’t be done, then “thinking it through and making your reasoned best guess” isn’t going to be right all the time, but according to your own best guess it will be right more often than the alternative.
Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.
It seems that this bit is your main concern?
It can be a real concern. More than once I’ve had people express concern about how it has become harder to relate with their old friends after spending a lot of time with me. It’s not because of stuff like “I can consciously prevent a lot of swelling, and they don’t know how to engage with that” but rather stuff like “it’s hard to be supportive of what I now see as clearly bad behavior that attempts to shirk reality to protect feelings and inevitably ends up hurting everyone involved”. In my experience, it’s a consequence of being able to see the problems in the group before being able to see what to do about it.
I don’t seem to have that problem anymore, and I think it’s because of the thought that I’ve put into figuring out how to actually change how people organize their minds. Saying “here, let me use math and statistics to show you why you’re definitely completely wrong” can work to smash through dumb ideas, but then even when you succeed you’re left with people seeing their old ideas (and therefore the ideas of the rest of their social circle) as “dumb” and hard to relate to. When you say “here, let me empathize and understand where you’re coming from, and then address it by showing how things look to me”, and go out of your way to make their former point of view understandable, then you no longer get this failure mode. On top of that, by showing them how to connect with people who hold very different (and often less well thought out) views than you, it gives them a model to follow that can make connecting with others easier. My friend in the above example, for instance, went from sort of a “socially awkward nerd” type to someone who can turn that off and be really effective when she puts her mind to it. If someone is depressed and not even his siblings can get him to talk, he’ll still talk to her.
If there’s a group of people you want to be able to relate to effectively, you can’t just dissociate off into your own little world where you give no thought to their perspectives, but neither can you just melt in and let your own perspective become the social consensus. If you don’t retain enough separation to at least have your own thoughts, think about whether they might be better, and work out how best to merge them with the group, then you’re just shirking your leadership responsibilities, and if enough people do this the whole group can become detached from reality and led by whomever wants to command the mob. This doesn’t tend to lead to great things.
Does that address what you’re saying?