Everyone has a plan until they get lied to the face
“Everyone has a plan until they get punched in the face.”
- Mike Tyson
(The exact phrasing of that quote changes, this is my favourite.)
I think there is an open, important weakness in many people. We assume those we communicate with are basically trustworthy. Further, I think there is an important flaw in the current rationality community. We spend a lot of time focusing on subtle epistemic mistakes, teasing apart flaws in methodology and practicing the principle of charity. This creates a vulnerability to someone willing to just say outright false things. We’re kinda slow about reacting to that.
Suggested reading: Might People on the Internet Sometimes Lie, People Will Sometimes Just Lie About You. Epistemic status: My Best Guess.
I.
Getting punched in the face is an odd experience. I’m not sure I recommend it, but people have done weirder things in the name of experiencing novel psychological states. If it happens in a somewhat safety-negligent sparring ring, or if you and a buddy go out in the back yard tomorrow night to try it, I expect the punch gets pulled and it’s still weird. There’s a jerk of motion your eyes try to catch up with, a sort of proprioceptive sliding effect as your brain wonders who moved your head if it wasn’t you.
If it happens by surprise out on the sidewalk and the punch had real strength behind it, so much the worse. The world changes colour, it feels like time stops being a steady linear progression and becomes unbelievably detailed beads on a string at irregular moments, internal narration becomes dissociated and tries to come up with an explanation for why your body is moving the way it is, whether staring up at the sky with a gormless expression on your face or shaking and shoving your hands forward.
And two seconds before the hit, you were thinking “He’s not actually going to hit me.”
Anyway, your emotional reaction might surprise you. Getting hit shakes you up a bit. Other people often don’t react intelligently either. “Are you okay?” Yeah, right as rain, I’m just holding my hand to my bleeding nose because I think I look good doing it. “I can’t believe he did that!” Do you not believe your eyes? “What just happened?” I just got punched in the face, that’s what happened, what’s up with you?
Punching people in the face is generally a bad idea. Not only is it likely to get you in trouble, but the face is one big crumple zone. Jaw, teeth, cheekbones, those are all hard and pointy. Your hand bones are fragile. If you manage to put power behind the hit, they are going to have a very visible injury, which can put the observers on their side. And yet people still hit each other in the face. Sometimes that’s part of what stuns you. “Surely,” you think, “nobody would have just told a lie that obvious. Something else must have happened.” You can become so confused that you ask out loud “what just happened?” and a bystander has to say “you just got punched in the face, that’s what happened.”
Getting lied to is an odd experience. It’s not the same experience to be sure, but I noticed enough things in common between the two that I kept drawing useful comparisons.
II.
I’ve noticed this flaw in my own mind where I’m either skeptical of everybody, or basically trust everybody. If I’m skeptical of everybody I tend to say more false things, and if I basically trust everybody then I’m a lot more open and honest. Importantly, once I’m skeptical of everybody I start checking more and more, trying to verify things I once took on faith or contemplating exactly what I think I know and how I think I know that.
This flaw is not unlike the long, slow march I made out of having hair-trigger reflexes around getting hit. For reasons I’m not going to go into at the moment, when I arrived at college I was pretty quick to pattern match fast or unexpected movements as incoming strikes. I didn’t actually wind up misfiring and hurting anyone, but there were a few close calls where a friend or partner did something startling and I started to react before having to abort.
In college I contemplated working in information security, and ultimately decided not to. I didn’t like the competitive nature of it, and suspected the kind of thing in this tweet would not be good for my head.
(I edited the screenshot of this Twitter thread. Did you notice?)
One archetypical example I can think of where someone points out someone else might be lying is Concerns About Intentional Insights. It’s very careful, thorough, and organized. For my money though, Alison Gu’s fraud defenses are more illuminating. Mrs. Gu was accused of bank fraud, identity theft, and lying on a passport application. Here are some excerpts from the Manchester Journal:
Gu, 49, now from Cheshire, Conn., is especially unhappy with defense lawyer Lisa Shelkrot, who refused to use the seven Chinese-speaking actors that the defendant recruited and were provided scripts about what to say during the jury trial.
…
Shelkrot has said she determined the bogus witnesses were actors after they arrived in Burlington midway through the trial in November 2017. Shelkrot, in a court affidavit, now says as she prepared three of them to testify at trial, one happened to remark, “It’s for the movie.”
Shelkrot said she asked about what movie. The witness responded, “Are you a real lawyer?”
Shelkrot wrote, “I answered that I was a real lawyer with a real case and a real client, and that we were going to real court on Monday, where the witness would be expected to take a real oath.”
What level of CONSTANT VIGILANCE do you have to be operating on where when a court case witness comes in, you check that the witness thinks they’re here for a real court case instead of a movie?
If I were a lawyer or judge, and I had to deal with people pulling that level of epistemic sabotage on me, I would become paranoid. After a couple months of that when I walked into a store and the cashier said “Welcome to Burger King” I would start reflexively checking nearby signs to make sure it wasn’t actually a Taco Bell, or maybe a Goodyear tire shop. I would start prodding my burger with a toothpick to make sure it didn’t secretly have live beetles in it. I would be so suspicious.
(I uh. Did wind up in a role where people keep trying shenanigans on me, and while the above is a bit of an exaggeration for humorous effect the experience of the last couple years has made me a less trusting human being.)
III.
In The Dark Arts Are A Scaffolding Skill For Rationality, I talk about how manipulation and lies aren’t skills we want a polished rationalist to have, but that they might be skills useful for training rationalists.
The minimum viable version of that training might be poker. Not because ordinary poker lets you practice some basic deceit and bluffing; this would be a special version of poker. In advance of the poker game, the person bringing the cards would get one of those explanation cards with all the poker hands, only custom printed to be wrong. The explanation card would insist that you can also make a straight out of every other card, so for instance 5 7 9 J K can be a straight. They would have a pair of audience plants who would agree that’s always been how poker works. They would have rigged the router for the local wifi to present a man-in-the-middle attack on the Wikipedia page for poker.
As the magician Teller of Penn & Teller once said, “sometimes magic is just someone spending more time on something than anyone else might reasonably expect.”
Just so with deception, only there’s surprisingly low-hanging fruit in lies among rationalists.
I’ve run a few meetups where I wore a sign that said “Might Be Lying” on it in big letters. It took several tries before a group called me on the subterfuge I got up to during those meetups. I wasn’t even doing anything particularly weird or high effort! On the first occasion I spent fifteen bucks on a deck of marked cards and lied twice about the difference between a chi squared test and a t-test.
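(For the record, the real difference is easy to demonstrate. Here’s a minimal sketch, entirely my own illustration with made-up numbers, using scipy: a chi-squared test checks categorical counts against what you’d expect, while a t-test compares the means of continuous measurements.)

```python
# A minimal sketch (made-up numbers, just for illustration) of the actual
# difference: a chi-squared test compares observed categorical counts against
# expected counts; a t-test compares the means of continuous measurements.
from scipy import stats

# Chi-squared: did five card symbols come up equally often across 50 draws?
observed_counts = [14, 9, 11, 8, 8]  # one count per symbol, 50 draws total
chi2, p_chi2 = stats.chisquare(observed_counts)  # expected defaults to uniform
print(f"chi-squared: statistic={chi2:.2f}, p={p_chi2:.3f}")

# t-test: do two groups of continuous scores differ in their means?
group_a = [3.1, 2.8, 3.5, 3.0, 2.9]
group_b = [3.6, 3.4, 3.9, 3.2, 3.8]
t, p_t = stats.ttest_ind(group_a, group_b)
print(f"t-test: statistic={t:.2f}, p={p_t:.3f}")
```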
(I maintain that when I’m not wearing a Might Be Lying sign I’m at least as honest as, if not more honest than, the average person, though probably not the high watermark for honesty and forthrightness among rationalists.)
If you have never gotten straight up lied to—not a misunderstanding, not a subtle Non-Violent Communication use of words, not someone being mistaken, an obvious incorrect thing that there’s no way someone doesn’t know, yes I know this kind of description gets used as merely an intensifier an awful lot—you may not know the slippery feeling it can give and how it throws your previous plans out the window.
Best get that emotional reaction out in a controlled environment if you can. One of the best things about my favourite martial arts instructor was that he once took the time to listen to my disjointed rants about getting hit, and helped ground me out of that headspace.
Everyone has a plan until they get lied to the face.
IV.
Enough complaining. What should your plan be?
In On Stance, I talk about mental stances. There’s a useful reflex in physical activities where, when you’re confused or startled, you drop into a practiced stance. My current best mental stance when I realize I think I just got lied to is to start paying close attention to what I’ve directly observed, and what I’ve been told, and by whom. My thoughts and notes start using evidentials, a grammatical feature present in some other languages but not in English which tracks how you came to hold some piece of information.[1]
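To gesture at what that looks like in practice, here’s a toy sketch (my own illustration, not a real system; all names and categories here are made up) of notes where every claim carries its evidential:

```python
# Toy sketch of evidential-tagged notes: each claim records how I came to
# hold it, so if one source turns out to be lying, everything that traces
# back to them can be flagged at once. All names here are invented.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Evidential(Enum):
    DIRECT = "I observed this myself"
    REPORTED = "someone told me directly"
    SECONDHAND = "someone passed along someone else's claim"
    INFERRED = "I deduced it from other claims"

@dataclass
class Claim:
    text: str
    evidential: Evidential
    source: Optional[str] = None  # who told me, if anyone

notes = [
    Claim("The money left my account", Evidential.DIRECT),
    Claim("Bob approved the transfer", Evidential.REPORTED, source="Alice"),
    Claim("Bob is on vacation", Evidential.SECONDHAND, source="Alice"),
]

# If Alice turns out to be lying, every claim sourced to her is now suspect:
suspect = [c.text for c in notes if c.source == "Alice"]
print(suspect)  # ['Bob approved the transfer', 'Bob is on vacation']
```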
(Just as in physical martial arts, false positive errors can be a problem just as false negative errors can be a problem, if a different kind of problem.)
First, slow down big decisions. If you were about to transfer money, or make a public statement, don’t.
Try to de-escalate. Offer a few ways you might have misunderstood, and see if they go for one of them. There are some paths they might take that recover from an honest mistake. Yep, you’re giving them the opportunity to lie to you again, but this time you’re braced for it and don’t need to put trust in that statement.
When you can, run back through what you think you know about the situation, and how you know what you think you know. When you’re counting other people’s corroborating claims, think about whether those claims are direct or secondhand, and whether they might all trace back to the same source. Try to untangle what the world looks like if they’re telling the truth and what the world looks like if they aren’t, including why they might be saying the false thing.
Then move forward. If it turns out they said a false thing, make your new moves in the world where you’re not going to be able to trust what they say. Sometimes it’s going to make sense to recover that relationship. Sometimes it’s not. Try to react proportionately where you can.
I don’t think it makes sense to have too much of a plan though, especially the first couple times it happens.
Everyone has a plan until they get lied to the face. It’s about knowing you’re going to be confused and hurting, and having good habits that will kick in while everything is spinning up again. And I think it might help to say out loud that people can act weird for a bit after getting hit or lied to, a bit disoriented or oddly obsessed with some bit of sense data. You have to get your head back together, and get back to what needs to happen next.
My ideal rationalist community members have this as a practiced skill. They’ve been lied to, and they’re not caught so flat-footed.
[1] It owes some of its origin to ymeskhout’s Miasma Clearing Protocol.
One of the reasons it’s hard to take the possibility of blatant lies into account is that it would just be so very inconvenient, and also boring.
If someone’s statements are connected to reality, that gives you something to do:
You can analyze them, critique them, use them to infer the person’s underlying models and critique those, identify points of agreement and controversy, identify flaws in their thinking, make predictions and projections about future actions based on them, et cetera. All those activities we love a lot, they’re fun and feel useful.
It also gives you the opportunity to engage, to socialize with the person by arguing with them, or with others by discussing their words (e. g., if it’s a high-profile public person). You can show off your attention to detail and analytical prowess, build reputation and status.
On the other hand, if you assume that someone is lying (in a competent way where you can’t easily identify what the lies are), that gives you… pretty much nothing to do. You’re treating their words as containing ~zero information, so you (1) can’t use them as an excuse to run some fun analyses/projections, (2) can’t use them as an opportunity to socialize and show off. All you can do is stand in the corner repeating the same thing, “this person is probably lying, do not believe them”, over and over again, while others get to have fun. It’s terrible.
Concrete example: Sam Altman. The guy would go on an interview or post some take on Twitter, and people would start breaking what he said down, pointing out what he gets right/wrong, discussing his character and vision and the X-risk implications, etc. And I would listen to the interview and read the analyses, and my main takeaway would be, “man, 90% of this is probably just lies completely decoupled from the underlying reality”. And then I have nothing to do.
Importantly: this potentially creates a community bias in favor of naivete (at least towards public figures). People who believe that Alice is a liar mostly ignore Alice,[1] so all analyses of Alice’s words mostly come from people who put some stock in them. This creates a selection effect where the vast majority of Alice-related discussion is by people who don’t dismiss her words out of hand, which makes it seem as though the community thinks Alice is trustworthy. That (1) skews your model of the community, and (2) may be taken as evidence that Alice is trustworthy by uninformed community members, who would then start trusting her and discussing her words, creating a feedback loop.
Edit: Hm, come to think of it, this point generalizes. Suppose we have two models of some phenomenon, A and B. Under A, the world frequently generates prompts for intelligent discussion, whereas discussion-prompts for B are much sparser. This would create an apparent community bias in favor of A: A-proponents would be generating most of the discussions, raising A’s visibility, and also get more opportunities for raising their own visibility and reputation. Note that this is completely decoupled from whether the aggregate evidence is in favor of A or B; the volume of information generated about A artificially raises its profile.
Example: disagreements regarding whether studying LLMs bears on the question of ASI alignment or not. People who pay attention to the results in that sphere get to have tons of intelligent discussions about an ever-growing pile of experiments and techniques. People who think LLM alignment is irrelevant mostly stay quiet, or retread the same few points they have for the hundredth time.
[1] What else are they supposed to do? Their only message is “Alice is a liar”, and butting into conversations just to repeat this well-discussed, conversation-killing point wouldn’t feel particularly productive and would quickly start annoying people.
Feeling called out by this relatable content.
Is this true? If I start talking about how Thane Ruthenis sexually assaulted me, that might be true or it might be false. If it’s true, that tells you something about the world. If it’s false, the statement doesn’t say anything about the world, but the fact that I said it still does.
Like, it would probably mean I don’t like you, or have some interest in having others not like you.
So like, the fact that I make that statement does contain information. It should raise your p(will was SA’d by Thane) and p(will doesn’t like Thane) roughly in proportion to how much you trust me.
Sure, but… I think one important distinction is that lies should not be interpreted as having semantic content. If you think a given text is lying, that means you can’t look at just the text and infer stuff from it, you have to look at the details of the context in which it was generated. And you often don’t have all the details, and the correct inferences often depend on them very precisely, especially in nontrivially complex situations. In those cases, I think lies do contain basically zero information.
For example, take the hypothetical accusation above. It could mean any of:
You dislike me and want to hurt me, as a terminal goal.
I’m your opponent/competitor and you want to lower my reputation, as an instrumental goal.
You want to paint yourself as a victim because it would be advantageous for you in some upcoming situation.
You want to create community drama in order to distract people from something.
You want to erode trust between members of a community, or make the community look bad to outsiders.
You want to raise the profile of a specific type of discourse.
Etc., etc. If someone doesn’t know the details of the game-theoretic context in which the utterance is emitted, there’s very little they can confidently conclude from it. They can infer that we are probably not allies (although maybe we are colluding, who knows), and that there’s some sort of conflict happening, but that’s it. For all intents and purposes, the statement (or its contents) should be ignored; it could mean almost anything.
(I suppose this also depends on Simulacra Levels. SL2 lies and SL4 lies are fairly different beasts, and the above mostly applies to SL4 lies.)
I think this is being a little uncharitable or obtuse, no offense? Unless I’m misunderstanding, which is possible.
Like, the list of stuff you gave is made up of extremely specific things. Like if I said the SA thing, maybe you’d update from p(will wants to cause commotion to distract people from something) = 1e-6 (or whatever the time-adjusted base rate is) to 10%. That’s a huge amount of information. Even if your posterior probability is well below 50%.
The fact that an event has “many plausible explanations” doesn’t mean it contains no information. This seems like a very elementary fact to me.
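As a sketch of that arithmetic in odds form (using the illustrative numbers above):

$$
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\times \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
$$

With $P(H) = 10^{-6}$ and $P(H \mid E) = 0.10$, the prior odds are about $10^{-6}$ and the posterior odds are $0.10/0.90 \approx 0.11$, so the statement carried an implied likelihood ratio of roughly $1.1 \times 10^{5}$, about 17 bits of evidence, even though the posterior stays well below 50%.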
I assume those were not chosen randomly from a large set of possible motivations, but because those options were somehow salient for Thane. So I would guess the priors are higher than 1e-6.
For example, I have high priors on “wants to distract people from something” for politicians, because I have seen it executed successfully a few times. The amateur version is doing it after people notice some bad thing you did, to take attention away from the scandal; the pro version is doing it shortly before you do the bad thing, so that no one even notices it, and if someone does, no one will pay attention because the cool kids are debating something else.
Okay, it was a specific (hypothetical) example where I in particular made the false claim.
What’s your current REAL p(williawa …) for each of:
- currently wants to cause a commotion to distract lesswrong from something
- currently wants to paint himself as a victim for some future gain
- wants to erode trust between people on lesswrong
And how would you update if I started making as credible a case as I could about so and so person SA-ing me? How would you update if you were sure I was lying?
I think if you don’t make an update you’re very clearly just being irrational. And I don’t think you have any reason to update differently, or really have different priors, than Thane. I don’t know either of you I don’t think.
So if you’re updating, Thane should be as well, irrespective of the saliency thing.
Conditional on you not making the claim (or before you make the claim) and generally not doing anything exceptional, all three probabilities seem small… I hesitate to put an exact number on them, but yeah, 1e-6 could be a reasonable value.
Comparing the three options relative to each other, I think there is no reason why someone would want to distract lesswrong from something. Wanting to erode trust seems unlikely but possible. So the greatest probability of these three would go to painting yourself as a victim, because there are people like that out there.
If you made the claim, I would probably add a fourth hypothesis, which would be that you are someone else’s second account; someone who had some kind of conflict with Thane in the past, and that this is some kind of revenge. And of course the fifth hypothesis that the accusation is true. And a sixth hypothesis that the accusation is an exaggeration of something that actually happened.
(The details would depend on the exact accusation and Thane’s reaction. For example, if he confirmed having met you, that would remove the “someone else’s sockpuppet account” option.)
If you made the accusation (without having had this conversation), I would probably put 40% probabilities on “it happened” and “exaggeration”, and 20% on “playing victim”, with the remaining options being relatively negligible, although more likely than if you hadn’t made the claim. The exact numbers would probably depend on my current mood, and the specific words used.
I think this description missed the biggest reason that head-blows are bad. Concussions are long-term cumulative. This includes shakes that are FAR gentler than “comatose” levels of head trauma. When someone gets punched in the head, the world becomes slightly more depressed, slightly more demented, and slightly dumber. Never willingly get punched in the head.
Oof, I had a bad concussion earlier this year, and I’d been feeling like I never returned to my full mental acuity, but hadn’t wanted to believe it, and found reason not to: “if concussions leave permanent aftereffects more often than ‘almost never’, I would have heard of it.” Now I have heard of it, and am forced to revise the belief.
I’d probably grieve more, if this news weren’t hot on the heels of a significant improvement in my mental abilities.
(I’ve long suspected I might have early-stage Alzheimer’s caused by decades of profound insomnia, and some recent research out of Harvard Medical says Lithium Orotate might reverse Alzheimer’s progression. Historically I have had brain fog most days to some degree, with a lot of variability. Since trying Lithium Orotate supplementation, I’ve been consistently at “as mentally sharp as I ever routinely am” every day since. Worrying side effects though: kidney and joint pain, which I have never had before. Going to experiment with smaller doses.)
Thank you for sharing.
“Concussions are long-term cumulative” fits neatly into my emerging mental model that daily life actually abounds with avoidable ways to suffer irreversible-under-current-tech harm, including in very minor amounts or normalized ways, such that people routinely accumulate such permanent damage, and that it’s worth my effort to notice and avoid or reduce. I theorize that, for example, some tiny fraction of the dust you inhale gets lodged in the lungs in such an unfortunate orientation that it never leaves, gradually eroding lung function over a lifetime. Scars ~never go away, and incur ongoing costs. Etc.
I would expect this to fail, in that modern browsers attempt to demand HTTPS versions of known sites when they exist, and faking Wikipedia’s cert would take work. MitM-ing websites isn’t as easy as it used to be.
‘Not as easy as it used to be’ != ‘infeasible for a stage magician’. (Keep in mind they are well-documented to do things like research audience members in advance just to pull off better cold reads. They only need one thing to succeed. How many ways are there to hack a pinball machine?)
And you’re making a lot of assumptions here about the setup, like having a known device (maybe it’s a video) which has already contacted Wikipedia before and has the WP HTTPS cert pinned in-cache, and also the user having gone to the right URL/domain in the first place (hey, you know what’s hard to see on current mobile browsers, because all of the tech giants despise URLs and want to eliminate them and keep you in their walled garden?). I just checked on my phone right now, and if you browse to the English Wikipedia, the default Android Chrome browser both does not show you the https and, as soon as you scroll down even slightly, hides the entire URL. The only way I found to easily see the protocol was to edit the URL! It remains quite easy to phish or spoof or cross the ‘line of death’, and people fall for these things all the time. Or, what if the page has been vandalized (which can take a long time to fix, and could’ve been done by a confederate mere seconds before the audience member checks)? What if it was vandalized and you’re looking at a valid WP mirror which is out of date? What if you’re looking at a specific data-poisoned revision? (Note by the way that almost none of these exploits would count for bug bounties from anyone.)
Spoofing a DNS redirect record with the router, which sends you to a homograph domain with a legitimate certificate, should work.
It will not work. Or rather, if you have a way to make it work, you should collect the bug bounty for a few tens of thousands of dollars, rather than use it for a prank. Browser makers and other tech companies have gone to great lengths to prevent this sort of thing, because it is very important for security that people who go to sites that could have login pages never get redirected to lookalike pages that harvest their passwords.
Ah, that’s what I get for trusting Claude to check my first pass idea, and not poking it more extensively.
Aww, that doesn’t work anymore? Probably good for the world if sad for pranksters. I admit I last pulled some variant of this prank in the late aughts/early 2010s and haven’t tried recently. I got an afternoon of enjoyment out of upsidedownternet.
My next best idea I’m sure I could pull off would be to make my own website that looked like the wikipedia article, pull that up on my phone, and show it to the mark.
Reminds me of 2 hbomberguy videos:
ROBLOX_OOF.mp3: Video game composer Tommy Tallarico repeatedly lies about easily verifiable matters (the content of his Guinness world records, having been featured on MTV Cribs (he wasn’t), &c).[1]
Plagiarism and You(Tube): Multiple people making successful YouTube careers out of plagiarism, while often putting ~no effort into rephrasing (i.e. anyone could find the source(s) by googling the words they were saying).
[1] Feel free to skip to the ‘Tommy’s lies’ section or farther if you just want the brazen lies.
Tangential comment, but one where I’d be interested in how people in this community feel: When you wrote about the part with the meetup and sign saying “I might be lying”, I immediately thought how little fun it must have been, and even how badly it might have felt for others attending. In my mind, people attending a meetup don’t want to be lied to, even if you semi-communicated it (I say “semi” because the statement on the sign was trivially true for every person. You did not clearly state you were definitely going to lie about certain things) and in the context of a “social experiment”. To me it seems quite similar to people wearing signs saying “I might be rude” and then actually being rude.
Entering a conversation with someone who is literally wearing a “Might be Lying” sign seems analogous to joining a social-deception game like Werewolf. Certainly an opt-in activity, but totally fair game and likely entertaining for people who’ve done so.
As a side note, Secret Hitler is actually a tabletop game about convincingly lying and identifying lies, but also about counting probabilities like in poker. If the post author hasn’t tried it, I can recommend it.
I also thought it sounded… really annoying. Something I might have found interesting 10 years ago, but that would now cause me to simply avoid the person. And it might ruin my night by making me feel like a party pooper, a la Thane’s comment above.
Seems a reasonable set of preferences.
Since you’re bringing it up, I do want to clarify it wasn’t a pure social hangout. Some rationalist meetups are people standing around talking about whatever they’re into, some have board games people are playing or readings people are supposed to have read and then talk about, some have specific activities. This was one of the latter, specifically the thumb-on-the-scale variation of Zener Science.
I actually put a lot of thought into signposting for meetups, trying to make sure that people who don’t want to participate in some kind of meetup activity know about it before making the decision to go. E.g. if someone doesn’t want to read Scott Alexander’s writing or talk about it, that’s pretty reasonable, but if that person shows up to an event calling itself an ACX Reading Group where the announcement says you’re supposed to read one of Scott’s essays, it’s not the organizers’ fault.
(The next bit of this is me working through the Zener Science example, because I’m actually not satisfied with how I signpost it.)
The thumb-on-the-scale Zener Science example is not one of my better signposted meetups and I’d like to solve that problem someday! I’d give it a C+. From my perspective, the problem stems from the fact that most people (especially people at rationalist meetups!) don’t believe in psychic powers and aren’t motivated to practice the stats/science skill the event is trying to work on. What happened without thumb-on-the-scale is they show up, guess at two or three cards, and then conclude (because they were already confident of this, despite not having done enough tests to conclude this from the experiment) that psychic powers aren’t real.
So I can’t necessarily signpost this as an earnest investigation into ESP, because I don’t think ESP works, my attendees mostly don’t think ESP works, and unless I prompt it the experiment isn’t going to be good enough to prove anything.[1] I want to signpost it as a place to practice some stats/science skills, because that’s my goal, but that works better if I can get people to dig a little deeper than just looking at a couple cards. To achieve that, I want to make them a little suspicious and work through the stats of how many right answers would indicate something weird was going on. Since I’m not actually psychic, I sometimes cheat the deck somehow, but since I really don’t want people to wind up not able to trust me I want to signpost that I’m doing something unusual.
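The stats exercise itself fits in a few lines. Here’s a minimal sketch (my own illustration, treating each guess as an independent 1-in-5 shot, which glosses over the fact that a real 25-card deck is drawn without replacement):

```python
# How many hits out of 25 Zener cards would be surprising under the null
# hypothesis of pure guessing (p = 1/5 per card)? Simplification: guesses
# are treated as independent, ignoring draw-without-replacement effects.
from scipy.stats import binom

n, p = 25, 1 / 5
for k in range(n + 1):
    p_at_least_k = binom.sf(k - 1, n, p)  # P(X >= k) under pure chance
    if p_at_least_k < 0.01:
        print(f"{k} or more hits out of {n}: p = {p_at_least_k:.4f} by chance")
        break
```

Anyone who reliably clears that threshold either has something weird going on or, more likely at one of my meetups, is dealing with a thumbed deck.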
Hence the Might Be Lying sign. Good glomarization means sometimes I use the Might Be Lying sign when I’m not actually lying: attendees shouldn’t be able to look at the sign and go “okay, the answer is he’s cheating the deck/psychic” without doing any tests. In theory I might be lying at any time, but ideally wearing the sign is a good signal that something different is going on; I claim people shouldn’t update much about my propensity to lie in normal life based on my propensity to lie when wearing the sign. But that’s getting a bit complicated for a meetup announcement.
Coming back to your assumptions:
I didn’t clearly state I was definitely going to lie about certain things, because sometimes I run the Zener Science straight without the thumb-on-the-scale variation and I’ll wear the sign there as part of a glomarization strategy. It’s in the context of a pretty specific experiment, and it’s parapsychology, not social science. (Though I can see myself wearing a Might Be Lying sign for similar reasons in other kinds of activities. Sometimes I don’t even need the sign, as in jimrandomh’s example of social-deception games.)
Funnily enough, I’ve kinda run that meetup too. I’d give myself an A- on signposting there, and cheerfully endorse people deciding not to go to meetups in that style, safe in the knowledge that I’m not going to try and make them use Crocker’s Rules at events announced as reading groups.
I’m actually pretty excited about doing some variation of Zener Science with a mix of people who believe in ESP and people who don’t, who were coming together in good faith to figure out what’s going on. Wiseman & Schlitz’s Experimenter Effects And The Remote Detection Of Staring sounds like a good afternoon to me.
And indeed, once or twice someone showed up to the Zener Science meetup who did believe in psychic powers. Whenever this happens I try to pivot to investigating how they think the psychic powers work and what we’d need to change about the test in order to provide evidence one way or another, without making them feel put on the spot or ~othered by being the one person out of a group to hold a contrary belief.
If you’d put in a link to a deleted tweet, I’d probably have believed it.
Suggested stance: emotional distance and compassion.
Your stance is focused on things (facts, reality) when it really should be on people (the person, your relationship with them, yourself).
Definitely don’t make any commitments, new payments, etc. to the person until you’ve figured out how you want to handle it, but other than that the object level is kinda irrelevant.
You should be protecting yourself, and making sure not to instinctively hurt the other person or your relationship with them.
While the examples in section II are good, this whole thing sounds to me like, to use a different sporting metaphor, “Everyone has a plan until your opponent serves to your backhand.” Most people experience politicians, journalists and the media, Scientists and Experts, lying to their faces routinely, and often as established policy. While I’m not allowed to give examples here, if this comes as news to you, first, you aren’t beating the quokka allegations, and second, it’s probably because you’ve fallen for these lies so comprehensively that you don’t even notice anymore.
You don’t seem to understand the distinction between ‘mislead’ and ‘lie to the face.’ Or, you know, you’re lying to our face. Which would be a stupid plan, but maybe you’re doing it on purpose.
Even politicians lying to their audience’s faces is quite rare, in the USA. That it’s become meaningfully popular in the last 10 years is considered extremely notable and a sign of the decay and apocalypse of the United States, and to lesser degrees other Western countries. Bill Clinton got caught lying under oath about a trivial but embarrassing matter and “I did not have sexual relations with that woman” might as well be the words on his tombstone even though he suffered no real material consequences for it, because it’s very rare and no one will forget it.
Lying is hard. Being misinformed, or interpreting true fair information in a biased way to reach a biased conclusion, or writing the bottom line first, or quoting true statements in such a way as to give a deliberately misleading impression, are easy. Politicians do them all the time, and this is expected. Journalists did it a little until the Web revolution and now do it more than that, and this is, again, considered very notable and a sign of the decay of the industry. Scientists and experts do it a little, and almost always the first two types, but when they do even a little of the latter types this makes people riot and consider it very notable and a sign of the decay of the field.
For anyone under 30, 10 years of political lying, or 20 years of journalists “lying”, is a long time and could be seen as a new reality, especially seeing as it seems to be working out quite well for the liars, so it’s unlikely to change anytime soon.
It’s still not been common until much more recently than that. Five years at most, which is not a new normal. It hasn’t been working out that well for anyone except Trump himself and there’s a decent chance it explodes on his death (which will almost certainly be before ’32).