Vague language (and communicating little more generally) also gives you plausible deniability for bending the truth.

Related: it's a common feature of Machiavellianism to "keep one's cards hidden" (12:25 here), i.e., not disclosing the motives behind one's actions and generally communicating little information.
People without anything to hide can build trust by communicating a lot and clearly.
The OP, as well as the other hypocrisy-favorable posts Abram linked here in the comments, seems to me to do a poor job of describing why anti-hypocrisy norms could be important. Edit: Or, actually, they seem to argue in favor of a slightly different concept, not what I'd call "hypocrisy." I like the definition given in the OP:
1. a feigning to be what one is not or to believe what one does not : behavior that contradicts what one claims to believe or feel
The OP then describes a case where someone thinks "behavior x is bad" but engages in x anyway. Note that, according to the definition given, this isn't necessarily hypocrisy! It only constitutes hypocrisy if you implicitly or explicitly lead others to believe that you never (or only very infrequently) do the bad thing yourself. If you engage in moral advocacy in an honest, humble, or even self-deprecating way, there's no hypocrisy. One might argue (e.g., Katja's argument) that it's inefficient to do moral advocacy without hypocrisy. That seems like dubious naive-consequentialist reasoning. Besides, I'm not sure it's empirically correct. (Again, I might be going off a subtly different definition of "hypocrisy.") I find arguments most convincing when the person making them seems honest and relatable. There are probably target audiences to whom this doesn't apply, but how important are those target audiences (e.g., they may also not be receptive to rational arguments)? I don't see what there is to lose by not falsely pretending to be a saint. The people who reject your ideas because you're not perfect were going to reject them anyway! That was never their true rejection – they're probably just morally apathetic / checked out. Or trolls.
The way I see it, hypocrisy is an attempt to get social credit via self-deception or deceiving others. All else equal, that seems clearly bad.
I’d say that the worst types of people are almost always extreme hypocrites, and they really can’t seem to help it. Whether it’s deceit of others or extreme self-deception, seeing this stuff in others is a red flag. I feel like it muddies the waters if you start to argue that hypocrisy is often okay.
I don’t disagree with the view in the OP, but I don’t like the framing. It argues not in favor of hypocrisy as it’s defined, but of something in the vicinity.
I feel like the framing of these “pro-hypocrisy” arguments should rather be “It’s okay to not always live up to your ideals, but also you should be honest about it.” Actual hypocrisy is bad, but it’s also bad to punish people for admitting imperfections. Perversely, by punishing people for not being moral saints, one incentivizes the bad type of hypocrisy. tl;dr hypocrisy is bad, fight me. (As you may notice, I do have a strong moral distaste for hypocrisy.)
In the UK there’s evidence that the Indian variant (B.1.617.2, hence “0.2”) is spreading rapidly in the population, outcompeting the UK variant. It may have reached >50% in some areas, probably including London. This could mess somewhat with the indoor reopening plans for next week, though given that the government mostly seems concerned with keeping hospitals from being overwhelmed, and that’s now easy to achieve with all the vaccinations, it could be that indoor stuff will be allowed despite relatively high and climbing infection levels. (The levels are still very low right now, but if the 0.2 variant is as contagious as it maybe seems, this could change really quickly and lead to massive spikes.)
Anti-realism is not quite correct here, it’s more that claims about external reality are meaningless as opposed to false.
This is semantics but I’d say what you’re describing fits the label “anti-realism” perfectly well. I wrote a post on Why Realists and Anti-Realists disagree. (It also mentions existence anti-realism briefly at the end.)
This raises the natural question: what if you gave an ape the buttons, and taught it from childhood, and put parent-level effort into it, not “70s research”-level effort? Perhaps the answer would surprise us.
The bonobo Kanzi had something very similar (“lexigrams”). And his sister Panbanisha was born in the research center and grew up with the lexigrams. As far as I’m aware, the research never generated extreme attention, so presumably the findings remained somewhat limited?
Bunny is quite obsessed with her bowel movements (how Freudian) and with her owners’ poop cycle.
As a YouTube comment on the video points out, maybe the dog is just trying to be polite by imitating the conversation topic of its family. People probably ask their dog all the time whether it needs or wants to go potty.
I feel like you can turn this point upside down. Even among primates that seem unusually docile, like orangutans, male-male competition can get violent and occasionally ends in death. Isn’t that evidence that power-seeking is hard to weed out? And why wouldn’t it be, in an evolved species that isn’t eusocial or otherwise genetically unusual?
Now you’re moving the goalposts. Of course you can find places that didn’t need lockdowns. I thought your position was that lockdowns were almost never/nowhere worth it. If your position is just “some locations didn’t need lockdowns (e.g., the ones where governments decided not to impose them)” – that’s extremely different. Whether lockdowns make sense has to be assessed case by case, because the virus (and new variants of concern) affected different locations differently.

In your other comment, you attribute a claim to me that I haven’t made (“you have provided zero support for your own claim that lockdowns do more good than harm”). All I did was say that I’m already skeptical because you were making the opposite claim with extremely poor and flawed arguments; I didn’t say I confidently disagreed with your conclusion. Pointing out the favorable mention of Ioannidis’s 0.15% IFR estimate isn’t “nitpicking of your evidence.” It’s damning that you rely on a source that does this – that estimate is off by a factor of three to seven. After more than a year of the pandemic, you simply cannot be off about the IFR by this much without looking quite bad. If someone (the person you were citing/recommending) writes an entire report on how bad lockdowns are but thinks the virus is at least three times less deadly than it actually is, that person seems incompetent and I cannot trust their reasoning enough to buy into the conclusion.

I will drop out of this discussion now.
Seconded. The situation in India looks worse than, but kind of comparable to, the rapid spikes in South Africa and the UK when new variants arose there. In both cases, the strong reaction induced by the threatening situation led to things stabilizing. It’s true that things might be worse for India, but 95% seems really quite high. Maybe you have a detailed model of why the situation is much worse in India now? If so, I’d be curious about the reasoning. (JTBC, I also think it’s likely that things will be quite bad, but I don’t immediately see why >60% for a worst-case scenario seems obviously warranted. There’s a chance that if I looked into this for 2h or heard some convincing arguments, I’d also update to >90%.)
The report you’re linking to contains this:

>Estimates of the IFR have continued to fall over the year. The latest meta-study by Ioannidis (March 2021) estimates the average global IFR at 0.15%.

That’s completely off, and so obviously and indefensibly so that it discredits the entire thing, IMO. Maybe there are economic arguments suggesting that alternatives to lockdown could be better, but it would be irresponsible to update on that based on arguments made by a person who cites Ioannidis’s IFR estimates favorably. Ioannidis is a crackpot when it comes to Covid. It’s ironic that you write “This image is a good example of how distorted pro-lockdown arguments are.”

I looked into IFR estimates quite a lot when I was following Covid, and I won a large forecasting tournament (and got 3rd in the year-long version): https://forum.effectivealtruism.org/posts/xwG5MGWsMosBo6u4A/lukas_gloor-s-shortform?commentId=ZNgmZ7qvbQpy394kG

Also, from the anti-lockdown side I’ve always wanted to know how to justify letting hospitals get so overwhelmed that people will die of appendicitis – basic health care collapsing for at least two weeks. Do we really want that if it’s avoidable? How would anyone feel as a doctor, nurse, caretaker, etc. if the government expects you to do triage under insane conditions when it’s totally avoidable? The anti-lockdown side has to engage with that argument. If you say the IFR is low enough that hospitals wouldn’t get overwhelmed without lockdowns, that’s simply not true and you’re engaging in wishful or ideologically clouded thinking. I’m open to arguments that we should accept a breakdown of civilization for 2+ weeks (and probably several times) if the “more hidden” consequences are otherwise extremely catastrophic, but then one has to be honest about the costs of a no-lockdown policy.
Edit to add: It’s a strawman that policymakers compare lockdown to “do nothing.” And by now, even the people who initially got it wrong have understood that there are control systems: many people will stop taking risks once they read about hospitals being overwhelmed. However, there’s a roughly two-week lag from infections to the peak of hospital overwhelm, and if the government isn’t strict enough, you overshoot extremely fast. You cannot assume that people will always time their behavior correctly to anticipate hospital overstrain that’s two weeks ahead. That’s what government is for.
I don’t think status is a zero-sum game. Some people may play it as such, unfortunately. But some ways to increase your social standing also confer benefits on others without anyone losing out. By being kind and considerate (as well as knowledgeable, competent, etc.), you can notice people’s good qualities and confer status on others, flattening the status hierarchy and making it more multi-dimensional (making sure different types of talents get noticed).

It also depends on what kind of status you’re after. If you care more about the approval of people with depth and good character, that’s easier to achieve in ways that build others up than if you care primarily about the shallowest metrics of status.
I did this and it worked really well. I spent maybe 3h on the training initially until it was mostly just showing me music, then clicked away the occasional non-music video suggestion for a couple of days until the music-only preference was completely locked in. I feel like I don’t get other suggestions anymore (and the habit of clicking them away is still installed anyway).
When I want to watch other YouTube videos, I use incognito mode (unfortunately that disables the adblocker).
I’m also curious if Oliver or Anna think there’s a difference between EA longtermist endeavors vs. the reference class you’ve drawn from (“scoring very highly on broadly accepted metrics of success”), and if so, how that difference manifests itself for having children.
Good points. “How should we respond” is also a strange framing IMO because it unquestioningly assumes that there’s a need to coordinate as a community (on LessWrong of all places, which isn’t even a Scott-themed subreddit or the commenters on his blog). Personally I think any coordination around this sort of thing is pretty weird and people should just do what they think they should do (and maybe that includes someone writing a personal post on why they want to boycott the newspaper, in the hope of inspiring some others, etc.).
The fact that the leaderboard has someone with a billion points, because they have been participating for years, is kind-of irrelevant, and misleading.
There are many leaderboards, including ones that only consider questions that opened recently. Or tournaments with a distinct start and end date.
(And this would do a far better job aligning incentives on questions than the current leaderboard system, since for a leaderboard system, proper scoring rules for points are not actually incentive compatible.)
This is true, but you can create leaderboards that minimize the incentive to use variance-increasing strategies (or variance-decreasing ones if you’re in the lead). (Basically, include a lot of questions so that variance-increasing strategies will most likely backfire, and have gradually increasing payouts for better rankings.) I agree that what you describe sounds ideal, and maybe it makes sense for Metaculists to think of the points in that way. For making it a reality, I worry that it would cost a lot. (And you’d need a solution to the problem that everyone who wants a few extra dollars could create an account and predict the community median on every question just to collect a fraction of the total prize pool.)
Yes, but it doesn’t take much time to just predict the community median when you don’t have a clue about a question and don’t want to take the time to get into it. However, as another commenter points out, this means that Metaculus rewards a combination of time put in + prediction skill, rather than prediction skill alone.
Metaculus points are not money, so positive points on a question don’t mean you’re a top predictor. However, they aren’t meaningless either: it’s about winning MORE points than the competition to win on the leaderboards. The incentive system is good for that (though there are some minor issues with variance-increasing strategies or questions with asymmetrical resolution timelines).
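To make the “proper scoring rule” point concrete, here’s a minimal sketch (the function name is mine, not Metaculus’s; Metaculus actually uses a log-score-based points formula, but the Brier score illustrates the same property). Under a proper scoring rule, reporting your true belief minimizes your expected penalty – which is exactly the property that relative leaderboard rankings can undermine, since there only beating other predictors pays.

```python
def expected_brier_penalty(p_true, report):
    """Expected squared-error penalty when the event happens with
    probability p_true and we report probability `report`.
    Penalty is (1 - report)^2 if the event happens, report^2 if not."""
    return p_true * (1 - report) ** 2 + (1 - p_true) * report ** 2

# With a true probability of 0.7, scan all reports on a 1%-grid:
# the expected penalty is minimized exactly at the honest report 0.7.
p = 0.7
reports = [i / 100 for i in range(101)]
best = min(reports, key=lambda r: expected_brier_penalty(p, r))
print(best)  # 0.7 – honest reporting minimizes expected penalty
```

This is what makes the rule “proper”: in expectation, you can’t gain by shading your report up or down. But a predictor who only cares about finishing first on a leaderboard may still prefer a variance-increasing (dishonestly extreme) report, because a small chance of a huge score can beat a safe middling one.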
Building infrastructure and setting up preparations for doing this thoroughly could be an interesting safeguard against future pandemics worse than Covid. But I think there’s a big problem with continuing to run hospitals and caretaking facilities, and caretaking in general.
I’m similar and haven’t found anything that works well. Reading how most EAs talk about their self-improvement “life hacks” always makes me think “fuck you, lol.” I constantly alternate between periods where I’m trying lots of good routines at once and am somewhat productive, and periods where things have fallen apart and I’m unproductive. In my experience, most of the leverage to be gained is in reducing the difference between these two states by not punishing myself for falling off the wave, i.e., getting right back into the attempts after a bad day or five. And if I’m on the wave, I try to be extra cautious about avoiding things that could derail me.

I took time off work late last year for personal reasons and used the opportunity to start some deeper-reaching attempts at mindset improvement based on CBT, visualizing my ideal day, and so on. I’m about to start schema therapy. Ideally I’d do the exercises daily, but that’s already challenging for obvious reasons. I haven’t noticed any productivity improvements so far, but I’m at least feeling better about myself.
I agree. I think of myself as a utilitarian in the same subjective sense that I (kind of) identify with voting Democrat (not that I’m a US citizen). I disagree with Republican values, but it wouldn’t even occur to me to poison a Republican neighbor’s tea so they can’t go voting. Sure, there’s a sense in which one could interpret “Democrat values” fanatically, so that they imply I prefer worlds where the neighbor doesn’t vote, and then we’re tempted to wonder whether the ends justify the means in certain situations. But thinking like that seems like a category error if the sense in which I consider myself a Democrat is just one part of my larger political views, where I also think of things in terms of respecting the political process. So it’s the same with morality and my negative utilitarianism. Utilitarianism is my altruism-inspired life goal, the reason I get up in the morning, the thing I’d vote for and put effort towards. But it’s not what I think is the universal law for everyone. Contractualism is how I deal with the fact that other people have life goals different from mine. Nowadays, whenever I see discussions like “Is classical utilitarianism right, or is it negative utilitarianism after all?” – I cringe.
So the emerging wisdom is that the SA variant is less contagious, or are you just using 20% as an example? The fact that SA is currently at the height of summer, and that they went from “things largely under control” to “more hospitalizations and deaths than the 1st wave in their winter” in a short amount of time, makes me suspect that the SA variant is at least as contagious as the UK variant. (I’m largely ignoring politicians bickering at each other over this, and of course if there’s already been research on this question then I’ll immediately quit speculating!)