LessWrong developer, rationalist since the Overcoming Bias days. Connoisseur of jargon.
On calorie restriction: The last time I looked into this the data was very thin, such that finding out a major study had faked data might reduce the experimental evidence base to zero. If researchers were giving the calorie-restriction group extra food that they weren’t supposed to, it’s also likely that it was different food, which might itself explain a lifespan difference (if that food was healthier).
I think the prior is mostly against calorie restriction: looking at which marginal energy expenditures would be reduced during calorie restriction, two big ones are immunity and damage repair, both of which are areas where you’d worry that under-expenditure produces aging-like problems. (Worse, if calorie restriction weakens immunity and this causes aging, this isn’t likely to show up in animal studies, since animals have a different set of pathogens to worry about.)
On metformin: I followed Campbell et al. to the first of the four metformin-mortality studies it cites, Bannister et al. I’m not entirely sure what exactly the trick is with this study, but I’m pretty sure it’s tricky in some way. This observational study found lower mortality in type 2 diabetics with a mean A1c of 8.6% on metformin than in matched non-diabetic controls. Given that metformin is already the standard of care for T2DM, this would seem to imply that being diabetic reduces the risk of death, rather than increasing it, which is nonsense.
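One candidate mechanism (my speculation about the general class of trick, not a claim about what Bannister et al. actually did) is informative censoring: patients whose diabetes progresses get switched off metformin and drop out of the metformin cohort, so the on-metformin person-time is enriched with healthy years. A toy simulation, with all hazard numbers invented for illustration:

```python
import random

random.seed(0)

def death_prob(frailty, diabetic):
    """Annual death probability. Diabetics are strictly higher-risk
    than controls at every frailty level, by construction."""
    return 0.01 + 0.05 * frailty + (0.005 if diabetic else 0.0)

def simulate(n=100_000, years=20, progression=0.5):
    """Compare death rates per person-year in a metformin cohort vs
    matched non-diabetic controls, censoring diabetics when they
    'progress' to other therapy (more likely for frailer patients)."""
    deaths = {"metformin": 0, "control": 0}
    person_years = {"metformin": 0, "control": 0}
    for _ in range(n):
        frailty = random.random()  # shared by the matched pair
        for group in ("metformin", "control"):
            diabetic = group == "metformin"
            for _ in range(years):
                person_years[group] += 1
                if random.random() < death_prob(frailty, diabetic):
                    deaths[group] += 1
                    break
                # Frail diabetics tend to progress off metformin early,
                # taking their remaining (risky) person-time with them.
                if diabetic and random.random() < progression * frailty:
                    break
    return {g: deaths[g] / person_years[g] for g in deaths}

rates = simulate()
```

In this toy model the metformin group’s death rate per person-year comes out below the control group’s, even though every diabetic is strictly higher-risk than their matched control; a naive reading would conclude that diabetes-plus-metformin is protective.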
This was initially written in response to “Communicating effective altruism better—Jargon” by Rob Wiblin (Facebook link), but stands alone well and says something important. Rob argues that we should make more of an effort to use common language and avoid jargon, especially when communicating to audiences outside one’s subculture.
If you’re writing for a particular audience and can do an editing pass, then yes, you should cut out any jargon that your audience won’t understand. A failure to communicate is a failure to communicate, and there are no excuses. For public speaking and outreach, your suggestions are good.
But I worry that people will treat your suggestions as applying in general, and try to extinguish jargon terms from their lexicon. People have only a limited ability to code-switch. Most of the time, there’s no editing pass, and the processes of writing and thinking are commingled. The practical upshot is that people are navigating a tradeoff between using a vocabulary that’s widely understood outside their subculture, and using the vocabulary that’s best for thinking clearly and communicating within it.
When it comes to thinking clearly, some of the jargon is load-bearing. Some of it is much more load-bearing than it looks. On the margin, people should be using jargon more.
I’m the author of Rationality Cardinality (http://carddb.rationalitycardinality.com/card/all/). The premise of the game is that I curated a collection of concepts I thought it was important for people to be familiar with, optimized the definitions, and mixed them together with some jokes. I’ve given a lot of thought to what makes good jargon terms, and to the effects that using and being immersed in jargon has on people.
I’m also a developer of LessWrong, a notoriously jargon-heavy site. We recently integrated a wiki, and made it so that if a jargon term links to the appropriate wiki page, you can hover over it for a quick definition. In the medium to long term, we hope to also have some mechanisms for getting jargon terms linked without the post author needing to do it, like having readers submit suggested linkifications, or a jargon-bot similar to what they have on the SpaceX wiki (which scans for keywords and posts a comment with definitions of all of them).
Jargon condenses ideas, but the benefit of condensation isn’t speed. Short phrases are more accessible to our thoughts, and more composable. The price of replacing “steelmanning” with “giving the best defense of a position” is that you less often notice that steelmanning is an option, or that someone is doing it. The price of replacing “Moloch” with “coordination problems” is that you stop noticing when what look like villain-shaped problems are actually coordination problems instead.
Much of our jargon is writers’ crystallized opinions about which concepts we should have available, and the jargon itself is the mechanism for making those concepts available. If we reject those opinions, we will not notice what we fail to notice. We will simply see less clearly.
Appendix: A few illustrative examples from the slides
If I replaced the term “updated” with “changed my mind” in my lexicon, then I’d get tripped up whenever I wanted to tell someone my probability estimate had gone from 10% to 20%, or (worse) when I wanted to tell them my probability estimate had gone up, but didn’t want to commit to a new estimate. I.e., the power of the word “updating” is not that it’s extra precise; it’s that it’s *imprecise* in a way that’s useful.
Replacing “agenty” with “proactive and independent-minded” feels like obliterating the concept entirely, in a way that feels distinctly Orwellian. I think what’s actually going on here is that this concept requires a lot more words to communicate, but it also happens to be a concept that the villains in Orwell’s universe would actually try to erase, and this substitution would actually erase it.
Replacing “credence” with “estimate of the probability” would imply the existence of a person-independent probability to be argued over. This is a common misunderstanding, attached to a conversational trap, and this trap is enough of a problem in practice that I think I’d rather be occasionally inscrutable than lead people into it.
(When we curate something, admins get an email version of the post immediately, then everyone else who’s subscribed to curated gets the email after a 20 minute delay. Sometimes we notice a formatting problem with the email, un-curate during the 20 minute window, then re-curate it; that’s what happened in this case.)
He has asked for help numerous times, but without giving much detail on what he has tried. For example, he said he had tried “paleo”, but paleo is a very vague term. People suggest things and he advises that he already tried them, without giving details on exactly what he did. He has not published blood tests or other diagnostics, etc., as far as I can tell, so it is very difficult to know what the problem which he calls “metabolic disprivilege” is. Do his close relatives have particular weight problems? What are the genetic tests showing in terms of genes known to inhibit fat mobilization? Does blood sugar go down when dieting? We do not know. I would expect rationality to dictate a rapid exploration of this problem, but I do not see it.
Lots of things like this were explored, just not in public, for the usual medical-privacy reasons. During the period in which he was posting bounties on Facebook for things like blood tests to try, he was also working with a smaller group of doctors and community members (including myself) with greater information access. There are interesting takeaways about metabolism and diet from that project, which could be written up some day, but I’m not aware of anything which would warrant a retraction.
Generally speaking, if someone commits heinous and unambiguous crimes in service of an objective like “getting people to read X”, and it doesn’t look like they’re doing a tricky reverse-psychology thing or anything like that, then we should not cooperate with that objective. If Kaczynski had posted his manifesto on LessWrong, I would feel comfortable deleting it and any links to it, and I would encourage the moderator of any other forum to do the same under those circumstances.
But this is a specific and unusual circumstance. When people try to cancel each other, usually there’s no connection, or a very tenuous connection, between their writing and what they’re accused of. (Also the crime is usually less severe and less well proven.) In that case, the argument is different: the people doing the cancelling think that the crime wasn’t adequately punished, and are trying to create justice via a distributed minor punishment. If people are right about whether the thing is bad, then the main issues are standards of evidence (biased readings and out-of-context quotes go a long way), proportionality (it’s not worth blowing up people’s lives over having said something dumb on the internet), and the treatment of non-punishers (problems happen when things escalate from telling people why someone is bad, to punishing people for not believing it or not caring).
I am working on a longer review of the various pieces of PPE that are available, now that manufacturers have had time to catch up to demand. That review will take some time, though, and I think it’s important to say this now:
The high end of PPE that you can buy today is good enough to make social distancing unnecessary, even if you are risk averse, and is more comfortable and more practical for long-duration wear than a regular mask. I don’t mean the Biovyzr (which has not yet shipped all the parts for its first batch) or the AIR Microclimate (which has not yet shipped anything), though these hold great promise and may turn out to be good budget options.
If you have a thousand dollars to spare, you can get a 3M Versaflo TR-300N+. This is a hospital-grade positive air pressure respirator with a pile of certifications; it is effective at protecting you from getting COVID from others. Most of the air leaves through filter fabric under the chin, which I expect makes it about as effective at protecting others from you as an N95. Using it does not require a fit test, but I performed one anyway with Bitrex, and it passed (I could not pass a fit test with a conventional face mask except by taping the edges to my skin). The Versaflo doesn’t block the view of your mouth, gives good-quality fresh air with no breathing resistance, and doesn’t muffle sound very much. Most importantly, Amazon has it in stock (https://www.amazon.com/dp/B07J4WCK6R), so it doesn’t involve a long delay or worry about whether a small startup will come through.
The most important thing on the debate stage (I believe it was the same stage for both the presidential and vice-presidential debates) is not visible due to the camera angles in most of the photographs, but is visible in e.g. this photo: about 12ft directly above each candidate’s head is an air vent pointed down, connected to a long air tube leading offstage (presumably outside or to some other known-good air source). Those are presumably positive pressure, to prevent the candidates from breathing any air from the audience. So assuming the flow rate was decent, I think modeling the debate as “indoor” for purposes of candidate-to-candidate transmission is probably a mistake; both candidates were likely well protected from each other’s air. (The audience and the moderator, on the other hand, would have been at risk from Trump, unless there are additional vents I haven’t spotted.)
Programming language popularity is mostly driven by a positive feedback loop between which languages the projects/libraries/resources are written in, and which language the developers are most experienced and comfortable with. The properties of the languages do matter, since people will sometimes ignore the preexisting resources and use the language they think is best, and second-movers sometimes have an advantage in getting to use the lessons learned from a successful library. Causally speaking, though, language popularity today is mostly the result of language popularity yesterday.
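The feedback loop can be illustrated with a toy Pólya-urn model (my sketch, with invented parameters, not a serious model of real adoption dynamics): each new project picks a language in proportion to how many existing projects already use it, so two identical languages end up with wildly different market shares depending on early luck.

```python
import random

def adoption_race(n_projects=2000, seed=0):
    """Pólya urn: two equally good languages start with one project
    each; every new project adopts a language with probability
    proportional to its current usage."""
    rng = random.Random(seed)
    counts = {"lang_a": 1, "lang_b": 1}
    for _ in range(n_projects):
        share_a = counts["lang_a"] / (counts["lang_a"] + counts["lang_b"])
        pick = "lang_a" if rng.random() < share_a else "lang_b"
        counts[pick] += 1
    return counts["lang_a"] / (counts["lang_a"] + counts["lang_b"])

# Same "language quality" in every run; the only difference is early noise.
shares = [adoption_race(seed=s) for s in range(30)]
```

Across runs, lang_a’s final share ends up spread over most of the [0, 1] interval: popularity today is mostly popularity yesterday, compounded.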
From a market-equilibrium perspective: the amount of social status given to these content creators is high because in most cases that’s the only compensation they’re getting, for (in some cases) quite a significant number of hours. The reason there aren’t more people using their time to claim those status rewards is that, for a similar amount of effort, they could instead be earning salaries.
For most people, climate change is pretty much the only world-scale issue they’ve heard of. That makes it very important in relative terms: it has a world-scale impact, and no other issue they’re familiar with does.
LessWrong has a history of dealing with other world-scale issues, and EA (an overlapping neighboring community) likes to make a habit of pointing out all the cause areas and weighing them against each other. When climate change is weighed against AI risk, animal welfare, biosafety, developing-world poverty, and various meta-level options… well, AGW didn’t get any less important in absolute terms, but you can see why people’s enthusiasm and concern might lie elsewhere.
As a secondary issue, this is a community that prides itself on having high epistemic standards, so when the advocates of a cause area have conspicuously low epistemic standards, it winds up being a significant turn-off. When you have a skeptical eye, you automatically notice when people make overblown claims, or recommend interventions that obviously won’t help or will do more harm than good. Most of what I see about AGW on social media and on newspaper front pages falls into these categories, and while this fact isn’t going to show up on any cause-prioritization spreadsheet, on a gut level it’s a major turn-off.
For an example of what I’m talking about, look into the publicity surrounding hydrogen cars. They’re not a viable technology, and this is obvious to sufficiently smart people, but because they claim to be relevant to AGW, they get a lot of press anyway. The result is a con-artist magnet and a strong ick-feeling which radiates one conceptual level out, to AGW interventions in general.
I haven’t looked into the specific study you mention (about a neural net predicting sexuality), but the usual trick in papers like this is to oversell predictors with very low accuracy scores, which they can get away with because most people don’t know what the scores mean. For example, if you had a sexuality-by-demographics table and could estimate age and race from a picture, you would be able to “predict” a person’s sexuality with only the picture; you just wouldn’t have a good accuracy score. A neural net can do the same thing.
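To make the mechanism concrete, here’s a toy version (invented base rates, nothing to do with the actual paper): a “classifier” whose score is just the demographic group’s base rate achieves an AUC meaningfully above 0.5 and a high majority-class accuracy, while using no information about the individual at all.

```python
import random

random.seed(1)

# Two hypothetical demographic groups with different base rates of a
# binary trait (numbers invented for illustration).
base_rate = {"group_a": 0.05, "group_b": 0.10}

people = []
for group, p in base_rate.items():
    for _ in range(50_000):
        people.append((group, random.random() < p))

# The "classifier": score each person by their group's base rate.
pos_scores = [base_rate[g] for g, y in people if y]
neg_scores = [base_rate[g] for g, y in people if not y]

def auc(pos, neg):
    """Exact AUC: P(pos score > neg score) + 0.5 * P(tie)."""
    wins = ties = 0
    pos_counts = {s: pos.count(s) for s in set(pos)}
    neg_counts = {s: neg.count(s) for s in set(neg)}
    for sp, n_pos in pos_counts.items():
        for sn, n_neg in neg_counts.items():
            if sp > sn:
                wins += n_pos * n_neg
            elif sp == sn:
                ties += n_pos * n_neg
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Majority-class "accuracy": always predict the trait is absent.
accuracy = sum(1 for _, y in people if not y) / len(people)
```

Here the AUC lands around 0.59 and the accuracy around 0.93; both can be quoted as “the model predicts X”, but neither says anything about any individual picture.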
Regarding whether this is evidence, see “What Is Evidence?” from R:A-Z. That definition is much broader than the one you use here, but it is a more useful one.
According to FedEx tracking, I will have a Biovyzr on Thursday. I plan to immediately start testing it, and to write a review.
What tests would people like me to perform?
Tests that I’m already planning to perform:
To test its protectiveness, the main test I plan to perform is a modified Bitrex fit test: create a bitter-tasting aerosol, and confirm that you can’t taste it. The normal test procedure won’t work as-is, because the Biovyzr is too large to fit under a plastic test hood, so I plan to go into a small room and have someone (wearing a respirator themselves) spray copious amounts of Bitrex at the intake fan and at any spots that seem high-risk for leaks.
To test that air exiting the Biovyzr is being filtered, I plan to put on a regular N95, use the inside-out glove to create Bitrex aerosol inside the Biovyzr, and see whether someone in the room without a mask is able to smell it.
I will verify that the Biovyzr is positive-pressure by running a straw through an edge, creating an artificial leak, and seeing which way the air flows through the leak.
I will have everyone in my house try wearing it (5 adults of varied sizes), have them all rate its fit and comfort, and get as many of them to do Bitrex fit tests as I can.
Tier 2: Into the Breach
Tier 3: Monument Valley; Infinifactory (Zachtronics)
Tier 4: The Pedestrian; klocki; Bridge Constructor (series); Neverout (VR)
Not recommended: The Ball; Four Ways; Colorgrid
It seems to me that this is bordering on saying that persons who made a different choice from yours are therefore not just wrong, but suffering from something: their brain is not working properly and they need to be taught how to make better choices, where “better” obviously means more in line with the choice you would make.
It’s not about the choice in isolation, it’s the mismatch between stated goals and actions. If someone says they want to save money, and they spend tens of hours of their time to avoid a $5 expense when there was a $500 expense they could have avoided with the same effort, then they aren’t doing the best thing for their stated goal. Scope-insensitivity problems like this are very common, because quantifying and comparing things is a skill that not everyone has; this causes a huge amount of wasted resources and effort. That doesn’t mean everything that looks like an example of scope insensitivity actually is one; people may have other, unstated goals. In the classic study with birds and oil ponds, for example, people might spend a little money to make themselves look good to the experimenter.
(I would also note that, while the classic birds-and-oil-ponds study is often used as an illustrative example, most people’s belief that scope insensitivity exists and is a problem does not rely on that example, and other examples are easy to find.)
I feel like Aumann’s Agreement Theorem is one of those concepts which the community was originally excited about, but which didn’t quite pan out. It’s valid as a piece of math, but people want to use it as shorthand for “the fact that we disagree means one of us must be being irrational”, when that is not the case. The reason is that it’s not enough for both people to be Bayesian agents, not enough for each person to know that the other is a Bayesian agent, not enough for each to know that the other knows, etc.; they need actual common knowledge. And then it turns out that people mostly aren’t Bayesian agents. And that’s before getting into the weird anthropic stuff, where there are facts and pieces of evidence that aren’t person-symmetric; e.g., I may think that my subjective experience makes futures in which I-in-particular am mass-copied more likely, but someone else should not believe this.
It doesn’t; this feature didn’t survive the switchover from old-LW to LW2.0.
Pedantically speaking, whether this is true depends on what you mean by “it”: owning up to it [a fact about the world external to oneself] does not make it [that fact] worse, but if your psychology can’t handle unpleasant truths, then owning up to it [a specific fact about the external world] may make it [the world as a whole] worse.
But this is a bit of a dodge; I think the right way to look at it is that, in most cases, a false belief is a form of debt: you’ll probably have to own up to it eventually, and there’s a cost to be paid when you do, but time-shifting that cost further into the future creates additional costs, because you make worse decisions and form other incorrect beliefs in the meantime.
Modern war is not total war; the fact that aircraft carriers can be destroyed is mostly irrelevant, because any conflict that escalates to the point where aircraft carriers are being sunk has a very high risk of also escalating to nuclear armageddon. Modern military operations mostly consist of projecting force into asymmetric conflicts, and aircraft carriers do well at this.
IRV advocacy confuses me greatly. First past the post voting is the worst voting system in widespread use; instant runoff is bad, but not as bad. Approval voting is actually good. I don’t think either of these facts is seriously disputed. Why, if you’re going to switch voting system, would you switch to one that’s merely less-bad, rather than switch directly to the best one? Why sabotage your voting-reform movement by choosing something that people will be right to hesitate about?
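A standard center-squeeze example shows the kind of failure that makes IRV merely less-bad (a toy electorate I made up for illustration): the broadly acceptable centrist beats either rival head-to-head, but IRV eliminates them first, while approval voting (here approximated as everyone approving their top two choices) elects them.

```python
from collections import Counter

# Toy electorate: A and C are polarizing, B is the centrist everyone
# can live with. Each entry is (ranking, number of voters).
ballots = [
    (("A", "B", "C"), 40),
    (("C", "B", "A"), 35),
    (("B", "A", "C"), 25),
]

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the
    fewest first-place votes until someone has a majority."""
    remaining = {c for ranking, _ in ballots for c in ranking}
    while True:
        tally = Counter()
        for ranking, n in ballots:
            for c in ranking:
                if c in remaining:
                    tally[c] += n
                    break
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > sum(tally.values()):
            return leader
        remaining.remove(min(tally, key=tally.get))

def approval_winner(ballots, approve_top=2):
    """Approval voting, approximated: everyone approves their top two."""
    tally = Counter()
    for ranking, n in ballots:
        for c in ranking[:approve_top]:
            tally[c] += n
    return tally.most_common(1)[0][0]

def pairwise_winner(ballots, x, y):
    """Head-to-head winner between candidates x and y."""
    x_votes = sum(n for ranking, n in ballots
                  if ranking.index(x) < ranking.index(y))
    return x if 2 * x_votes > sum(n for _, n in ballots) else y
```

Here B beats both A and C head-to-head (the Condorcet winner) and wins under approval, but IRV eliminates B in the first round and elects A.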
It depends on how tightly you draw the analogy. If your takeaway from the boots story is that buying better versions of commodity manufactured goods like shoes is the key part, then it’s pretty clearly false, if only because those goods, even in aggregate, don’t make up a large enough share of anyone’s budget.
If you broaden it to include expenditure and accumulation of all resources, not just money, then it’s mostly true. In a given year, a person might work a minimum wage job (have more money now, less money later—cheap boots) or attend a programming bootcamp (have less money now, more money later—expensive boots). They might eat cheap unhealthy food (have more money now, face problems later), or high-quality more expensive food (have less money now, fewer problems later). And so on, repeated across many kinds of decisions, and many kinds of resources.