Words and Implications
Professor Quirrell didn’t care what your expression looked like; he cared which states of mind made it likely.
Words should not always be taken at face value. Presumably you know this. You probably have some heuristics about specific situations or claims in which a person’s words should not be taken literally. But I think most people’s heuristics here are far too narrow—that is, most people take words literally far too often.
The Sequences talk about habitually asking, in everyday life, “What do I think I know, and how do I think I know it? What physical process produced this belief?”. I suggest a similar habit for words in everyday life: “What is being said, and why is it being said? What physical process produced these words?”.
This post is a bunch of examples, in an attempt to goad your system-1 into looking past surface-level meanings more often.
Once or twice a week, I’ll hear my girlfriend yell from the kitchen “Joooooohn! Why are there so many dirty dishes in the sink?”. Going to wash the dishes is not the correct response to this.
If I go wash the dishes, then she will quite consistently find something else to complain about in the meantime—floor needs sweeping, nothing to eat, neighbors are noisy, etc. Usually multiple other things. It was never really about the dishes in the first place, after all. Really, she’s stressed and looking for an outlet.
A hug fixes the problem much more effectively than washing the dishes would.
The general mental motions required to notice this are something like:
Stop. Don’t just go wash the dishes.
Ask why this is coming up, and in particular why it’s coming up right now specifically. Is there any particular reason the dishes are relevant right now? (Sometimes the answer is “yes”, and then it’s time to go do the dishes.)
If there isn’t a reason why the dishes are relevant right now, then I need to figure out the actual reason for the complaint.
This scene from the movie Limitless is a similar example. It’s a bit over-the-top, but it’s one of the few examples of a supposedly-intelligent person in a Hollywood movie actually doing something intelligent (as opposed to technobabble).
Designers and Engineers
If you work in software, one problem you’ve probably encountered on the job is “the thing they literally ask for is only very loosely correlated with the thing they actually want”.
A designer or product manager comes to a software engineer with some crazy request. They want to redesign a particular button, move it to a different place on the page, and change what it does, but still keep it the same button. A very confused engineer asks “What on earth does that even mean? How is it supposed to be the same button when everything has changed?”. After far too many questions, it turns out that the product team just wanted to re-use the tracking from the old button, because adding new columns to their data is annoying. Main point: the thing they literally ask for is only very loosely correlated with the thing they actually want.
Meanwhile, that same product team is testing out a prototype with potential users and collecting feedback. The users have all sorts of crazy requests. One of them wants a summary page with a bunch of app-internal numbers on it. After asking far too many questions, the product team figures out that what this user actually wants is a way to generate receipts for their customers. Once again, the thing they literally ask for is only very loosely correlated with the thing they actually want.
Down the hall, a manager asks an analyst for the click-through rate on the checkout screen. What the manager actually wants to know is whether lowering prices would lead to more sales. Whether that click-through rate is a good proxy for customers’ price sensitivity is the sort of question the analyst needs to answer, which means the analyst needs to figure out that that’s the real question in the first place. The thing they literally ask for is only very loosely correlated with the thing they actually want.
In another office, the COO has found a used conveyor system on sale and wants to buy it for the warehouse. They ask a lawyer to write up the contract for the purchase, and to keep it simple—just a straightforward asset purchase. The COO probably hasn’t even thought about what happens if the conveyor is defective; the lawyer needs to realize that the COO probably wants the contract to cover any potential problems, even though the COO has not thought about it. The thing they literally ask for...
We could go on all day.
In the information/knowledge economy, a key part of most jobs is realizing that what someone literally asks for is only very loosely correlated with what they actually want. The mental motions required to handle such problems effectively are much the same as the previous section:
Stop. Don’t just immediately do what was literally requested.
Ask why this particular request was made. Is there an obvious goal, and is this clearly the best way to achieve that goal?
If not, what’s the real goal, and what’s the best way to achieve that goal?
I was on vacation in Mexico with my parents and siblings. We were headed to a touristy beach-park, and took a cab. The cab driver “helpfully” suggested an alternative touristy beach-park. This sounded like the sort of “helpful” suggestion which would earn the driver a kickback, and this hypothesis was promptly confirmed when she handed my mother a laminated advertisement for the place.
… at which point my mother leaned over and said “Hey this looks pretty nice! And it’s even pretty cheap.”
Unrolled into a dialogue, my reaction to this was something like…
Inner voice 1: “Huh??? There isn’t any information about niceness or price on that piece of paper.”
Inner voice 2: “It’s the literal content of the words. There’s a price written on there, and it’s a bit lower than the place we’re going.”
Inner voice 1: “Ok, but what does the number on that piece of paper have to do with the amount of money which would change hands at the gate to this place? It’s an ad aimed at tourists, it’s almost certainly misleading. And same with the pictures.”
Inner voice 2: “I don’t think your mother realizes that.”
Inner voice 1: <exasperated sigh>
Point of the story: obviously do not trust information from advertisements or salespeople.
The one exception to this is information which does not seem tailored to make the sale happen. That said, be careful—salespeople can get kickbacks in nonobvious ways. Today’s car dealers, for instance, make most of their money on kickbacks from financing and warranty deals rather than the car itself. (I know this from firsthand experience—I worked at an online car dealership a few years back.)
I won’t do a political example here, but this also includes politicians. It especially includes politicians from your own preferred party. Also note that politicians tend to rely more on bullshit than lies, relative to salespeople—it’s not just a question of whether their words are “trustworthy”, but of whether they have any correspondence to the real world at all. Ask what physical process resulted in these particular words, and often the answer will be “signalling group loyalty” or “polled well with constituents”, with physical reality playing no significant role.
The Parable Of The Dagger
People tend to generalize “don’t trust ads/salespeople” to a heuristic like “be suspicious of the incentives behind information-sharing”. This isn’t a bad heuristic, but it’s the sort of heuristic which makes it a little too easy to miss the more general rule. The taxi driver trying to sell us on a beach-park is highly salient, but the key underlying factor is that the letters and numbers on the piece of paper do not necessarily have anything to do with the amount of money changing hands at the gate. “Be suspicious of incentives” is less general than “ask what causal process resulted in these words”.
The parable of the dagger makes this point more directly. A jester has angered the king (with a tricky logic puzzle) and been thrown in the dungeon. The king sets up a puzzle for him...
The jester was brought before the king in chains, and shown two boxes.
“One box contains a key,” said the king, “to unlock your chains; and if you find the key you are free. But the other box contains a dagger for your heart, if you fail.”
And the first box was inscribed:
“Either both inscriptions are true, or both inscriptions are false.”
And the second box was inscribed:
“This box contains the key.”
The jester correctly reasons through the puzzle, and picks the second box, only to find that it contains the dagger.
“How?!” cried the jester in horror, as he was dragged away. “It’s logically impossible!”
“It is entirely possible,” replied the king. “I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.”
The steps the jester would need to take to avoid his death are quite similar to the mental motions from earlier:
Stop. Don’t just take the words written on the boxes at face value.
Is there an obvious reason the words on the boxes would accurately predict my fate?
If not, then what is the king really up to?
Finally, a more complex example.
An evo-devo class I was sitting in on assigned this paper. The experimenters were interested in the evolution of Hox genes—genes typically used in animals to establish different roles for different body segments along the head-to-tail axis (e.g. the segments of an ant or bee, the sections of a human spine, etc). They found Hox-analogues in a sea anemone—rather odd, since the anemone doesn’t have the sort of specialized head-to-tail segments with which Hox genes are usually associated. So, the experimenters investigated the role of those genes in the anemone specifically.
The highlight of the paper was this image, which tells most of the story on its own:
Those colors each represent the activity of one Hox protein. They behave exactly like they do in other animals, with each protein lighting up one segment further than the preceding protein… except rather than lighting up head-to-tail along the length of the animal, they’re ordered axially around the animal. (The order in which they light up is determined by the order in which their genes appear in the genome—the genes are in a line, so that a repressor/promoter targeting one will also repress/promote the Hox genes after it.)
The experimenters then use both RNA interference and CRISPR (in separate experiments) to suppress/knock out specific Hox genes, and show that this results in some of the segments “merging”—which in turn gives the anemone merged tentacles from those segments.
Here’s the weird thing: the results from the RNAi experiments were much more impressive, and much more detailed, than the results from the CRISPR experiments. (You can see that visually in the figures above: the tentacles merge much more dramatically in the RNAi examples on the top than in the CRISPR examples on the bottom.) Why?
You might guess that there’s some biological weirdness going on, random things interfering with other random things, as sometimes happens in the messiness of biological systems. But my guess is that it’s mostly not about the underlying biology. Reading between the lines of the paper, it sounds like the lab has lots of expertise and experience with RNA interference methods. But CRISPR was the hot new thing, so probably some grad student or reviewer suggested that the paper would be sexier if they threw in a quick CRISPR experiment. The lab lacked experience with this sort of genetic engineering, so probably they just didn’t do it in the most effective way, and ended up with less-impressive results for that experiment. (The class professor, who was familiar with past work from this lab, confirmed that this sounded likely.)
As always, it’s the same basic steps:
Stop. Don’t just assume that the words and figures in the paper are directly representative of the system under study.
Ask where these words and figures came from. What did the experimenters actually do, what thoughts went through their heads?
To the extent that the words and figures reflect something other than the system under study, what can we deduce from them?
One particularly common case is researchers claiming implications which their data do not establish—especially “X causes Y”. (I won’t provide an example, partly because I don’t usually save those papers.) As always, we need to look at the actual process which generated the words: what experiments were actually run, what were the results, and do they actually establish the causal claim? Are the results not just necessary but sufficient to establish causality? If not, what information can we glean from the results?
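To make the causal point concrete, here’s a toy simulation (my own illustration, not from any paper mentioned here): a hidden confounder Z drives both X and Y, so X and Y are strongly correlated in observational data, yet setting X by fiat does nothing to Y. Correlated results alone are consistent with a causal claim but do not suffice to establish it.

```python
import random

random.seed(0)

def observe(n=10_000):
    """Observational data: a hidden confounder Z drives both X and Y.
    X has no causal effect on Y at all."""
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)
        xs.append(z + random.gauss(0, 0.1))  # X is mostly determined by Z
        ys.append(z + random.gauss(0, 0.1))  # so is Y
    return xs, ys

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

def intervene(x_value, n=10_000):
    """An intervention do(X = x_value): X is set by fiat, so it no longer
    tracks Z, and Y (which only ever depended on Z) is unaffected."""
    ys = [random.gauss(0, 1) + random.gauss(0, 0.1) for _ in range(n)]
    return sum(ys) / n  # mean of Y under the intervention

xs, ys = observe()
print(correlation(xs, ys))           # strong observational correlation (near 1)
print(intervene(-2), intervene(+2))  # forcing X barely moves Y's mean (near 0)
```

An experiment that only reports the observational correlation would “show” that X predicts Y nearly perfectly, and a careless abstract might claim X causes Y; only the interventional check reveals otherwise. That’s the kind of gap to look for between a paper’s results and its stated implications.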
Most people have a variety of heuristics about when (not) to take words at face value: beware of ulterior motives, beware of people asking for things they don’t understand, check whether claims in a paper’s abstract are actually established by the results. These are good heuristics to have, but they make it easy to overlook the more general technique: “What is being said, and why is it being said? What physical process produced these words?”.
The basic mental steps:
Stop. Don’t just automatically take the words at face value.
Ask what physical process generated the words. Where did they come from? Why these particular words at this particular time?
What can we deduce from the fact that the words were spoken, other than the literal content?