Empirical claims, preference claims, and attitude claims

What do the following statements have in common?

  • “Atlas Shrugged is the best book ever written.”

  • “You break it, you buy it.”

  • “Earth is the most interesting planet in the solar system.”

My answer: None of them are falsifiable claims about the nature of reality. They’re all closer to what one might call “opinions”. But what is an “opinion”, exactly?

There’s already been some discussion on Less Wrong about what exactly it means for a claim to be meaningful. This post focuses on the negative definition of meaning: what sort of statements do people make where the primary content of the statement is non-empirical? The idea here is similar to the idea behind anti-virus software: Even if you can’t rigorously describe what programs are safe to run on your computer, there still may be utility in keeping a database of programs that are known to be unsafe.

Why is it useful to be able to flag non-empirical claims? Well, for one thing, you can believe whatever you want about them! And it seems likely that this pattern-matching approach works better for flagging them than a more constructive definition.

But first, a bit on the philosophy of non-empirical claims.

Let’s take a typical opinion statement: “Justin Bieber sucks”. There are a few ways we could interpret this as shorthand for a different claim. For example, maybe what the speaker really means is “I prefer not to listen to Justin Bieber’s music.” (Preference claim.) Or maybe what the speaker really means is “Of the people who have heard songs by Justin Bieber, the majority prefer not to listen to his music.” (Empirical claim.)

I don’t think shorthand interpretations like these are accurate for most people who claim that JB sucks. Instead, I suspect most people who argue this are communicating some combination of (a) negative affect towards JB and (b) tribal affiliation with fellow JB haters. I’ve taken to referring to statements like these, that are neither preference claims nor empirical claims, as “attitude claims”.

This example doesn’t mean that all “X sucks” style claims are attitude claims. Take the claim “Windows sucks”. It does seem plausible that someone who said this could be persuaded that their claim was false through empirical evidence—e.g. by a meta-analysis that compared Windows worker productivity favorably to worker productivity using other operating systems.

So if someone says Windows sucks, then whether their claim is empirical, attitudinal, or (most likely) some mixture depends on what’s going on in their head. You may be able to classify the claim with further conversation, however. If they say “even if users are happiest and most productive using Windows, it still sucks!”, that suggests the claim is almost entirely attitudinal.
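The probing procedure above can be sketched as a toy decision function. This is a hypothetical illustration of my classification scheme, not anything rigorous; the function name and its two inputs are my own invention:

```python
# Toy sketch (hypothetical): classify an "X sucks"-style claim using the
# three-way scheme from this post, based on two conversational probes.

def classify_claim(updates_on_evidence: bool, reports_own_tastes: bool) -> str:
    """Classify a claim as empirical, preference, or attitude.

    updates_on_evidence: would the speaker abandon the claim given contrary
        empirical evidence (e.g. the hypothetical productivity meta-analysis)?
    reports_own_tastes: is the claim really shorthand for a report of the
        speaker's own preferences ("I prefer not to listen to X")?
    """
    if updates_on_evidence:
        return "empirical"
    if reports_own_tastes:
        return "preference"
    # Neither falsifiable nor a preference report: affect plus tribal signal.
    return "attitude"


# "Even if users are happiest and most productive using Windows, it still sucks!"
print(classify_claim(updates_on_evidence=False, reports_own_tastes=False))
# → attitude
```

In practice, of course, real claims are mixtures rather than clean categories, so treat this as a caricature of the conversational test, not a decision procedure you could actually run.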

Attitude claims taxonomy

I’ve been writing down attitude claims I think of or come across in my notebook. Here are some I’ve seen so far. Hopefully they’ll serve as good training data for your internal classifier.

Not all of the examples I’ve found fit neatly into one of these categories (e.g. “I can do anything I want”), and it’s pretty common to find claims that seem like mixtures of attitude and fact/​preference statements. For example, if someone says “Being outrageous is the best way to be”, are they saying “I prefer to be outrageous” or “Yay outrageousness”? Probably a bit of both.

What attitudes should I have?

That’s a “should” question, i.e. a question about social rules. Unless you meant it as a shorthand for a question about how best to achieve some goal, e.g. “What attitudes should I have in order to best achieve my preferences?” Then it becomes an empirical question.

I suspect that most people can better achieve their preferences by consciously choosing and adopting attitudes rather than going with whatever defaults they grew up with or are prevalent within their social group. Attitude hacking is not trivial, so you might want to find a friend to adopt your preferred attitude with. (This isn’t anti-epistemic groupthink as long as you’re doing this for attitudes only and not for facts.)

Are attitudes bad?

That’s an attitude question.

I think to best achieve your preferences, it’s likely optimal to take some attitudes seriously, e.g. Jon Kabat-Zinn: “as long as you are breathing, there is more right with you than there is wrong, no matter how ill or how hopeless you may feel”, or Eliezer Yudkowsky: “probability theory is also a kind of Authority and I try to be ruled by it as much as I can manage.”

Unfortunately, I haven’t managed to take attitude claims as seriously ever since I realized that they’re basically just made up. (Which is itself an attitude statement of the affect type, about the importance of attitudes.) But I’ve also felt more free to “cheat” and modify my attitudes directly in order to optimize for my preferences.

Will pointing out that social rules are social rules make people less likely to take them seriously? Probably. The ideas in this post are dangerous knowledge that shouldn’t be spread beyond rationalist circles.

If you’re like me, you may get kind of squeamish consuming attitude-heavy media (which is also produced by rationalists, by the way; see Paul Graham or Julia Galef). That’s an attitude.

Connection with Nonviolent Communication

Empirical claim: If you restrict yourself to empirical claims and preference claims when you have an argument, you and the people you argue with will be more pleased with the outcome of your arguments.

Nonviolent Communication is a philosophy that recommends replacing attitude claims like “You’re an awful neighbor” or “It’s your fault I can’t get to sleep” with empirical claims, preference claims, and requests: “Your music is playing very loudly (fact). I’m having a hard time sleeping (fact). I’d really like to be able to get to sleep (preference). Could you turn down the volume?” Presumably this works because (a) arguments over empirical claims are sometimes actually resolved and (b) if you share preferences instead of bludgeoning people with social rules, they’re more likely to empathize with you and do things to make you happy.

More thoughts

After crystallizing the fact/​attitude distinction, I started trying to apply self-skepticism to empirical claims only, and just ignoring attitude claims I didn’t like. (“That’s just, like, your opinion, man.”) Carefully considering uncomfortable empirical claims is a habit that will improve my model of the world, thereby helping me achieve my preferences. (That’s what it’s all about, right?) Carefully considering uncomfortable attitude claims, not so much, except maybe if they’re from people with whom I have valued relationships that I want to debug.

Does this post describe an attitude? I actually put it and other affect-free classification schemes into a fourth category: that of a “cognitive tool”, like a description of an algorithm, that you can take or leave as you wish.