The Power of Positivist Thinking

Related to: No Logical Positivist I, Making Beliefs Pay Rent, How An Algorithm Feels From Inside, Disguised Queries

Call me non-conformist, call me one man against the world, but...I kinda like logical positivism.

The logical positivists were a dour, no-nonsense group of early 20th-century European philosophers. Indeed, the phrase “no-nonsense” seems almost invented to describe the Positivists. They liked nothing better than to reject the pet topics of other philosophers as being untestable and therefore meaningless. Is the true also the beautiful? Meaningless! Is there a destiny to the affairs of humankind? Meaningless! What is justice? Meaningless! Are rights inalienable? Meaningless!

Positivism became stricter and stricter, defining more and more things as meaningless, until someone finally pointed out that positivism itself was meaningless by the positivists’ definitions, at which point the entire system vanished in a puff of logic. Okay, it wasn’t that simple. It took several decades and Popper’s falsificationism to seal its coffin. But vanish it did. It remains one of the least lamented theories in the history of philosophy, because if there is one thing philosophers hate it’s people telling them they can’t argue about meaningless stuff.

But if we’ve learned anything from fantasy books, it is that any cabal of ancient wise men destroyed by their own hubris at the height of their glory must leave behind a single ridiculously powerful artifact, which in the right hands gains the power to dispel darkness and annihilate the forces of evil.

The positivists left us the idea of verifiability, and it’s time we started using it more.



Eliezer, in No Logical Positivist I, condemns the positivist notion of verifiability for excluding some perfectly meaningful propositions. For example, he says, it may be that a chocolate cake formed in the center of the sun on 8/1/2008, then disappeared after one second. This statement seems to be meaningful; that is, there seems to be a difference between it being true or false. But there’s no way to test it (at least without time machines and sundiver ships, which we can’t prove are possible) so the logical positivists would dismiss it as nonsense.

I am not an expert in logical positivism; I have two weeks studying positivism in an undergrad philosophy class under my belt, and little more. If Eliezer says that is how the positivists interpreted their verifiability criterion, I believe him. But it’s not the way I would have done things, if I’d been in 1930s Vienna. I would have said that any statement corresponding to a state of the material universe, reducible in theory to things like quarks and photons, testable by a being who has access to the machine running the universe[1] and who can check the logs at will—such a statement is meaningful[2]. In this case the chocolate cake example passes: it corresponds to a state of the material world, and is clearly visible on the universe’s logs. “Rights are inalienable” remains meaningless, however. At the risk of reinventing the wheel[3], I will call this interpretation “soft positivism”.

My positivism gets even softer, though. Consider the statement “Google is a successful company.” Though my knowledge of positivism is shaky, I believe that most positivists would reject this as meaningless; “success” is too fuzzy to be reduced to anything objective. But if positivism is true, it should add up to normality: we shouldn’t find that an obviously useful statement like “Google is a successful company” is total nonsense. I interpret the statement to mean certain objectively true propositions like “The average yearly growth rate for Google has been greater than the average yearly growth rate for the average company”, which itself reduces down to a question of how much money Google made each year, which is something that can be easily and objectively determined by anyone with the universe’s logs.

I’m not claiming that “Google is a successful company” has an absolute one-to-one identity with a statement about average growth rates. But the “successful company” statement is clearly allied with many testable statements. Average growth rate, average profits per year, change in the net worth of its founders, numbers of employees, et cetera. Two people arguing about whether Google was a successful company could in theory agree to create a formula that captures as much as possible of their own meaning of the word “successful”, apply that formula to Google, and see whether it passed. To say “Google is a successful company” reduces to “I’ll bet if we established a test for success, which we are not going to do, Google would pass it.”
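The “agree on a formula, then apply it” move can be sketched in code. Everything below is hypothetical: the thresholds, the input figures, and the choice of tests are invented purely to illustrate the procedure, not to say anything about any real company.

```python
# A hypothetical "success" formula that two debaters might agree on.
# All thresholds and inputs are made up for illustration.

def is_successful(avg_growth_rate, avg_annual_profit, employees,
                  min_growth=0.10, min_profit=0, min_employees=100):
    """Return True if the company passes every agreed-upon test."""
    return (avg_growth_rate >= min_growth
            and avg_annual_profit >= min_profit
            and employees >= min_employees)

# Once both debaters accept the formula and the underlying facts,
# they must get the same answer:
facts = dict(avg_growth_rate=0.25, avg_annual_profit=5_000_000, employees=2_000)
print(is_successful(**facts))
```

The point is not that this particular formula is right, but that once the debaters commit to *some* formula, the disagreement becomes a question of fact rather than of words.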

(Compare this to Eliezer’s meta-ethics, where he says “X is good” reduces to “I’ll bet if we calculated out this gigantic human morality computation, which we are not going to do, X would satisfy it.”)

This can be a very powerful method for resolving debates. I remember getting into an argument with my uncle, who believed that Obama’s election would hurt America because having a Democratic president is bad for the economy. We were doing the normal back and forth, him saying that Democrats raised taxes which discouraged growth, me saying that Democrats tended to be more economically responsible and less ideologically driven, and we both gave lots of examples and we never would have gotten anywhere if I hadn’t said “You know what? Can we both agree that this whole thing is basically asking whether average GDP growth is lower under Democratic than Republican presidents?” And he said “Yes, that’s pretty much what we’re arguing about.” So I went and got the GDP statistics, and sure enough growth was higher under Democrats, and he admitted I had a point[4].
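The reduction my uncle and I agreed on is mechanical enough to write down. The numbers below are placeholders, not real economic data; the sketch only shows the shape of the test we agreed to run.

```python
# Sketch of the agreed reduction: "Democratic presidents hurt the economy"
# becomes "average GDP growth is lower under Democratic presidents."
# The figures here are invented placeholders, not real statistics.

from statistics import mean

# (party, gdp_growth_percent) for a run of hypothetical presidential terms
terms = [("D", 3.1), ("R", 2.2), ("D", 2.8), ("R", 1.9), ("D", 3.4), ("R", 2.5)]

def avg_growth(party):
    """Average GDP growth across all terms held by the given party."""
    return mean(g for p, g in terms if p == party)

# The whole argument now collapses to one comparison:
print(avg_growth("D") > avg_growth("R"))
```

With the reduction fixed in advance, neither side gets to reinterpret the question after seeing the data.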

But people aren’t always as responsible as my uncle, and debates aren’t always reducible to anything as simple as GDP. Consider: Zahra approaches Aaron and says: “Islam is a religion of peace.”[5]

Perhaps Aaron disagrees with this statement. Perhaps he begins debating. There are many things he could say. He could recall all the instances of Islamic terrorism, he could recite seemingly violent verses from the Quran, he could appeal to wars throughout history that have involved Muslims. I’ve heard people try all of these.

And Zahra will respond to Aaron in the same vein. She will recite Quranic verses praising peace, and talk about all the peaceful Muslims who never engage in terrorism at all, and all of the wars started by Christians in which Muslims were innocent victims. I have heard all these too.

Then Paula the Positivist comes by. “Hey,” she says, “we should reduce this statement to testable propositions, and then there will be no room for disagreement.”

But maybe, if asked to estimate the percentage of Muslims who are active in terrorist groups, Aaron and Zahra will give the exact same number. Perhaps they are both equally aware of all the wars in history in which Muslims were either aggressors or peacemakers. They may both have the entire Quran memorized and be fully aware of all appropriate verses. But even after Paula has checked to make sure they agree on every actual real world fact, there is no guarantee that they will agree on whether Islam is a religion of peace or not.

What if we ask Aaron and Zahra to reduce “Islam is a religion of peace” to an empirical proposition? In the best case, they will agree on something easy, like “Muslims on average don’t commit any more violent crimes than non-Muslims.” Then you just go find some crime statistics and the problem is solved. In the second-best case, the two of them reduce it to completely different statements, like “No Muslim has ever committed a violent act” versus “Not all Muslims are violent people.” This is still a resolution to the argument; both Aaron and Zahra may agree that the first proposition is false and the second proposition is true, and they both agree the original statement was too vague to go around professing.

In the worst-case scenario, they refuse to reduce the statement at all, or they deliberately reduce it to something untestable, or they reduce it to two different propositions but are outraged that their opponent is using a different proposition than they are and think their opponent’s proposition is clearly not equivalent to the original statement.

How are they continuing to disagree, when they agree on all of the relevant empirical facts and they fully understand the concept of reducing a proposition?

In How An Algorithm Feels From Inside, Eliezer writes about disagreement on definitions. “We know where Pluto is, and where it’s going; we know Pluto’s shape, and Pluto’s mass—but is it a planet?” The question, he says, is meaningless. It’s a spandrel from our cognitive algorithm, which works more efficiently if it assigns a separate central variable is_a_planet apart from all the actual tests that determine whether something is a planet or not.

Aaron and Zahra seem to be making the same sort of mistake. They have a separate variable is_a_religion_of_peace that’s sitting there completely separate from all of the things you might normally use to decide whether one group of people is generally more violent than another.

But things get much worse than they do in the Pluto problem. Whether or not Pluto is a planet feels like a factual issue, but turns out to be underdetermined by the facts. Whether or not Islam is a religion of peace feels like a factual issue, but is really a false front for a whole horde of beliefs that have no relationship to the facts at all.

When Zahra says “Islam is a religion of peace,” she is very likely saying something along the lines of “I like Islam!” or “I like tolerance!” or “I identify with an in-group who say things like ‘Islam is a religion of peace’” or “People who hate Islam are mean!” or even “I don’t like Republicans.” She may be covertly pushing policy decisions like “End the war on terror” or “Raise awareness of unfair discrimination against Muslims.”

When Aaron says “Islam is not a religion of peace,” he is probably saying something like “I don’t like Islam,” or “I think excessive tolerance is harmful”, or “I identify with an in-group who would never say things like ‘Islam is a religion of peace’” or even “I don’t like Democrats.” He may be covertly pushing policy decisions like “Continue the war on terror” or “Expel radical Muslims from society.”

Eliezer’s solution to the Pluto problem is to uncover the disguised query that made you care in the first place. If you want to know whether Pluto is spherical under its own gravity, then without worrying about the planet issue you can simply answer yes. And if you’re wondering whether to worry about your co-worker Abdullah bombing your office, you can simply answer no. Islam is peaceful enough for your purposes.

But although uncovering the disguised query is a complete answer to the Pluto problem, it’s only a partial answer to the religion of peace problem. It’s unlikely that someone is going to misuse the definition of Pluto as a planet or an asteroid to completely misunderstand what Pluto is or what it’s likely to do (although it can happen). But the entire point of caring about the “Islam is a religion of peace” issue is so you can misuse it as much as possible.

Israel is evil, because it opposes Muslims, and Islam is a religion of peace. The Democrats are tolerating Islam, and Islam is not a religion of peace, so the Democrats must have sold out the country. The War on Terror is racist, because Islam is a religion of peace. We need to ban headscarves in our schools, because Islam is not a religion of peace.

I’m not sure how the chain of causation goes here. It could be (emotional attitude to Islam) → (Islam [is/isn’t] a religion of peace) → (poorly supported beliefs about Islam). Or it could just be (emotional attitude to Islam) → (poorly supported beliefs about Islam). But even in the second case, the claim that “Islam [is/isn’t] a religion of peace” gives the poorly supported beliefs a dignity that they would not otherwise have, and allows the person who holds them to justify themselves in an argument. Basically, that one phrase holes itself up in your brain and takes pot shots at any train of thought that passes by.

The presence of that extra is_a_religion_of_peace variable is not a benign feature of your cognitive process anymore. It’s a malevolent mental smuggler transporting prejudices and strong emotions into seemingly reasonable thought processes.

Which brings us back to soft positivism. If we find ourselves debating statements that we refuse to reduce to empirical data[6], or using statements in ways their reductions don’t justify, we need to be extremely careful. I am not positivist enough to say we should never be doing it. But I think it raises one heck of a red flag.

Agree with me? If so, which of the following statements do you think are reducible, and how would you begin reducing them? Which are completely meaningless and need to be scrapped? Which ones raise a red flag but you’d keep them anyway?

1. All men are created equal.
2. The lottery is a waste of hope.
3. Religious people are intolerant.
4. Government is not the solution; government is the problem.
5. George Washington was a better president than James Buchanan.
6. The economy is doing worse today than it was ten years ago.
7. God exists.
8. One impulse from a vernal wood can teach you more of man, of moral evil, and of good than all the sages can.
9. Imagination is more important than knowledge.
10. Rationalists should win.

Footnotes:

1: More properly the machine running the multiverse, since this would allow counterfactuals to be meaningful. It would also simplify making a statement like “The patient survived because of the medicine”, since it would allow quick comparison of worlds where the patient did and didn’t receive it. But if the machine is running the multiverse, where’s the machine?

2: One thing I learned from the comments on Eliezer’s post is that this criterion is often very hard to apply in theory. However, it’s usually not nearly as hard in practice.

3: This sounds like the sort of thing there should already be a name for, but I don’t know what it is. Verificationism is too broad, and empiricism is something else. I should point out that I am probably misrepresenting the positivist position here quite badly, and that several dead Austrians are either spinning in their graves or (more likely) thinking that this whole essay is meaningless. I am using “positivist” only as a pointer to a certain style of thinking.

4: Before this issue dominates the comments thread: yes, I realize that the president having any impact on the economy is highly debatable, that there’s not nearly enough data here to make a generalization, et cetera. But my uncle’s statement (that Democratic presidents hurt the economy) is clearly not supported.

5: If your interpretation of anything in the following example offends you, please don’t interpret it that way.

6: Where morality fits into this deserves a separate post.