This Territory Does Not Exist

Response to: Making Beliefs Pay Rent (in Anticipated Experiences), Belief in the Implied Invisible, and No Logical Positivist I

I recently decided that some form of strong verificationism is correct: that beliefs that don't constrain expectation are meaningless (with some caveats). After reaching this conclusion, I went back and read EY's posts on the topic, and found that they didn't really address the strong version of the argument. This post consists of two parts: first, the positive case for verificationism; second, a response to EY's arguments against it.

The Case for Strong Verificationism

Suppose I describe a world to you. I explain how the physics works, I tell you some stories about what happens in that world. I then make the dual assertions that:

1. It’s impossible to reach that world from ours, it’s entirely causally disconnected, and

2. That world “really exists”

One consequence of verificationism is that, if 1 is correct, then 2 is meaningless. Why? For one, it's not clear what 2 even means, and every alternative phrasing suffers from similarly vague terminology. I've tried, and asked several others to try, and nobody has been able to give a definition of what it means for something to "really exist", apart from expectations, that actually clarifies the question.

Another way to see this is through the map-territory distinction. "X really exists" is a claim made by the map, yet it purports to be about the territory itself. That is a category error, and so the claim is meaningless.

Now, consider our world. Again, I describe its physics to you, and then assert "This really exists." If you found the above counterintuitive, this will be even worse, but I assert that this latter claim is also meaningless. The belief that this world exists does not constrain expectations beyond those of an otherwise-identical map that lacks such a belief. In other words, we can have beliefs about physics that don't entail a belief in "actual existence"; such a claim is not required for any prediction, and is therefore extraneous and meaningless.

As far as I can tell, we can do science just as well without assuming that there’s a real territory out there somewhere.

Some caveats: I recognize that some critiques of verificationism relate to mathematical or logical beliefs. I'm willing to restrict the set of statements I consider incoherent to ones that make claims about what "actually exists", which avoids this problem. Also, following this paradigm, one ends up with many statements of the form "I expect to experience events based on a model containing X", and I'm OK with a colloquial usage of "exist" that shortens this to "X exists". But when you get into specific claims about what "really exists", I think you get into incoherency.

Response to EY sequence

In Making Beliefs Pay Rent, he asserts the opposite without argument:

But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there.

He then elaborates:

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?
To answer precisely, you must use beliefs like Earth’s gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional.

I disagree with the last sentence. These beliefs are ways of saying "I expect my experiences to be consistent with my map, which says g = 9.8 m/s^2 and that this building is 120 meters tall". Perhaps the beliefs are a compound of the above and also "my map represents an actual world", but, as I've argued, the latter is both incoherent and useless for predicting experiences.
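The cash value of those beliefs can be made concrete. Here's a minimal sketch of the prediction they license; note that the speed of sound is my added assumption, since the quoted example doesn't specify how the crash reaches you:

```python
import math

# Numbers from the quoted example; the speed of sound is my own added
# assumption (the original just asks "which tick"), included so the
# prediction covers hearing the crash, not just the impact.
g = 9.8          # m/s^2, from "Earth's gravity is 9.8 meters per second per second"
h = 120.0        # m, from "this building is around 120 meters tall"
v_sound = 343.0  # m/s, speed of sound in air at ~20 C (assumed)

t_fall = math.sqrt(2 * h / g)  # free-fall time to the ground
t_sound = h / v_sound          # time for the sound to travel back up
t_total = t_fall + t_sound

print(f"fall {t_fall:.2f} s + sound {t_sound:.2f} s = {t_total:.2f} s")
```

The map's answer is that you hear the crash roughly 5.3 seconds after release; the belief "g = 9.8 m/s^2" cashes out entirely in this kind of anticipated experience.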

In Belief in the Implied Invisible, he begins an actual argument for this position, which is continued in No Logical Positivist I. He mostly argues that such things actually exist. Note that I’m not arguing that they don’t exist, but that the question of whether they exist is meaningless—so his arguments don’t directly apply, but I will address them.

If the expansion of the universe is accelerating, as current cosmology holds, there will come a future point where I don’t expect to be able to interact with the photon even in principle—a future time beyond which I don’t expect the photon’s future light cone to intercept my world-line. Even if an alien species captured the photon and rushed back to tell us, they couldn’t travel fast enough to make up for the accelerating expansion of the universe.
Should I believe that, in the moment where I can no longer interact with it even in principle, the photon disappears?
No.
It would violate Conservation of Energy. And the second law of thermodynamics. And just about every other law of physics. And probably the Three Laws of Robotics. It would imply the photon knows I care about it and knows exactly when to disappear.
It’s a silly idea.

As above, my claim is not that the photon disappears. That would indeed be a silly idea. My claim is that the very assertion that the photon "exists" is meaningless. We have a map that makes predictions. The map contains a photon, and it contains that photon even outside any region relevant to predictions, but why should I care? The map is for making predictions, not for ontology.

Later on, he mentions Solomonoff induction, which is somewhat ironic, because Solomonoff induction is explicitly a model for prediction. Moreover, the predictions it produces are a weighted average over many different machines, "containing" many different entities. The map of Solomonoff induction, in other words, contains far more entities than anyone but Max Tegmark believes in. If we're to take that seriously, then we should just agree that everything mathematically possible exists. I have much less disagreement with that claim (despite also thinking it's incoherent) than with claims that some subset of that multiverse is "real" and the rest is "unreal".
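Since Solomonoff induction itself is uncomputable, here's a toy sketch of the mixture-of-machines idea it formalizes. The three named hypotheses, their priors, and the deterministic-likelihood simplification are all illustrative assumptions of mine, standing in for "all programs, weighted by 2^-(program length)":

```python
# Toy Bayesian mixture predictor in the spirit of Solomonoff induction.
# Real Solomonoff induction averages over all programs, weighted by
# 2^-(program length); these three hypotheses and priors are stand-ins.

hypotheses = {
    "all zeros":   (lambda i: 0, 0.50),
    "all ones":    (lambda i: 1, 0.25),
    "alternating": (lambda i: i % 2, 0.25),
}

def predict(observed):
    """Probability that the next bit is 1, averaged over the mixture."""
    # Deterministic hypotheses: likelihood of the data is 1 if the
    # hypothesis reproduces every observed bit, else 0.
    posterior = {
        name: (prior if all(h(i) == bit for i, bit in enumerate(observed)) else 0.0)
        for name, (h, prior) in hypotheses.items()
    }
    total = sum(posterior.values())
    n = len(observed)  # index of the next, unobserved bit
    return sum(w * hypotheses[name][0](n) for name, w in posterior.items()) / total

print(predict([0, 1, 0]))  # only "alternating" survives, so it predicts 1.0
```

The point relevant here: the prediction is an average over many "machines", each of whose internal entities the map "contains" without any of them needing to "really exist".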

If you suppose that the photon disappears when you are no longer looking at it, this is an additional law in your model of the universe.

I don’t suppose that. I suppose that the concept of a photon actually existing is meaningless and irrelevant to the model.

When you believe that the photon goes on existing as it wings out to infinity, you’re not believing that as an additional fact.
What you believe (assign probability to) is a set of simple equations; you believe these equations describe the universe.

This latter belief is an "additional fact": believing that the equations describe the universe is strictly more complicated than believing that the equations describe my expectations, and the extra content is exactly the existence claim at issue.

To make it clear why you would sometimes want to think about implied invisibles, suppose you’re going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster. By the time the spaceship gets there and sets up a colony, the universe’s expansion will have accelerated too much for them to ever send a message back. Do you deem it worth the purely altruistic effort to set up this colony, for the sake of all the people who will live there and be happy? Or do you think the spaceship blips out of existence before it gets there? This could be a very real question at some point.

This is a tough question, if only because altruism is complicated to ground on my view: if claims about other people's existence are meaningless, in what sense can it be good to do things that benefit other people? I suspect it all adds up to normality. Regardless, I'll note that the question applies on my view just as much to a local altruistic act, since the question of whether other people have internal experiences would be equally incoherent. If it adds up to normality there, which I believe it does, then it should present no problem for the spaceship question either. I'll also note that altruism is hard to ground regardless; it's not as though there's a great argument for altruism waiting if only we conceded that verificationism is wrong.

Now for No Logical Positivist I.

This is the first post that directly addresses verificationism on its own terms. He defines it in a way similar to my own view. Unfortunately, his main argument seems to be “the map is so pretty, it must reflect the territory.” It’s replete with map-territory confusion:

By talking about the unseen causes of visible events, it is often possible for me to compress the description of visible events. By talking about atoms, I can compress the description of the chemical reactions I’ve observed.

Sure, but a simpler map implies nothing about the territory.

Further on:

If logical positivism /​ verificationism were true, then the assertion of the spaceship’s continued existence would be necessarily meaningless, because it has no experimental consequences distinct from its nonexistence. I don’t see how this is compatible with a correspondence theory of truth.

Sure, it's incompatible with the claim that beliefs are true if they correspond to some "actual reality" out there. But that's not an argument for the meaningfulness of the assertion, because no argument is given for the correspondence theory of truth. (The link is dead; the essay now lives at https://yudkowsky.net/rational/the-simple-truth/, and it grounds truth with a parable about sheep.) We can ground truth just as well as follows: a belief is a statement with implications for predicted experiences, and a belief is true insofar as those experiences end up happening. None of this requires the additional assumption that there's an "actual reality".

Interestingly, in that post he offers a quasi-definition of “reality” that’s worth addressing separately.

“Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”

Here, reality is merely a convenient term to use, which helps conceptualize errors in the map. This doesn’t imply that reality exists, nor that reality as a concept is coherent. I have beliefs. Sometimes these beliefs are wrong, i.e. I experience things that are inconsistent with those beliefs. On my terms, if we want to use the word reality to refer to a set of beliefs that would never result in such inconsistency, that’s fine, and those beliefs would never be wrong. You could say that a particular belief “reflects reality” insofar as it’s part of that set of beliefs that are never wrong. But if you wanted to say “I believe that electrons really exist”, that would be meaningless—it’s just “I believe that this belief is never wrong”, which is just equal to “I believe this”.

Moving back to the Logical Positivism post:

A great many untestable beliefs are not meaningless; they are meaningful, just almost certainly false: They talk about general concepts already linked to experience, like Suns and chocolate cake, and general frameworks for combining them, like space and time. New instances of the concepts are asserted to be arranged in such a way as to produce no new experiences (chocolate cake suddenly forms in the center of the Sun, then dissolves). But without that specific supporting evidence, the prior probability is likely to come out pretty damn small—at least if the untestable statement is at all exceptional.
If "chocolate cake in the center of the Sun" is untestable, then its alternative, "hydrogen, helium, and some other stuff, in the center of the Sun at 12am on 8/8/1", would also seem to be "untestable": hydrogen-helium on 8/8/1 cannot be experientially discriminated against the alternative hypothesis of chocolate cake. But the hydrogen-helium assertion is a deductive consequence of general beliefs themselves well-supported by experience. It is meaningful, untestable (against certain particular alternatives), and probably true.

Again, the hydrogen-helium assertion is a feature of the map, not the territory. One could just as easily have a map that doesn’t make that assertion, but has all the same predictions. The question of “which map is real” is a map-territory confusion, and meaningless.

I don’t think our discourse about the causes of experience has to treat them strictly in terms of experience. That would make discussion of an electron a very tedious affair. The whole point of talking about causes is that they can be simpler than direct descriptions of experience.

Sure, as I mentioned above, I'm perfectly fine with colloquial use of words like "exist" to make discussion and communication easier. But that's not at all the same as admitting that the claim that electrons "exist" is coherent, rather than a convenient shorthand that avoids appending experiential qualifiers to every statement.