Grasping Slippery Things

Followup to: Possibility and Could-ness, The Ultimate Source

Brandon Reinhart wrote:

I am “grunching.” Responding to the questions posted without reading your answer. Then I’ll read your answer and compare. I started reading your post on Friday and had to leave to attend a wedding before I had finished it, so I had a while to think about my answer.

Brandon, thanks for doing this. You’ve provided a valuable illustration of natural lines of thought. I hope you won’t be offended if, for educational purposes, I dissect it in fine detail. This sort of dissection is a procedure I followed with Marcello to teach thinking about AI, so no malice is intended.

Can you talk about “could” without using synonyms like “can” and “possible”?

When we speak of “could” we speak of the set of realizable worlds [A’] that follows from an initial starting world A operated on by a set of physical laws f.

(Emphases added.)

I didn’t list “realizable” explicitly as Tabooed, but it refers to the same concept as “could”. Rationalist’s Taboo isn’t played against a word list, it’s played against a concept list. The goal is to force yourself to reduce.

Because “follows” links two worlds, and the linkage is exactly what seems confusing, a word like “follows” is also dangerous.

Think of it as being like trying to pick up something very slippery. You have to prevent it from squeezing out of your hands. You have to prevent the mystery from scurrying away and finding a new dark corner to hide in, as soon as you flip on the lights.

So letting yourself use a word like “realizable”, or even “follows”, is giving your mind a tremendous opportunity to Pass the Recursive Buck—which anti-pattern, be it noted in fairness to Brandon, I hadn’t yet posted on.

If I were doing this on my own, and I didn’t know the solution yet, I would also be marking “initial”, “starting”, and “operated on”. Not necessarily at the highest priority, but just in case they were hiding the source of the confusion. If I were being even more careful I would mark “physical laws” and “world”.

So when we say “I could have turned left at the fork in the road,” “could” refers to the set of realizable worlds that follow from an initial starting world A in which we are faced with a fork in the road, given the set of physical laws. We are specifically identifying a sub-set of [A’]: that of the worlds in which we turned left.

One of the anti-patterns I see often in Artificial Intelligence, and I believe it is also common in philosophy, is inventing a logic that takes as a primitive something that you need to reduce to pieces.

To your mind’s eye, it seems like “could-ness” is a primitive feature of reality. There’s a natural temptation to describe the properties that “could-ness” seems to have, and make lists of things that are “could” or “not-could”. But this is, at best, a preliminary step toward reduction, and you should be aware that it is at best a preliminary step.

The goal is to see inside could-ness, not to develop a modal logic to manipulate primitive could-ness.

But seeing inside is difficult; there is no safe method you know you can use to see inside.

And developing a modal logic seems like it’s good for a publication, in philosophy. Or in AI, you manually preprogram a list of which things have could-ness, and then the program appears to reason about it. That’s good for a publication too.

This does not preclude us from making mistakes in our use of could. One might say “I could have turned left, turned right, or started a nuclear war.” The option “started a nuclear war” may simply not be within the set [A’]. It wasn’t physically realizable given all of the permutations that result from applying our physical laws to our starting world.

Your mind tends to bounce off the problem, and has to be constrained to face it—like your mind itself is the slippery thing that keeps squeezing out of your hands.

It tries to hide the mystery somewhere else, instead of taking it apart—drawing a line to another black box, releasing the tension of trying to look inside the first black box.

In your mind’s eye, it seems, you can see before you the many could-worlds that follow from one real world.

The real answer is to resolve a Mind Projection Fallacy; physics follows a single line, but your search system, in determining its best action, has to search through multiple options not knowing which it will make real, and all the options will be labeled as reachable in the search.
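To make that answer concrete, here is a minimal Python sketch, not taken from the post, of a forward search that labels options as reachable before it knows which one it will choose. The world-graph, state names, and utilities are all invented for illustration; the point is only that could-ness shows up as bookkeeping inside the planner, while the modeled physics runs a single line.

```python
# A toy forward search, with an invented world-graph, illustrating where
# could-ness lives: the planner enumerates successor states and labels them
# reachable *before* knowing which one it will make real. The world-model
# itself contains no "could" property, and only one option ever happens.

def successors(state):
    """Options the planner can enumerate from a state (the fork in the road)."""
    graph = {
        "at_fork": ["turned_left", "turned_right"],  # "started a nuclear war" never
        "turned_left": [],                           # receives the reachable label,
        "turned_right": [],                          # so it is not a "could" here
    }
    return graph[state]

def utility(state):
    """The planner's preferences over outcomes (arbitrary numbers for the sketch)."""
    return {"turned_left": 1.0, "turned_right": 0.3}.get(state, 0.0)

def choose_action(state):
    reachable = successors(state)        # <- this labeling step is the "could"
    return max(reachable, key=utility)   # judge the options, pick one to make real

print(choose_action("at_fork"))  # -> turned_left; only this branch is ever realized
```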

So, given that answer, you can see how talking about “physically realizable” and “permutations(?) that result from applying physical laws” is a bounce-off-the-problem, a mere-logic, that squeezes the same unpenetrated mystery into “realizable” and “permutations”.

If our physical laws contain no method for implementing free will and no randomness, [A’] contains only the single world that results from applying the set of physical laws to A. If there is randomness or free will, [A’] contains a broader collection of worlds that result from applying physical laws to A...where the mechanisms of free will or randomness are built into the physical laws.

Including a “mechanism of free will” into the model is a perfect case of Passing the Recursive Buck.

Think of it from the perspective of Artificial Intelligence. Suppose you were writing a computer program that would, if it heard a burglar alarm, conclude that the house had probably been robbed. Then someone says, “If there’s an earthquake, then you shouldn’t conclude the house was robbed.” This is a classic problem in Bayesian networks with a whole deep solution to it in terms of causal graphs and probability distributions… but suppose you didn’t know that.

You might draw a diagram for your brilliant new Artificial General Intelligence design, that had a “logical reasoning unit” as one box, and then a “context-dependent exception applier” in another box with an arrow to the first box.

So you would have convinced yourself that your brilliant plan for building AGI included a “context-dependent exception applier” mechanism. And you would not discover Bayesian networks, because you would have prematurely marked the mystery as known.
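For contrast, here is roughly what the real reduction buys you: a small hand-rolled sketch of the standard burglary/earthquake network, using illustrative probabilities of my own choosing, in which the “exception” is not a bolted-on module but simply falls out of conditioning on the earthquake.

```python
# A hand-rolled sketch of the classic burglary/earthquake network, with
# illustrative probabilities. Nothing here is a "context-dependent exception
# applier": learning of an earthquake lowers the probability of burglary
# purely through ordinary conditioning ("explaining away").

P_B = 0.001   # prior P(burglary)
P_E = 0.002   # prior P(earthquake)
P_ALARM = {   # P(alarm | burglary, earthquake)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}

def joint(b, e, a):
    """Joint probability of one full assignment to (burglary, earthquake, alarm)."""
    pb = P_B if b else 1 - P_B
    pe = P_E if e else 1 - P_E
    pa = P_ALARM[(b, e)] if a else 1 - P_ALARM[(b, e)]
    return pb * pe * pa

def posterior_burglary(earthquake_observed=None):
    """P(burglary | alarm), optionally also conditioning on the earthquake."""
    num = den = 0.0
    for b in (True, False):
        for e in (True, False):
            if earthquake_observed is not None and e != earthquake_observed:
                continue
            p = joint(b, e, a=True)   # the alarm is observed in every case
            den += p
            if b:
                num += p
    return num / den

print(posterior_burglary())                          # ~0.37: alarm suggests burglary
print(posterior_burglary(earthquake_observed=True))  # ~0.003: earthquake explains it away
```

With these made-up numbers, hearing the alarm raises the probability of burglary to roughly 0.37, and then learning of the earthquake collapses it to roughly 0.003; the “exception” was never a separate mechanism at all.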

I don’t mean “worlds” in the quantum mechanics sense, but as a metaphor for resultant states after applying some number of physical permutations to the starting reality.

“Permutations”? That would be… something that results in several worlds, all of which have the could-property? But where does the permuting come from? How does only one of the could-worlds become real, if it is a matter of physics? After you ask these questions you realize that you’re looking at the same problem as before, which means that saying “permutations” didn’t help reduce it.

Why can a machine practice free will? If free will is possible for humans, then it is a set of properties or functions of the physical laws (described by them, contained by them in some way) and a machine might then implement them in whatever fashion a human brain does. Free will would not be a characteristic of A or [A’], but of the process applied to A to reach a specific element of [A’].

Again, if you remember that the correct answer is “Forward search process that labels certain options as reachable before judging them and maximizing”, you can see the Mind Projection Fallacy on display in trying to put the could-ness property into basic physics.

So...I think I successfully avoided using reference to “might” or “probable” or other synonyms and closely related words.

Now I’ll read your post to see if I’m going the wrong way.

Afterward, Brandon posted:

Hmm. I think I was working in the right direction, but your procedural analogy let you get closer to the moving parts. But I think “reachability” as you used it and “realizable” as I used it (or was thinking of it) seem to be working along similar lines.

I hate to have to put it this way, because it seems harsh: but it’s important to realize that, no, this wasn’t working in the right direction.

Again to be fair, Marcello and I used to generate raw material like this on paper—but it was clearly labeled as raw material; the point was to keep banging our heads on opaque mysteries of cognition, until a split opened up that helped reduce the problem to smaller pieces, or looking at the same mystery from a different angle helped us get a grasp on at least its surface.

Nonetheless: Free will is a Confusing Problem. It is a comparatively lesser Confusing Problem but it is still a Confusing Problem. Confusing Problems are not like the cheap damn problems that college students are taught to solve using safe prepackaged methods. They are not even like the Difficult Problems that mathematicians tackle without knowing how to solve them. Even the simplest Confusing Problem can send generations of high-g philosophers wailing into the abyss. This is not high school homework, this is beisutsukai monastery homework.

So you have got to be extremely careful. And hold yourself, not to “high standards”, but to your best dream of perfection. Part of that is being very aware of how little progress you have made. Remember that one major reason why AIfolk and philosophers bounce off hard problems and create mere modal logics, is that they get a publication and the illusion of progress. They rewarded themselves too easily. If I sound harsh in my criticism, it’s because I’m trying to correct a problem of too much mercy.

They overestimated how much progress they had made, and of what kind. That’s why I’m not giving you credit for generating raw material that could be useful to you in pinning down the problem. If you’d said you were doing that, I would have given you credit.

I’m sure that some people have achieved insight by accident from their raw material, so that they moved from the illusion of progress to real progress. But that sort of thing cannot be left to accident. More often, the illusion of progress is fatal: your mind is happy, content, and no longer working on the difficult, scary, painful, opaque, not-sure-how-to-get-inside part of the mystery.

Generating lots of false starts and dissecting them is one methodology for working on an opaque problem. (Instantly deadly if you can’t detect false starts, of course.) Yet be careful not to credit yourself too much for trying! Do not pay yourself for labor, only results! To run away from a problem, or bounce off it into easier problems, or to convince yourself you have solved it with a black box, is common. To stick to the truly difficult part of a difficult problem, is rare. But do not congratulate yourself too much for this difficult feat of rationality; it is only the ante you pay to sit down at the high-stakes table, not a victory.

The only sign-of-success, as distinguished from a sign-of-working-hard, is getting closer to the moving parts.

And when you are finally unconfused, of course all the black boxes you invented earlier, will seem in retrospect to have been “driving in the general direction” of the truth then revealed inside them. But the goal is reduction, and only this counts as success; driving in a general direction is easy by comparison.

So you must cultivate a sharp and particular awareness of confusion, and know that your raw material and false starts are only raw material and false starts—though it’s not the sort of thing that funding agencies want to hear. Academia creates incentives against the necessary standard; you can only be harsh about your own progress, when you’ve just done something so spectacular that you can be sure people will smile at your downplaying and say, “What wonderful modesty!”

The ultimate slippery thing you must grasp firmly until you penetrate is your mind.