Thoughts on the frame problem and moral symbol grounding

(some thoughts on frames, grounding symbols, and Cyc)

The frame problem is a problem in AI concerning all the variables whose behaviour isn't explicitly specified by the logical formalism: what happens to them? To illustrate, consider the Yale Shooting Problem: a person is going to be shot with a gun at time 2. If that gun is loaded, the person dies. The gun will get loaded at time 1. Formally, the system is:

  • alive(0) (the person is alive to start with)

  • ¬loaded(0) (the gun begins unloaded)

  • true → loaded(1) (the gun will get loaded at time 1)

  • loaded(2) → ¬alive(3) (the person will get killed if shot with a loaded gun)

So the question is, does the person actually die? It would seem blindingly obvious that they do, but that isn't formally clear: we know the gun was loaded at time 1, but was it still loaded at time 2? Again, this seems blindingly obvious, but that's because of the words, not the formalism. Ignore the descriptions in brackets, and ignore the suggestive names of the LISP tokens.
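
To see what the formalism alone licenses, here is a minimal sketch (my own illustration, not something from the original discussion) that encodes the four statements as facts and rules over (predicate, time) atoms and forward-chains over them. As far as the code is concerned, the names are just labels.

```python
# Minimal propositional forward-chaining over (predicate, time) atoms.
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# The Yale Shooting Problem, exactly as listed above.
facts = {("alive", 0), ("not_loaded", 0)}   # ¬loaded(0) represented as its own atom
rules = [
    ((), ("loaded", 1)),                   # true -> loaded(1)
    ((("loaded", 2),), ("not_alive", 3)),  # loaded(2) -> ¬alive(3)
]

derived = forward_chain(facts, rules)
print(("loaded", 2) in derived)      # False: nothing says the gun stays loaded
print(("not_alive", 3) in derived)   # False: the death is not derivable
```

The "blindingly obvious" conclusion simply isn't there until something bridges the gap between loaded(1) and loaded(2).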

Since ignoring the names is hard to do, consider another example. Alicorn hates surprises: they make her feel unhappy. Let’s say that we decompose time into days, and that a surprise one day will ruin her next day. Then we have a system:

  • happy(0) (Alicorn starts out happy)

  • ¬surprise(0) (nobody is going to surprise her on day 0)

  • true → surprise(1) (somebody is going to surprise her on day 1)

  • surprise(2) → ¬happy(3) (if someone surprises her on day 2, she’ll be unhappy the next day)

So here, is Alicorn unhappy on day 3? Well, it seems unlikely—unless someone coincidentally surprised her on day 2. And there’s no reason to think that would happen! So, “obviously”, she’s not unhappy on day 3.

Except… the two problems are formally identical: just replace “alive” with “happy” and “loaded” with “surprise”. Our semantic understanding tells us that loaded(1) → loaded(2) (guns don’t just unload themselves) but that ¬(surprise(1) → surprise(2)) (being surprised one day doesn’t mean you’ll be surprised the next); yet we can’t tell any of this from the symbols.
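
Continuing the sketch above (reusing its forward_chain helper and the shooting facts and rules), here is one way to see this, again as my own illustration: add the frame axiom loaded(i) → loaded(i+1) and the death becomes derivable; then mechanically carry the same axiom over to the renamed system and the formalism “proves” that Alicorn is unhappy on day 3.

```python
# Frame axiom: the gun stays loaded unless something unloads it.
frame = [((("loaded", i),), ("loaded", i + 1)) for i in range(3)]
print(("not_alive", 3) in forward_chain(facts, rules + frame))  # True: now the person dies

# The formally identical system: alive -> happy, loaded -> surprise.
facts2 = {("happy", 0), ("not_surprise", 0)}
rules2 = [
    ((), ("surprise", 1)),                   # true -> surprise(1)
    ((("surprise", 2),), ("not_happy", 3)),  # surprise(2) -> ¬happy(3)
]
# Mechanically copying the frame axiom across "proves" she is unhappy:
frame2 = [((("surprise", i),), ("surprise", i + 1)) for i in range(3)]
print(("not_happy", 3) in forward_chain(facts2, rules2 + frame2))  # True
# Only our semantic understanding of "loaded" and "surprise" tells us which
# of the two frame axioms actually belongs in its system.
```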

And we haven’t touched on all the other problems with the symbolic setup. For instance, what happens to “alive” at any time other than 0 and 3? Does it change from moment to moment? If we want the words to behave the way we want, we need to put in a lot of extra logical conditions, so that all our intuitions are captured.

This shows that there’s a connection between the frame problem and symbol grounding. If we and the AI both understand what the symbols mean, then we don’t need to specify all the conditionals; we can simply deduce them if asked (“yes, if the person is dead at 3, they’re also dead at 4”). But conversely, if we pile on a huge amount of logical conditions, then there is less and less that the symbols could actually mean. The more structure we put into our logic, the fewer structures there are in the real world that fit it (“X(i) → X(i+1)” is something that can apply to being dead, but not to being happy, for instance).
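
As a toy illustration of that narrowing (my own construction, with made-up truth-value trajectories), we can treat each candidate meaning of X as a sample trajectory over time and check which candidates are even consistent with the extra axiom X(i) → X(i+1):

```python
# Candidate real-world meanings for X, each as a made-up five-step trajectory.
candidates = {
    "dead":      [False, False, True, True, True],    # once true, stays true
    "loaded":    [False, True, True, False, False],   # can be emptied again
    "happy":     [True, False, True, True, False],    # fluctuates freely
    "surprised": [False, True, False, False, False],  # rarely lasts a day
}

def satisfies_persistence(trajectory):
    """Does the trajectory satisfy X(i) -> X(i+1) at every step?"""
    return all((not now) or later
               for now, later in zip(trajectory, trajectory[1:]))

for name, trajectory in candidates.items():
    print(name, satisfies_persistence(trajectory))
# Only "dead" survives: each extra axiom rules out more of the
# interpretations that X could have had.
```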

This suggests a possible use for the Cyc project, the quixotic attempt to build an AI by formalising all of common sense (“Bill Clinton belongs to the collection of U.S. presidents” and “all trees are plants”). You’re very unlikely to get an AI through that approach, but it might be possible to train an already existing AI with it. Especially if the AI had some symbol grounding, there might not be all that many structures in the real world that could correspond to that mass of logical relations. Some symbol grounding + Cyc + the internet, and suddenly there aren’t that many possible interpretations of “Bill Clinton was stuck up a tree”. The main question, of course, is whether there is a similarly restricted meaning for “this human is enjoying a worthwhile life”.

Do I think that’s likely to work? No. But it’s maybe worth investigating. And it might be a way of getting across ontological crises: you reconstruct a model as close as you can to your old one, in the new formalism.