Sapient Algorithms

I notice my mind runs lots of cached programs. Like “walk”, “put away the dishes”, “drive home”, “go to the bathroom”, “check phone”, etc.

Most of these can run “on autopilot”. I don’t know how to define that formally. But I’m talking about how, e.g., I can start driving and get lost in thought and suddenly discover I’m back home — sometimes even if that wasn’t where I was trying to go!

But some programs cannot run on autopilot. The algorithm has something like a “summon sapience” step in it. Even if the algorithm got activated on autopilot, that step turns the autopilot off.

When I look at the examples of sapient algorithms that I run, I notice they have a neat kind of auto-generalizing quality. I have some reason to think that property is general. It’s the opposite of how, e.g., setting up website blockers can cause my fingers to learn, on autopilot, how to bypass them.

I’ll try to illustrate what I mean via examples.

Example: Look at my car keys

I got tired of risking locking my keys in my car. So I started making a habit of looking at my keys before closing the door.

Once, right after I’d closed the locked car door, I realized I’d looked at the phone in my hand, not my keys, and shut the door anyway. Luckily the key was in my pocket. But I noticed that this autopilot program just wasn’t helping.

So I modified it (as a TAP, a trigger-action plan): If I was about to close the car door, I would look at my hand, turn on consciousness, and check whether I was actually looking at my keys.
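
If it helps to see the programming metaphor spelled out, here’s a tiny sketch of the difference, in Python. To be clear, this is purely illustrative: every name in it is invented, and it’s a cartoon of the metaphor rather than a claim about how minds actually work.

    # A cartoon of the metaphor. All names are invented for illustration.

    def glance_at_hand() -> str:
        # Whatever the autopilot happens to see in my hand.
        return "phone"

    def summon_sapience() -> None:
        # The step that suspends autopilot and hands control to awareness.
        print("(autopilot off; conscious attention takes over)")

    def close_door_on_autopilot() -> None:
        # An ordinary cached program: every step runs without awareness.
        glance_at_hand()
        print("door shut")

    def close_door_sapiently() -> None:
        # The modified TAP: the trigger still fires on autopilot, but one
        # step wakes me up before the irreversible part happens.
        item = glance_at_hand()
        summon_sapience()
        if item != "keys":
            print("stop: those aren't my keys")
            return
        print("door shut")

    close_door_on_autopilot()  # shuts the door no matter what's in hand
    close_door_sapiently()     # catches the missing keys

The only interesting thing in the sketch is where summon_sapience() sits: inside the routine itself, so the routine is what wakes me up.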

First, that TAP just worked. To this day I still do this when stepping out of a car.

Second, it generalized without my trying to:

  • After a while it would fire whenever I was about to close any locked door.

  • It then generalized to anyone I was with. If they were about to close a locked door, I would sort of “pop awake” with a mental question about whether someone had the key.

  • It then generalized even more. It now fires when I’m, say, preparing for international travel. Crossing a border feels a bit like going through a door that locks behind me. So now I “wake up” and check that I and my travel companions all have passports. (I usually checked anyway, but now it’s specifically this mental algorithm that summons sapience, which makes the check reliable instead of one more thing to remember.)

This generalization wasn’t intentional. But it’s been really good. I haven’t noticed any problems at all from this program sort of spreading on its own.

Example: Taste my food

When I’m in autopilot mode while eating, it can feel at the end like my food kind of vanished. Like I wasn’t there for the meal.

So I installed a TAP: If I’m about to put food in my mouth, pause & remember emptiness.

“Remember emptiness” has a “summon sapience” type move embedded in it. It’s something like “Turn on consciousness, pause, and really look at my sensory input.” It’s quite a bit deeper than that, but if this kind of emptiness just sounds like gobbledegook to you then you can pretend I said the simplified version.

In this case, the TAP itself didn’t install as cleanly as with the car keys example. Sometimes I just forget. Sometimes the TAP fires only after I’ve taken my first bite.

But all the same, the algorithm still sort of auto-generalized. When I’m viewing a beautiful vista, or am part of a touching conversation, or hear some lovely music, the TAP sometimes fires (about as regularly as with food). One moment there are standard programs running, and then all of a sudden “I’m there” and am actually being touched by whatever it is (the same way I’m actually tasting my food when I’m “there”).

Yesterday I noticed this sapient algorithm booting up in a conversation. Someone asked me “Can I speak plainly?” and I knew she was about to say something I’d find challenging to receive. My autopilot started to say “Yes” with my mouth. But at the same time I noticed I was about to take something in, which caused me to pause and remember emptiness. From there I could check whether I was actually well-resourced enough to want to hear what she had to say. My “No” became accessible.

I’ve noticed this kind of calm care for my real boundaries happening more and more. I think this is due in part to this sapient algorithm auto-generalizing. I’m willing not to rush when I’m about to take things in.

Example: Ending caffeine addiction

When I first tried to break my caffeine addiction, I did so with rules and force of will. I just stopped drinking coffee and gritted my teeth through the withdrawal symptoms.

…and then I got hooked back on coffee again a few months later. After I “wasn’t addicted” (meaning not chemically dependent) anymore.

What actually worked was a sapient algorithm: When I notice an urge to caffeinate, I turn on consciousness and look at the sensational cause of the urge.

Best as I can tell, addictions are when the autopilot tries to keep the user from experiencing something, but in a way that doesn’t address the cause of said something.

By injecting some sapient code into the autopilot’s distraction routine, I dissolve the whole point of the routine by addressing the root cause.

For this sapient algorithm to work, I had to face a lot of emotional discomfort. It’s not just “caffeine withdrawal feels bad”. It’s that it would kick up feelings of inadequacy and of not being alert enough to feel socially safe. It related to feeling guilty about not being productive enough. I had to be willing to replace the autopilot’s “feel bad --> distract” routine with “feel bad --> turn on consciousness --> experience the bad feelings”.
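
In the same illustrative spirit as before (again, every name here is invented), the swap amounts to replacing the routine’s handler:

    # Purely illustrative; the names are invented.

    def distract() -> None:
        print("reach for coffee")

    def experience(feeling: str) -> None:
        # The injected "sapient code": turn on consciousness and attend
        # to the feeling instead of routing around it.
        print("(autopilot off) noticing and feeling: " + feeling)

    def old_routine(feeling: str) -> None:
        # feel bad --> distract. The feeling itself is never examined,
        # which is why this routine can never resolve it.
        distract()

    def new_routine(feeling: str) -> None:
        # feel bad --> turn on consciousness --> experience the bad feelings
        experience(feeling)

    old_routine("inadequacy")  # the cause goes unaddressed
    new_routine("inadequacy")  # the cause gets met directly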

But now I don’t mind caffeine withdrawal. I don’t prefer it! But I’m not concerned. If I need a pick-me-up, I’m acutely aware of the cost the next day, but I consciously just pay it. There’s no struggle. I’m not using it to avoid certain emotional states anymore.

And this sapient algorithm also auto-generalizes. When I notice an addictive urge to (say) check social media, I kind of wake up a little and notice myself wanting to ask “What am I feeling in my body right now?” Checking social media is a way, way more complex addiction than coffee was; I have several reasons for checking Facebook and Twitter that have nothing to do with avoiding internal sensations. But the algorithm seems to be getting more intelligent about this: the urge itself now wakes me up and has me check what the nature of the urge is. I might have thought to install that as a TAP, but I didn’t have to. It sort of installed itself.

Auto-generalization

So what’s up with the auto-generalization?

I honestly don’t know.

That said, my impression is that it comes from the same thing that makes addictions tricky to break: the autopilot seems able to adapt in some fashion.

I remember encountering a kind of internal arms race with setting up website blockers. I added a plugin, and my fingers got used to keyboard shortcuts that would turn off the plugin. I disabled the shortcuts, and then I noticed myself opening up a new browser. I added similar plugins to all my browsers… and I started pulling out my phone.

It’s like there’s some genie in me following a command with no end condition.

But with sapient algorithms, the genie summons me and has me take over, often with a suggestion about what I might want to attend to. (“Consider checking that your keys are actually in your hand, sir.”)

In theory a sapient algorithm could over-generalize and summon me when I don’t want to be there. Jamming certain flow states.

I did encounter something like this once: I was deep into meditative practices while in math graduate school. My meditations focused on mental silence. At one point I sat down to work on some of the math problems we’d been given… and I was too present to think. My mind just wouldn’t produce any thoughts! I could understand the problem just fine, but I couldn’t find the mental machinery for working on the problems. I was just utterly at peace staring at the detailed texture of the paper the problems were written on.

But in practice I find I have to try to over-generalize sapient algorithms this way. When they generalize themselves from simple use cases, they’re almost always… nice. Convenient. Helpful!

At least best as I can tell.