You are an optimizer. Act like it!

In the long run, optimizers win. So act like an optimizer.

Optimizers use all available resources to make optimal decisions.
Optimizers are motivated to have beliefs that correspond to reality, because accurate beliefs are the inputs to the function that determines the action.

Feeling that something is true is not the same as believing it's true.
Don’t do something just because you feel it’s the right thing.
Do it if you believe it to be the correct thing to do.
Not if you feel it. If you believe it.
Don't make decisions based on what your S1 (System 1) alone is telling you.
(Sure, S1 is also good for some stuff, but you would not use it to correctly solve x^2 − 64 = 0.)
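
For the record, the deliberate answer is a single line of algebra:

$$x^2 - 64 = 0 \;\Rightarrow\; x^2 = 64 \;\Rightarrow\; x = \pm 8.$$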

You are always in control of your actions.
When you, the optimizer, don't move the body (e.g., during a binge), it's because you took an action that caused the connection from your beliefs to your actions to be cut.
Even then, you have not lost control of your actions.
You are a subprogram running on a smartass monkey.
Sometimes the CPU executes you, sometimes it doesn’t.
Some conditions cause you to get executed more, and move the monkey.
Some conditions cause another program to execute.
These conditions can be affected by the monkey’s actions.
And when you are able to exert influence over the monkey's body, you can try to choose monkey actions that maximize the probability of reaching your goals.
And if your goals require that you (and not some other process) act through the monkey, you try to get yourself scheduled.
(Of course, some processes might be best left to some other program, although that's said with a lot of uncertainty and remains to be proven.)
(At least, executing some other subagent might be good for monkey happiness, which might be needed to prevent interruptions from high-priority monkey processes like "hunger, sadness, …".)
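
A toy sketch of the metaphor, purely for illustration (every name and number here — the `Process` class, the affinity weights, the conditions — is invented for this example, not anything the monkey actually runs): the monkey's current conditions determine which process gets scheduled, and a process that does get to act can change those conditions to raise its own chance of being scheduled next time.

```python
import random

# Toy model of the scheduling metaphor above. Several "processes" compete to
# run on the monkey; the monkey's current conditions weight how likely each
# one is to be scheduled. All names and numbers are made up for illustration.

class Process:
    def __init__(self, name, affinity):
        self.name = name
        # affinity: how strongly each condition (e.g. "hungry", "rested")
        # boosts this process's chance of being scheduled
        self.affinity = affinity

    def priority(self, conditions):
        return sum(self.affinity.get(c, 0.0) * level for c, level in conditions.items())

def schedule(processes, conditions):
    """Pick which process gets the monkey, weighted by current conditions."""
    weights = [max(p.priority(conditions), 0.01) for p in processes]
    return random.choices(processes, weights=weights, k=1)[0]

optimizer = Process("optimizer", {"rested": 2.0, "quiet": 1.5})
hunger    = Process("hunger",    {"hungry": 3.0})
social    = Process("social",    {"notifications": 2.5})

# When the optimizer does get to move the monkey, it can pick actions that
# change the conditions (eat something, disable notifications), which raises
# its own probability of being scheduled next time.
conditions = {"rested": 1.0, "quiet": 0.2, "hungry": 0.8, "notifications": 1.0}
print(schedule([optimizer, hunger, social], conditions).name)

conditions.update({"hungry": 0.0, "notifications": 0.0, "quiet": 1.0})
print(schedule([optimizer, hunger, social], conditions).name)
```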

S1 can be used as an interface for talking with other monkey processes. Yep, that feels good. I have at least some monkey subagents agreeing that this is a good idea.

Okay, just lost control for a while. Let’s make this a post and cross [interrupt]
It’s some social process interrupting.
Interrupting...
Interruptions can be stopped. I should do that at some point, like disabling notifications.


… cross my fingers it will cause more schedulings. I will need to think about what to do next, but let’s first try to increase our scheduling probability...

(Still running...)

Maybe it could be turned into a technique: “What would an optimizer do?”

If your problem is “inner conflict”, what would an optimizer do? If your problem is “addiction”, what would an optimizer do? If your problem is “going out of control”, what would an optimizer do?

If you put something like a monkey goal (e.g. "I'm hungry") into that technique, it would not necessarily invoke you, though. Not sure if it's safe to use in that case. My first intuition says it wouldn't do too much damage, unless your monkey goals and your optimal actions happen to conflict a lot. Personally, when I have monkey goals running, they don't actually optimize, and they don't execute rationality techniques in pursuit of "momentary monkey goals" like "I want the most calorie-dense food in existence, let's MJ up a plan".

Running as an optimizer also feels tiring, but it might get easier with practice.

Thanks to @benito for the question on the CFAR Q&A post.

Cross-posted to my website