An incentive structure that might not suck too much.

You want to make things better, or to live in a world that makes things better. But how do you go about actually doing that? There is truth to Drucker’s maxim:

“If you can’t measure it, you can’t improve it.”

But you have also heard of Goodhart’s law:

“When a measure becomes a target, it ceases to be a good measure.”

But you have measures. How can you use them without them becoming a target?

Delegation

If you are delegating work, this becomes even trickier. You can’t give the people you are delegating to the measures you use (unless you trust them to use them properly and not turn them into targets). So instead you give people rules to follow or goals to meet that aren’t the measures.

Then you evaluate them on how well they follow the rules or meet the goals (not on how well they move the measure), and you iterate on the rules and goals. If you have people who follow the rules and achieve the goals *AND* you iterate on those things, you can actually effect change in the world. If you don’t iterate, you’ll just end up optimising whatever the first set of rules points at, rather than the thing you actually want to achieve.

You also probably want to give people some slack with the rules and goals, so that they have spare energy to look at the world and figure out what they think is best. If people are run ragged trying to meet a goal in order to survive, all other considerations fall by the wayside.

Fixing the goals

During the iteration of the rules, how do you avoid Goodhart’s law yourself? You want people to lead good, happy lives, but you don’t want to end up secretly giving people drugs to make them happy, because that short-circuits the thing you actually care about. You also don’t want to kill everyone to reduce long-term suffering.

So instead you build yourself a model of what Good looks like. This model is important: it allows you to decouple your measure from your target. An example might be, “It is Good for people to be wealthy, as it allows them to do more things”. You use your model to generate a target, in this case “make people wealthier”. Then you alter the rules and goals to hit that target.

What happens if your model is wrong? Let us say some people are becoming unhappier as they become wealthier, due to pollution causing health issues.

This is where the measures come in. You use your measures to check whether your model is correct. If people aren’t becoming happier, less stressed, healthier etc. as they become wealthier, you update your model. In this case they aren’t, so you improve your model, find new targets, and therefore give new goals and rules to the people you delegate to.
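The loop described above — model generates target, delegates pursue it, measures validate the model, model gets updated — can be sketched in code. This is a toy illustration only: the candidate targets, the observed effects, and the simple averaging update rule are all invented for the example, not part of the original argument.

```python
# Toy sketch of the model -> target -> measure -> update loop.
# The measures are used to correct the MODEL; they are never handed
# to delegates as targets, which is what avoids Goodhart's law.

def choose_target(model):
    """Pick the intervention the model currently believes does the most Good."""
    return max(model, key=model.get)

def update_model(model, target, measured_effect, rate=0.5):
    """Move the model's belief about a target toward the measured effect."""
    model[target] += rate * (measured_effect - model[target])
    return model

# Hypothetical model: believed effect of each candidate target on well-being.
model = {"raise wealth": 1.0, "cut pollution": 0.2}

# Hypothetical measures observed after delegates act (stand-ins for
# surveys, health statistics, etc.). Here wealth quietly backfires.
observed = {"raise wealth": -0.3, "cut pollution": 0.8}

for _ in range(3):
    target = choose_target(model)                 # model generates a target
    effect = observed[target]                     # measure what actually happened
    model = update_model(model, target, effect)   # blame the model, not the people

print(choose_target(model))  # the model has switched to "cut pollution"
```

Note that the delegates never see `observed`; the measure stays a diagnostic for the model rather than becoming a target itself.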

Any anger at poor performance on your measures should not be taken out on the people you have delegated to (unless they didn’t do what you said); it should be taken out on your model of the world, which led you to think it was a good idea to tell people to do that thing.

You can also improve your model with small-scale studies and by trying to understand the inner workings of humans. This gives you a quicker feedback loop than changing society and seeing what happens.

This is a rough description of where we are as a society currently, although we suck at the last step: updating our models and changing our targets. We tend to get stuck on the first set of targets we find: GDP and IQ, publishing ‘high impact’ papers, or reducing waiting times at doctors. We don’t use measures to say “hey, something’s wrong, let us change things”. The best we have is democracy, but that is a very blunt instrument and has its own incentive-structure problems.