I agree this is better (the system rewards only a subset of what it does), but it is still overgeneralizing. Systems could have a different purpose but a fallible reward structure (that's also Goodharting). You could be analyzing the purpose-reward link at the wrong level: political parties want change, so they seek power, so they need donations. This makes it look like the purpose is just to get donations, because of rewards to bundlers, but that ignores the rewards at other levels and confuses local rewards with global purpose. Just as a system does a lot of things, so it rewards a lot of things.
Good points. I think it’s fine and reasonable for a system to reward leading metrics like “are we raising money” in the service of a higher goal. But if you aren’t also doing something to reward the higher goal, or including counter-metrics to catch the obvious ways you could be moving the leading metric without serving the higher goal, then I don’t think it’s reasonable to claim that the higher goal is actually your system’s purpose.
This is of course where things get subtle—but I think this part is important.
Do the counter-metrics have to be measurable and measured, or do you see any way organizations can make room for humane, intangible interactions like "Thank you!" without optimization pressure to capture those as a net promoter score?
Good question. Sometimes the counter-metric is inherently tricky to measure, and the best available metric is simply "does a reasonable person think this is causing harm?"
Even if you measure it in a way that's totally subjective, you can still make it part of the way you reward people and thus part of the system's purpose.