One thought is that, when the evaluator of a team is also a member of the team (e.g. the leader), everyone on the team is incentivized to appease the evaluator to get a good evaluation. This makes the leader/evaluator strongly biased in favor of individuals who befriend them. It also pushes everyone not just to cooperate with the leader/evaluator (which is good in a team), but to always agree with them (which is bad in any team responsible for decision making).
Having an external system evaluate members of a team can reduce how much evaluation the leader does, and thereby reduce these perverse incentives.
It might be especially helpful for decision-making teams. Unfortunately it's also especially difficult to evaluate members of a decision-making team using AI, because AI is bad at decision making, and decision making is a skill where "only people with the skill can judge others' skills."
This is a real dynamic that Conway will have to design incentives around carefully. One intermediate solution is to attribute reward and blame to actions instead of individuals. There can still be internal debate over whose responsibility the actions themselves were, which will likely remain political and outside Conway's scope. I think it's reasonable to believe that companies will be penalized for unruly internal politics more now than ever before, but also to concede that some internal politics will probably always persist.
Oh, I think I read your post without understanding what “credit attribution” meant.
There are indeed two distinct problems: "evaluating decisions by attributing credit to them" and "evaluating individuals by attributing credit to them." I assumed the latter interpretation when you were talking about the former the whole time. The words "credit" and "attribution" usually refer to individuals, and I also wasn't reading closely.
Never mind.