I like the phrase “Trust Network”, which I’ve been hearing here and there.
TRUST NO ONE seems like a reasonable approximation of a trust network before you actually start modelling one. I think it’s important to treat trust not as a boolean value: the question is not “who can I trust” or “what can I trust” but “how much can I trust this”, and in particular, trust is defined over object-action pairs. I trust myself to drive places, since I’ve learned how and done so many times before, but I don’t trust myself to pilot an airplane. Further, when I get on an airplane, I don’t personally know the pilot, yet I trust them to do something I wouldn’t trust myself to do. How is this possible? I think there is a system of incentives, and a certain amount of lore, which informs me that the pilot is trustworthy. This system, which I trust to ensure the trustworthiness of the pilot, is a trust network.
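To make the object-action framing concrete, here is a minimal sketch in Python. Everything in it is hypothetical, invented for illustration: trust is a number in [0, 1] keyed by an (agent, action) pair, and an unknown pair defaults to zero, which is exactly TRUST NO ONE before any modelling has happened.

```python
from dataclasses import dataclass, field

@dataclass
class TrustModel:
    # Maps (agent, action) -> trust in [0.0, 1.0]. Unknown pairs default
    # to 0.0: no evidence, no trust.
    scores: dict[tuple[str, str], float] = field(default_factory=dict)

    def trust(self, agent: str, action: str) -> float:
        return self.scores.get((agent, action), 0.0)

    def observe(self, agent: str, action: str, succeeded: bool,
                weight: float = 0.1) -> None:
        """Nudge trust toward 1.0 on success, toward 0.0 on failure."""
        current = self.trust(agent, action)
        target = 1.0 if succeeded else 0.0
        self.scores[(agent, action)] = current + weight * (target - current)

model = TrustModel()
for _ in range(50):                       # many successful drives
    model.observe("me", "drive", True)
print(model.trust("me", "drive"))         # high, roughly 0.99
print(model.trust("me", "fly_airplane"))  # 0.0: never demonstrated
```

The update rule here is the simplest thing that works; a real model would track evidence counts or full distributions rather than nudging a single scalar.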
When something goes wrong, blame can sometimes be traced to people and sometimes only to processes, but in either case the system has failed in the same way: it trusted someone or some process that was not reliable enough. That accountability is important for improving the system, not because someone must be punished, but because, if the system is to perform better in the future, some part of it must change.
I agree with the main article that accountability sinks, structures that protect individuals from punishment for their failures, are often very good. In a sense, this is what insurance is, and insurance is a good enough idea that it is legally enforced for dangerous activities like driving. Accountability sinks of this kind paradoxically make people less averse to making decisions: if the process has identified this person as someone to trust with some class of decision, then that person is empowered to make those decisions, and if a problem results, it is the fault of the system for having identified them improperly.
I wonder if anyone is modelling trust networks like this. It seems like I might be describing reliability engineering with Bayes nets. In any case, I think it’s a good idea and we should have more of it. Trace the things that can be traced, and make subtle accountability explicit!
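To gesture at what that might look like, here is a toy sketch on the pilot example, with every probability invented for illustration: my trust in the pilot is computed by marginalizing over my trust in the system that vetted them, and a failure propagates backwards as a Bayesian update on the system itself.

```python
# My trust in the pilot derives entirely from my trust in the
# certification/incentive system that vetted them.
def p_pilot_competent(p_system_reliable: float,
                      p_competent_given_reliable: float = 0.999,
                      p_competent_given_unreliable: float = 0.5) -> float:
    """Marginalize over whether the vetting system is reliable."""
    return (p_system_reliable * p_competent_given_reliable
            + (1.0 - p_system_reliable) * p_competent_given_unreliable)

print(p_pilot_competent(p_system_reliable=0.99))  # ~0.994
print(p_pilot_competent(p_system_reliable=0.20))  # ~0.60: weak system, weak trust

# Accountability as inference: after a failure, Bayes' rule updates
# trust in the system that made the identification.
prior = 0.99                                  # P(system reliable)
p_fail_reliable, p_fail_unreliable = 0.001, 0.5
posterior = (p_fail_reliable * prior) / (
    p_fail_reliable * prior + p_fail_unreliable * (1.0 - prior))
print(posterior)  # ~0.17: one failure sharply lowers trust in the vetting system
```

This is what "the fault of the system for having identified them improperly" looks like as arithmetic: the failure is evidence about the vetting process, and the update tells you which part of the network must change.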