It just feels like a semantics debate between a teleological and a more empirical mindset. I think the point of that sentence is that if a system meant to do A in practice does B, then after this fact has been transparent to everyone involved for long enough, with plenty of chances arising to dismantle or thoroughly rearrange the system, continuing to let it do B is as good as a stamp of approval for B.
In practice I think this is a bit harsh on the people involved, because a lot of the time this will happen due to inertia, lack of imagination, lack of accountability, general confusion about who should do what and on whose authority, etcetera, rather than genuine malice. But also, at some point, you kind of lose patience with people fumbling around that way and decide that ineptitude is as bad as malice if it leads to the same results.
Where it gets tricky is that it can be hard to see from the outside whether a system continues to do B because the people running it don’t care about B, because it’s impossibly hard to do A without also doing B, or because they are trying really hard to avoid doing B but haven’t worked out how yet.
Looking at what a system rewards lets you see which of these situations it is actually in. If it is actively rewarding people for not doing B, then B is not the purpose of the system.
One interesting subtlety is what we should say about a system whose purpose is A, that rewards A, and yet ends up doing a lot of B. I think that B isn’t the purpose unless the system actively rewards B, but you could say the purpose is to “do A, and tolerate doing B in the course of doing A”.
There is definitely some truth in what you say.