Tangentially, is it possible for a good reputation metric to survive attacks in real life?
Imagine that you become, e.g., a famous computer programmer. But although you are a celebrity among free software people, you fail to convert this fame into money, so you must keep a day job at a computer company which produces shitty software.
One day your boss realizes that you have high prestige in the given metric, while the company has low prestige. So the boss asks you to “recommend” the company on your social network page (which would increase the company’s prestige and hopefully its profit, though it might decrease your prestige as a side effect). Maybe this would be illegal, but let’s suppose it isn’t, or that you are not in a position to refuse. Or imagine a more dramatic situation: you are a widely respected political or economic expert, it is 12 hours before an election, and a political party has kidnapped your family and threatens to kill them unless you “recommend” this party, which according to their model would help them win the election.
In other words, even a digital system that works well could be vulnerable to attacks from outside the system, where otherwise trustworthy people are forced to act against their will. A possible defense would be if people could somehow hide their votes; e.g. your boss might know that you have high prestige and the company has low prestige, but has no way to verify whether you have “recommended” the company or not (so you could simply lie and say you did). But if we make everything secret, is there a way to verify whether the system is really working as described? (The owner of the system could just add 9000 trust points to his favorite political party and no one would ever find out.)
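For what it’s worth, cryptography has a standard building block for the “hidden but bindable vote” part of this: a commitment scheme. A minimal sketch in Python; the function names and the vote format are my own invention, purely for illustration:

```python
import hashlib
import secrets

def commit(vote: str) -> tuple[str, bytes]:
    """Publish the commitment; keep the nonce secret.

    The hash binds you to the vote without revealing it: nobody
    (boss included) can tell from the commitment what you voted,
    but you cannot later open it as a different vote.
    """
    nonce = secrets.token_bytes(32)
    commitment = hashlib.sha256(nonce + vote.encode()).hexdigest()
    return commitment, nonce

def verify(commitment: str, vote: str, nonce: bytes) -> bool:
    """Anyone given the revealed (vote, nonce) pair can check it."""
    return hashlib.sha256(nonce + vote.encode()).hexdigest() == commitment

# You commit publicly, and reveal only to auditors you choose.
c, n = commit("recommend-company: no")
assert verify(c, "recommend-company: no", n)
assert not verify(c, "recommend-company: yes", n)
```

The catch is that this gives you verifiability, not coercion resistance: a boss or a kidnapper can simply demand the nonce along with the vote. Genuinely coercion-resistant designs (the “receipt-freeness” literature in election cryptography) need considerably more machinery than this.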
I suspect this is all confused and I am asking the wrong question. So feel free to answer the question I should have asked.
There are simultaneously a large number of laws prohibiting employers from retaliating against people for how they vote, and a number of accusations of exactly such retaliation, so this isn’t a theoretical issue. I’m not sure it’s distinct from other methods of compromising trusted users—the effects are similar whether the compromised node was beaten with a wrench, got brain-eaten, or just trusted Microsoft with their certificates—but it’s a good demonstration that you simply can’t trust any node inside a network.
(There’s some interesting overlap with MIRI’s value stability questions, but they’re probably outside the scope of this thread and possibly only metaphor-level.)
Interestingly, there are some security systems designed under the assumption that some number of their nodes will be compromised, with some resistance to such attacks. I’ve not seen this extended to reputation metrics, though, and there are technical limitations. Tor, for example, can only resist about a third of its nodes being compromised, and possibly fewer than that. Other setups have higher theoretical resistance, but depend on central high-value nodes that trade resistance against compromise for vulnerability to spoofing.
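(For reference, the “about a third” figure is the classical Byzantine fault tolerance bound: n nodes can tolerate f arbitrarily misbehaving ones only if n ≥ 3f + 1. A toy sketch of robust aggregation under that bound; the trimmed-mean approach and all names here are my own illustration, not how Tor or any deployed reputation system actually works:

```python
def max_tolerable_faults(n: int) -> int:
    """Classical BFT bound: n >= 3f + 1, hence f = (n - 1) // 3."""
    return (n - 1) // 3

def robust_score(scores: list[float]) -> float:
    """Trimmed mean: drop the f lowest and f highest reports, so up
    to f colluding nodes cannot push the result outside the range
    of the honest reports."""
    f = max_tolerable_faults(len(scores))
    trimmed = sorted(scores)[f:len(scores) - f]
    return sum(trimmed) / len(trimmed)

# 7 raters, so up to 2 compromised nodes are survivable.
honest = [0.80, 0.75, 0.90, 0.85, 0.80]
attackers = [0.0, 0.0]  # colluding to tank the reputation
print(robust_score(honest + attackers))  # ~0.78, still in the honest range
```

Note that the bound is brittle in exactly the way described upthread: it assumes the compromised nodes are a minority chosen at random, and says nothing about an attacker who coerces the high-prestige nodes specifically.)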
It seems like there’s some value in closing the gap between carrier wave and signal in reputation systems, building the reputation into the medium itself rather than running a discrete reputation system on top, but my sketched-out implementations become computationally intractable quickly.
I don’t have a solution for you, but a related probably-unsolvable problem is what some friends of mine call “cashing in your reputation capital”: having done the work to build up a reputation (for trustworthiness, in particular), you betray it in a profitable way and run.
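The break-even arithmetic for that move is simple, which is part of why it’s hard to prevent: defecting pays exactly when the one-shot payoff exceeds the discounted value of the income stream the reputation would otherwise keep producing. A back-of-the-envelope sketch, with all numbers invented:

```python
def honest_value(per_period_payoff: float, discount: float) -> float:
    """Present value of staying honest forever:
    p + p*d + p*d^2 + ... = p / (1 - d), for discount factor 0 < d < 1."""
    return per_period_payoff / (1.0 - discount)

def worth_betraying(exit_payoff: float, per_period_payoff: float,
                    discount: float) -> bool:
    """Defect iff the one-shot exit-scam payoff exceeds the value
    of the reputation being burned."""
    return exit_payoff > honest_value(per_period_payoff, discount)

# A patient agent (discount 0.99) values its reputation at 100
# periods' worth of income, so only a large scam tempts it:
print(worth_betraying(50, 1.0, 0.99))   # False
print(worth_betraying(150, 1.0, 0.99))  # True: cash in and run
```

This points at the usual mitigation: keep any single transaction small relative to the value of the ongoing relationship (escrow, transaction caps, staged trust), so the inequality never flips.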
… otherwise trustworthy people are forced to act against their will. … But if we make everything secret, is there a way to verify whether the system is really working as described?
This is a problem in elections. In the US (depending on the state, I believe) there are rules intended to prevent someone from being able to prove that they voted a particular way (to make coercion futile), and the question then is whether the vote counting is accurate. I would suggest that the topic of designing fair elections contains the answer to your question, insofar as an answer exists.
In the US (depending on the state, I believe) there are rules intended to prevent someone from being able to prove that they voted a particular way (to make coercion futile),
And then there are absentee ballots which potentially make said laws a joke.