I am thinking about the design of quantum money, quantum copy-protected programs, and quantum program obfuscation. Basically, understanding whether and how quantum computers can implement the strongest possible cryptographic operations (all of which are known to be classically impossible and to have a wide range of applications).
I am working on the development of collaborative filtering / recommendation protocols for large communities. This includes, for example, a system for aggregating product reviews which is robust to the presence of large numbers of planted reviews, or a system for spam filtering based on trust which remains robust in the presence of a supermajority of sophisticated spammers. More generally, this work seems like an important first step in the development of automated and robust systems of trust.
I started working on these projects because they were interesting problems I was well equipped to work on, and they seemed likely to yield publications in time for graduate school applications (optimistically, these expectations have been borne out in both cases). I don’t recommend this decision-making process.
I think automating trust and designing better recommendation systems are more important than the overwhelming majority of theoretical problems currently studied, but I have since realized that there are still more important issues to deal with. I am continuing to work on these problems now because of inertia, the fallacy of sunk costs, and a desire for status.
I’d be interested in your work on recommendation systems. How well does it deal with semi-intelligent spammers? That is, spammers who mostly copy normal people’s ratings but then alter their behaviour to prefer something more than it is worth.
Personally I think a good recommendation system is something that can have a vast impact on society, mainly through knock-on effects: if you save charity X time and money on deciding which programmer to employ (due to a recommendation system), they can then spend that money on actually helping people.
I’m interested in what you think is more important!
How well does it deal with semi-intelligent spammers?
The most obvious thing distinguishing my work from previous attempts is that it attains all its guarantees even if 99% of all users are arbitrarily sophisticated adversaries. The amount of time depends on the logarithm of the fraction of honest users. So if only one user in 10^10 is honest, it will take about 30 times longer to converge to good recommendations.
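A quick sanity check on the arithmetic, under my assumption that the convergence time scales with the base-2 logarithm of one over the honest fraction, measured relative to an all-honest population:

```python
import math

# Assumption (mine, not stated in the comment above): convergence time
# scales with log2(1/alpha), where alpha is the fraction of honest users,
# relative to an all-honest baseline.
def slowdown(honest_fraction):
    return math.log2(1 / honest_fraction)

print(round(slowdown(1e-10)))
```

This gives roughly 33, consistent with the quoted “about 30 times longer” for one honest user in 10^10.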
The goal is to find the best fixed moderation strategy (i.e., block all messages from users X, Y, Z...) for the honest users. Here the definition of honest users is arbitrary: if you choose any subset of the users to declare honest, then over a reasonable time frame the recommendations produced by the recommender system should do as well (using whatever scoring metric you use to give feedback) as the best fixed moderation strategy tailored to those users. Note that the system doesn’t know which users are “honest”: the guarantee holds simultaneously over all choices.
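One way to write this formally (my notation; the statement above is informal): for every subset $H$ of users declared honest and every time horizon $T$,

$$\sum_{t=1}^{T} \mathrm{score}_H(\mathrm{rec}_t) \;\ge\; \max_{\sigma} \sum_{t=1}^{T} \mathrm{score}_H(\sigma(t)) \;-\; R(T),$$

where $\sigma$ ranges over fixed moderation strategies, $\mathrm{score}_H$ is the feedback metric of the set $H$, and the regret term $R(T)$ grows slowly in $T$.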
Right now I have a protocol which I believe converges quite quickly (you only get a polylogarithmic number of “bad” recommendations before convergence), but which is computationally completely infeasible (it takes an exponential amount of time to produce a recommendation, whereas I would like the time to be significantly sub-linear in the number of users). I’m optimistic about reducing the computation time: for example, I have a linear-time system which probably works in practice; the question is whether there is any way for an adversary to game the approximations I have to use.
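This flavor of guarantee resembles classic “prediction with expert advice” mistake bounds. As a loose illustration only (my own sketch, not the protocol under discussion), deterministic weighted majority over candidate moderation strategies makes a number of mistakes logarithmic in the number of strategies, plus a constant factor times the best fixed strategy’s mistakes:

```python
def weighted_majority(strategies, rounds, penalty=0.5):
    """Toy experts-style recommender (illustrative sketch, not the
    protocol described above). Each candidate moderation strategy is an
    expert; we follow the weighted majority vote and shrink the weight
    of any strategy that was wrong that round. With penalty 1/2, the
    classic bound is ~2.41 * (m + log2 N) mistakes, where m is the best
    strategy's mistake count and N the number of strategies.
    """
    weights = [1.0] * len(strategies)
    mistakes = 0
    for context, truth in rounds:
        # Tally the weighted vote for each predicted label.
        votes = {}
        for w, s in zip(weights, strategies):
            p = s(context)
            votes[p] = votes.get(p, 0.0) + w
        if max(votes, key=votes.get) != truth:
            mistakes += 1
        # Multiplicatively penalize every strategy that erred.
        weights = [w * penalty if s(context) != truth else w
                   for w, s in zip(weights, strategies)]
    return mistakes

# Hypothetical example: one perfect strategy hidden among bad ones.
strategies = [
    lambda x: x % 2 == 0,   # happens to match the truth below
    lambda x: True,
    lambda x: False,
    lambda x: x % 3 == 0,
]
print(weighted_majority(strategies, [(i, i % 2 == 0) for i in range(50)]))
```

Since one strategy is perfect here (m = 0), the mistake count stays below ~2.41 · log2(4) ≈ 5 no matter how long the stream runs. Of course, this toy version scales linearly in the number of strategies, which is exactly the kind of cost the comment above wants to avoid.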
I am very interested in your “collaborative filtering / recommendation protocols for large communities” work, as I’ve been toying with similar ideas for a while. If you could say more about your approach or can link to anything I could read, I’d appreciate it very much.
If you want another pair of eyeballs, I’ll have a look.