Book Review: The Reputation Society. Part I

The Reputation Society (MIT Press, 2012), edited by Hassan Masum and Mark Tovey, is an anthology on the possibilities of using online rating and reputation systems to systematically disseminate information about virtually everything: people, goods and services, ideas, and so on. Though the use of online rating systems is an overarching theme, the book is quite heterogeneous (like many anthologies). I have therefore chosen to structure the material in a somewhat different way. This post consists of a short introduction to the book, while in the next, far longer post, I list a number of concepts and distinctions commented on by the authors (explicitly or implicitly) and briefly summarize their take on them.

My hope is that this Wiki-style approach maximizes the amount of information per line of text. Also, though these concepts and distinctions are arguably the most useful material in the book, they are unfortunately not gathered in any one place. Hence I think my list should be of use for those who go on to read the book, or parts of it. I also hope that this list of entries could be the start of a series of Less Wrong Wiki entries on reputation systems. Moreover, it could be a good point of departure for general discussions of rating and reputation systems. I would be happy to receive feedback on this choice of presentation form (as well as on the content, of course).

A chapter-by-chapter review (more of a guide to which chapters to read, really) can be found on my blog. (This review is already too long, which is why I put the chapter-by-chapter overview there rather than here at Less Wrong.) Monique Sadarangani has also written a review, which focuses on various legal aspects of online rating systems. Another associated text you might consider reading is Masum and Yi-Cheng Zhang’s “Manifesto for the Reputation Society” (2004).


Introduction

People have, of course, always relied on others’ recommendations on a massive scale. We often don’t have time to figure out who is reliable and who is not, which goods are worth buying and which are not, which university educations are valued by employers and which are not, and so on. Instead we look to the testimonies and recommendations of others.


As several of the authors point out, these recommendations have often been given in a quite unsystematic fashion. In small societies, this lack of systematicity and structure was to some extent outweighed by the wealth of information you would obtain about any individual person or item. Everybody knew everybody, which meant that a crook would typically be identified as such sooner or later, even though information about people’s trustworthiness was not spread in an organized, rational fashion.


However, when people moved into cities, it became easier for dishonest people to hide in the crowd. One-off encounters with strangers became much more common, and with them the incentives to cheat increased: these strangers typically could not identify you, which meant that your reputation was not damaged by dishonorable behavior (see chs. 4, 6).


The inhabitants of cities, particularly those working in professions such as trade, tried to counter these problems by forming associations which guaranteed that their members conducted themselves properly (or else they would be thrown out). As the complexity of society has increased, so have the number and efficiency of these recommendation and reputation systems (italicized terms appear as entries in the next post). Today there are countless organizations that keep track of the creditworthiness of individuals (e.g., FICO), companies, and countries (e.g., Standard & Poor’s and Moody’s), the quality of education (e.g., The Guardian’s University League Table, an influential annual university ranking in the UK), the quality of restaurants (Guide Michelin), and so on.


As virtually all of the authors argue, however, the Internet offers spectacular opportunities for constructing rating systems that are more reliable and vastly more pervasive than anything yet seen. The editors sum up this optimism in their introduction (Location 182, 2nd page of the introduction):

In today’s world, reliable advice from others’ experience is often unavailable, whether for buying products, choosing a service, or judging a policy. One promising solution is to bring to reputation a similar kind of systematization as currencies, laws, and accounting brought to primitive barter economies. Properly designed reputation systems have the potential to reshape society for the better by shining the light of accountability into dark places, through the mediated judgments of billions of people worldwide, to create what we call the Reputation Society.


There are of course already a great number of Internet rating systems, including those used by Google, Facebook (the “like” system), Amazon, eBay, Slashdot, Yelp, Netflix, Reddit, and, not to forget, Less Wrong. Many of these systems are discussed in the book (not Less Wrong, though). In particular, the authors try to assess what we can learn from the successes and failures of these rating sites.
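
To make the notion of a rating system a bit more concrete for readers who have never looked inside one, here is a toy sketch that is not taken from the book: a recurring design problem for sites like these is that an item with a single five-star vote can outrank an item with hundreds of solidly good ratings. A common remedy is a damped, Bayesian-style mean that pulls sparsely rated items toward a global prior. The function name and parameter values below are my own illustrative choices, not any particular site’s algorithm.

```python
# Toy illustration (not from the book): a "damped mean" rating aggregator.
# Sparsely rated items are pulled toward a global prior, so a single
# five-star vote cannot outrank an item with many consistently good ratings.

def damped_mean(ratings, prior_mean=3.0, prior_weight=5):
    """Bayesian-style average: behaves as if `prior_weight` phantom ratings
    of `prior_mean` had been cast before any real ratings arrived."""
    total = prior_mean * prior_weight + sum(ratings)
    count = prior_weight + len(ratings)
    return total / count

print(damped_mean([5]))              # ~3.33: one rave review barely moves the score
print(damped_mean([4, 5, 4, 5, 4]))  # 3.7: a good score earned through volume
```

Real systems of course add far more (weighting raters by their own reputation, detecting fraud, decaying old ratings), but even this simple damping shows how much design judgment goes into a seemingly trivial “average rating.”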

There is a general (though seldom explicitly stated) sentiment in the book that the existing rating systems do not come close to exhausting the opportunities the Internet provides us with. I certainly share this sentiment. As pointed out in ch. 5 (see the entry Underutilization of reputational information), people make a great number of “private judgments” in their heads which it would be very useful for others to learn about, but which they do not share. If they could be persuaded to share them to a greater extent, the social gains would be huge. If consumers got more reliable information about the quality of different goods and services (via consumer rating systems), the providers of those items would be forced to improve their products. In some areas this is already happening, but others are lagging. The potential gains stretch far beyond goods and services, though: public debates would be conducted more rationally if rating systems penalized bullshitters (ch. 15), government would be better run if its actions and policies were rated in a rational way (ch. 13), and science could improve if peers rated others’ work more rationally than they do at present (chs. 10-12). You could even imagine people rating your life plans, your behavior, and other things that are primarily of interest to yourself. Only imagination limits the potential uses of rating systems.


Even though there is some research on rating systems (Chrysantos “Chris” Dellarocas, the author of the first chapter, is an expert in the field), most rating systems seem to be created in a quite unsystematic, trial-and-error fashion. We should instead draw on the full range of the social sciences (psychology, sociology, economics, law, history, anthropology, and political science) when constructing such systems. I am convinced that we could benefit greatly as a society if we spent more time and resources on the construction of efficient rating systems. I also think that the Less Wrong community, with its combination of an intellectually curious, rational attitude and strong programming skills, potentially has a lot to contribute here.

At the same time, one shouldn’t get overly optimistic. There are lots of hurdles to clear. I certainly do not share the wild optimism of Craig Newmark, the founder of Craigslist, who writes as follows in the foreword (Location 70, 1st page of the Foreword):

By the end of this decade, power and influence will have shifted largely to those people with the best reputations and trust networks and away from people with money and nominal power. That is, peer networks will confer legitimacy on people emerging from the grassroots.

These bold words remind me of the similarly bold predictions in Cass Sunstein’s Infotopia (2006) of how prediction markets, wikis, and other collective enterprises in the spirit of the wisdom of the crowd would transform society, and especially human knowledge. So far, Sunstein’s predictions haven’t been borne out, and the odds don’t look too good for Newmark either. I do agree with Newmark that there is huge potential in rating systems, but realizing that potential is not going to happen by itself. It will take lots of testing, lots of ingenuity, lots of hard work, and certainly considerably more time than Newmark believes.