I only read the AI Impacts article that includes that quote, not the data to which the quote alludes. Maybe ask the author?
Two recent articles that review the existing economic literature on information cascades:
Sushil Bikhchandani, David Hirshleifer and Ivo Welch, Information cascades, The New Palgrave Dictionary of Economics (Macmillan, 2018), pp. 6492-6500.
Oksana Doherty, Informational cascades in financial markets: review and synthesis, Review of Behavioral Finance, vol. 10, no. 1 (2018), pp. 53-69.
An earlier review:
Maria Grazia Romano, Informational cascades in financial economics: a review, Giornale degli Economisti e Annali di Economia, vol. 68, no. 1 (2009), pp. 81-109.
Information Cascades in Multi-Agent Models by Arthur De Vany & Cassey Lee has a section with a useful summary of the relevant economic literature up to 1999. (For more recent overviews, see my other comment.) I copy it below, with links to the works cited (with the exception of Chen (1978) and Lee (1999), both unpublished doctoral dissertations, and De Vany and Walls (1999b), an unpublished working paper):
A seminal paper by Bikhchandani et al. (1992) explains the conformity and fragility of mass behavior in terms of informational cascades. In a closely related paper, Banerjee (1992) models optimizing agents who engage in herd behavior which results in an inefficient equilibrium. Anderson and Holt (1997) are able to induce information cascades in a laboratory setting by implementing a version of the Bikhchandani et al. (1992) model.
The second strand of literature examines the relationship between information cascades and large fluctuations. Lee (1998) shows how failures in information aggregation in a security market under sequential trading result in market volatility. Lee advances the notion of “informational avalanches”, which occur when hidden information (e.g. quality) is revealed during an informational cascade, thus reversing its direction.
The third strand explores the link between information cascades and heavy-tailed distributions. Cont and Bouchaud (1998) put forward a model with random groups of imitators that gives rise to stock price variations that are heavy-tailed. De Vany and Walls (1996) use a Bose-Einstein allocation model to model the box office revenue distribution in the motion picture industry. The authors describe how supply adapts dynamically to an evolving demand that is driven by an information cascade (via word-of-mouth) and show that the distribution converges to a Pareto-Lévy distribution. The ability of the Bose-Einstein allocation model to generate the Pareto size distribution of rank and revenue has been proven by Hill (1974) and Chen (1978). De Vany and Walls (1996) present empirical evidence that the size distribution of box office revenues is Pareto. Subsequent work by Walls (1997), De Vany and Walls (1999a), and Lee (1999) has verified this finding for other markets, periods and larger data sets. De Vany and Walls (1999a) show that the tail weight parameter of the Pareto-Lévy distribution implies that the second moment may not be finite. Lastly, De Vany and Walls (1999b) have shown that motion picture information cascades begin as action-based, noninformative cascades, but undergo a transition to an informative cascade after enough people have seen a movie to exchange “word of mouth” information. At the point of transition from an uninformed to an informed cascade, there is a loss of correlation and an onset of turbulence, followed by a recovery of week-to-week correlation among high-quality movies.
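To make the mechanism behind the Bikhchandani et al. (1992) result concrete, here is a minimal simulation sketch. It assumes, as a common simplification, that an indifferent agent follows her own signal (the original paper has indifferent agents flip a coin, which makes pre-cascade actions only partially informative); the function name and parameters are illustrative, not from any of the cited papers.

```python
import random

def run_cascade(n_agents=50, p=0.7, seed=None):
    """Simulate the sequential-choice model of Bikhchandani, Hirshleifer
    and Welch (1992). Each agent privately observes a binary signal that
    matches the true state V with probability p > 1/2, sees all earlier
    actions, and then adopts (1) or rejects (0)."""
    rng = random.Random(seed)
    V = rng.choice([0, 1])   # true state: adopting is correct iff V == 1
    actions = []
    d = 0                    # net count of signals inferable from past actions
    cascade_start = None     # 1-based index of first agent acting in a cascade
    for i in range(1, n_agents + 1):
        signal = V if rng.random() < p else 1 - V  # private signal, accuracy p
        if abs(d) >= 2:
            # Public evidence outweighs any single private signal:
            # the agent imitates, and her action reveals nothing new.
            action = 1 if d > 0 else 0
            if cascade_start is None:
                cascade_start = i
        else:
            # Own signal is still (weakly) decisive: follow it, so the
            # action reveals the signal to later agents.
            action = signal
            d += 1 if signal == 1 else -1
        actions.append(action)
    return V, actions, cascade_start
```

Once two more inferred signals point one way than the other, every subsequent agent rationally ignores her own information, which is why cascades are both conformist and fragile: the public pool of information stops growing the moment the cascade begins.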
‘Information cascades’ does not seem to be the best choice of keywords.
I wouldn’t say that ‘information cascades’ isn’t the best choice of keywords. What’s happening here is that the same phenomenon is studied by different disciplines in relative isolation from each other. As a consequence, the phenomenon is discussed under different names, depending on the discipline studying it. ‘Information cascades’ (or, as it is sometimes spelled, ‘informational cascades’) is the name used in economics, while network science seems to use a variety of related expressions, such as the one you mention.
[meta] Not sure why the link to the overview isn’t working. Here’s how the comment looks before I submit it:
(The same problem is affecting this comment.)
In any case, the URL is:
Thanks for this.
Re extremizing, the recent (excellent) AI Impacts overview of good forecasting practices notes that “more recent data suggests that the successes of the extremizing algorithm during the forecasting tournament were a fluke.”
Here’s an insight I had about how incentives work in practice, that I’ve not seen explained in an econ textbook/course.
There are at least three ways in which incentives affect behaviour: 1) via consciously motivating agents, 2) via unconsciously reinforcing certain behaviour, and 3) via selection effects. I think perhaps 2) and probably 3) are more important, but much less talked about.
Jon Elster distinguishes these three different ways in Explaining Social Behavior. He first draws a distinction between 1-2 (“reinforcement”) on the one hand, and 3 (“selection”), on the other. He then draws a further distinction between 1 (“conscious rational choice”) and 2 (“unintentional choice”). Here are the relevant excerpts from ch. 11 (emphasis in the original; I have added numbers in square brackets to make the correspondence between your distinctions and his more conspicuous):
In this chapter, I discuss explanations of actions in terms of their objective consequences… There are two main ways in which this can happen: by reinforcement [1-2] and by selection [3]… If the consequences of given behavior are pleasant or rewarding, we tend to engage in it more often; if they are unpleasant or punishing, it will occur less often. The underlying mechanism could simply be conscious rational choice, if we notice the pleasant or unpleasant consequences and decide to act in the future so as to repeat or avoid repeating the experience. Often, however, the reinforcement can happen without intentional choice.
In game theory, the costs and benefits in terms of which defection is defined occur in a well-defined context of strategic interaction. My objection was to the use of defection in a way that implied that the situation described in the post had a particular game-theoretic structure, when in fact no clear account was given of what that structure was supposed to be.
Feed the Spinoff Heuristic!
Parapsychology: the control group for science
The view you articulate is perfectly intelligible. I’m just not sure it corresponds to the view expressed in the OP. Why invoke notions like defection, if all you want to say is that you should not impose a great cost on others when you can do so at a small cost to yourself?
I think the point of the OP is not to encourage people to go to physical restaurants, but to discourage the use of online delivery services relative to other ways of placing orders. As they write (boldface added):
If you like the restaurant and want those working there to earn a living, and the place to continue to exist, do not order via online services like SeamlessWeb, GrubHub, Delivery.com or Caviar, if there is another way to contact the restaurant.
I do find the post confusing in certain ways; for example, the following quote expresses a view which I find hard to understand, let alone agree with:
If you would cost your local place $5 to save the cost of a fifteen second phone call, make no mistake. You are defecting. You are playing zero-sum games with those who should be your allies. You are bad, and you should feel bad.
See Ben Garfinkel’s talk at EA Global London 2018 (which I’ll link when it’s available online).
Ben’s talk is now online.
(Loved the post, BTW.)
EDIT: A transcript is now also available.
Any further changes since the post was published?
Yet that last chapter also showcases one of the book’s main failings. Thing 21’s title is about the benefits of big government, but its content is only about the welfare state. I’m happy to grant that social safety nets can be beneficial for job mobility, while still strongly believing that increased regulation and state-sector employment have the exact opposite effect.
I agree with this. Jason Brennan draws a useful distinction between the “social insurance state”, which seeks to provide various goods and services, and the “administrative state”, which seeks to regulate the economy. Since these two functions of the state are clearly very different, it makes little sense to frame the discussion as one where one should be either for or against “big government”.
Subject: History of Economics
Recommendation: Economics Evolving, by Agnar Sandmo
Reason: A superbly clear overview of the history of economics, from Adam Smith until the 1970s. Each chapter provides a guide to further reading. I found this book much better than the alternatives in the genre that I consulted, including Lionel Robbins’ opinionated A History of Economic Thought and Joseph Schumpeter’s chaotic History of Economic Analysis.
As a companion, I recommend Keynes’ Essays in Biography, a collection of wonderfully written (and astonishingly well-researched) essays on some of the great English economists, including Malthus, Jevons, Edgeworth and Marshall.
Thanks for the feedback. I agree that a comment worded in the manner you suggest would have communicated my point more effectively.
My point is that these early pronouncements are (limited) evidence that we should treat Eliezer’s predictions with more caution than we would otherwise.
Yes, I am aware that this is what Eliezer has said, and I wasn’t implying that those early statements reflect Eliezer’s current thinking. There is a clear difference between “Eliezer believed this in the past, so he must believe it at present” and “Eliezer made some wrong predictions in the past, so we must treat his current predictions with caution”. Eliezer is entitled to ask his readers not to assume that his past beliefs reflect those of his present self, but he is not entitled to ask them not to hold him responsible for having once said stuff that some may think was ill-judged. (If Eliezer had committed serious crimes at the age of 18, it would be absurd for him to now claim that we should regard that person as a different individual who also happens to be called ‘Eliezer Yudkowsky’. Epistemic responsibility seems analogous to moral responsibility in this respect.)