I enjoyed this post.
It also hints at the notion of signaling equilibria. Consider the Helen of Troy example—this is clearly not an equilibrium, because Helen ends up marrying a bankrupt. Soon “spends lots of money on diamonds” will no longer be a signal of wealth, but will instead be a signal of profligacy—as indeed it is where I live. A man walking around in flashy jewellery would be considered low-class, presumably because in the past there has been exactly this signaling reversal.
In a stable signaling equilibrium, the signal needs to be hard to fake. This is why easy-to-fake signals are unstable: in the flowers example, the proles can and will catch on and switch to the upper-middle-class flowers, so the upper middle class have to keep moving to stay ahead of them. The same phenomenon is seen in baby names, where upper-middle-class names become prole after a generation.
One thing I would have preferred is a discussion of the positive externalities of signaling, not just the negative ones. For example, if Yvain and lukeprog are both trying to signal their superior intelligence by writing insightful posts, this may turn into an “arms race” for them, losing utility. However, the LessWrong community gains utility overall. I think the externalities of signaling are generally positive in the real world; they tend to be negative only in games that are zero-sum anyway (e.g. begging).
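A toy calculation, with purely made-up numbers (not from the post), of how an arms race that is costly for the signalers can still be net-positive once the audience's gains are counted:

```python
# Toy arms-race model; every number here is an illustrative assumption.
effort_cost_per_post = 3       # disutility an author incurs writing one insightful post
status_gain_per_post = 2       # signaling payoff the author gets from that post
reader_value_per_post = 10     # utility the rest of the community gets from reading it

posts_each = 5                 # how far the "arms race" pushes each author
authors = 2

per_author_net = posts_each * (status_gain_per_post - effort_cost_per_post)
community_net = authors * posts_each * reader_value_per_post
overall_net = authors * per_author_net + community_net

print(per_author_net)   # -5: each author loses utility to the arms race
print(community_net)    # 100: readers gain far more
print(overall_net)      # 90: the externality is positive overall
```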
You have just summarized “civilisation” in a nutshell.
...and still counts himself king of infinite space.
Looks like your signaling backfired in this instance.
Yeah, I noticed that.
How does generating insightful posts end up being negative utility for them? Unless they don’t LIKE generating insightful posts, which seems doubtful.
I am assuming that it takes effort to generate insightful posts, and that, for sufficiently large numbers of posts, the disutility of the effort predominates.
I have a very hard time envisioning them being driven by signaling to do far more productive work than they ought to for their own good.
This is because it was specified that the work remain high quality. If you work yourself to the bone (producing negative utility for yourself), the product will be sub-par.
Every now and then, I’ve neglected college assignments due to being more driven to write a post (for LW or elsewhere). E.g. The Curse of Identity lost me some points for a course grade, because I was so caught up with writing it that I didn’t go to an exercise session that would have earned me the points.
Of course, whether or not this was overall disutility for myself is debatable.
If you work for maximum signaling value, what is the likelihood that you are also working for maximum productive value? Unless the signaling is completely without noise, the most effective signaling behavior will be less productive than the most productive behavior.
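A minimal sketch of that point, with hypothetical actions and scores: when the observed signal is only loosely correlated with productivity, the signal-maximizing choice and the productivity-maximizing choice come apart.

```python
# Hypothetical actions scored as (productive_value, signaling_value); numbers are made up.
actions = {
    "quietly fix the hard bug":        (10, 4),
    "write a flashy but shallow post": (4, 9),
    "write a solid, insightful post":  (8, 8),
}

most_productive = max(actions, key=lambda a: actions[a][0])
best_signal = max(actions, key=lambda a: actions[a][1])

print(most_productive)  # quietly fix the hard bug
print(best_signal)      # write a flashy but shallow post
```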