Signalling implies an evaluator trying to guess the truth. At equilibrium, a signaller reveals as much information as is cheap to reveal. Not revealing cheap-to-reveal information is a bad sign; if the info reflected well on you, you’d have revealed it, and so at equilibrium, evaluators literally assume the worst about non-revealed but cheap-to-reveal info (see: market for lemons).
This is stage 1 signalling. Stage 2 signalling is this but with convincing lies: lies that really are good enough to fool a Bayesian evaluator (one who may be aware of the adversarial dynamic, and audit sometimes).
At stage 3, the evaluators are no longer attempting to discern the truth, but are instead discerning “good performances”, the meaning of which shifts over time but which initially bears resemblance to stage 2’s convincing lies.
Narcissism is stage 3, which is very importantly different from stage 1 signalling (maximal revealing of information, and truth-discernment) and stage 2 lying (convincingly maximizing the impression one makes).
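To make the stage 1 claim concrete, the standard disclosure-unraveling argument can be run in a few lines. This is only a sketch; the quality scale and uniform prior below are made up for illustration:

```python
# A sketch of the stage 1 unraveling argument. All numbers are hypothetical:
# five quality levels with a uniform prior, and revealing is free.
qualities = [1, 2, 3, 4, 5]
silent = set(qualities)  # start by supposing every type stays silent

while True:
    # The evaluator's estimate of a silent sender: the average quality
    # among the types still pooling on silence.
    silence_estimate = sum(silent) / len(silent)
    # Any type strictly above that estimate does better by revealing.
    revealers = {q for q in silent if q > silence_estimate}
    if not revealers:
        break
    silent -= revealers

print("Types still silent at the fixed point:", sorted(silent))
# Prints [1]: only the worst type stays silent, so the evaluator's
# inference from silence is, literally, the worst.
```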
The theory of costly signaling is specifically about stage 1 strategies in an environment where stage 2 exists—sometimes a false signal is much more expensive than a true signal of the same thing.
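For concreteness, here is how that point reads in the textbook Spence education-signaling model; this is a sketch with made-up productivity and cost numbers, checking when a separating equilibrium survives:

```python
# A sketch of the textbook Spence signaling model (all numbers hypothetical):
# wages equal believed productivity, and the signal (education level e) is
# cheaper per unit for the genuinely productive type.
PROD_LOW, PROD_HIGH = 1.0, 2.0

def cost_low(e):   # the "false" signal: expensive for the low type
    return e

def cost_high(e):  # the "true" signal: cheap for the high type
    return e / 2

def separating(e_star):
    """Check whether only high types choosing e_star is an equilibrium."""
    # The low type must prefer no signal (and the low wage) to mimicry:
    low_ic = PROD_LOW >= PROD_HIGH - cost_low(e_star)
    # The high type must prefer signaling to pooling at the low wage:
    high_ic = PROD_HIGH - cost_high(e_star) >= PROD_LOW
    return low_ic and high_ic

for e in [0.5, 1.0, 1.5, 2.0, 2.5]:
    print(f"e* = {e}: {'separates' if separating(e) else 'fails'}")
# Separation holds exactly for e* in [1, 2]: the signal is costly enough
# that faking it doesn't pay, yet honest signaling still does.
```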
Do you have a formal (e.g., game-theoretic) model of this in mind, or see an approach to creating a formal model for it?
On the one hand, I don’t want to Goodhart on excess formality/mathematization, or fail to take advantage of informal models where they’re available. On the other hand, I’m not sure long-term intellectual progress is possible without formal models: informal models seem very lossy in transmission, and it seems very easy to talk past each other when using them (e.g., two people think they’re discussing one model but actually have two different models in mind). I’m thinking of writing a Question Post about this. If the answer to the above question is “no”, would you mind if I used this as an example in my post?
It seems to me like the first two stages are simple enough that Jessica’s treatment is an adequate formalization, insofar as the “market for lemons” model is well-understood. Can you say a bit more about how you’d expect additional formalization to help here?
It’s in the transition from stage 2 to stages 3 and 4 that some modeling specific to this framework seems needed, to me.
In the original “market for lemons” game there is no signaling. Instead, the possibility of “lemons” in the market just drives out “peaches” until the whole market collapses.
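The collapse itself is easy to reproduce. Here is a sketch of the usual setup, with hypothetical parameters (qualities uniform on [0, 1], buyers valuing a car at 1.5 times what sellers do):

```python
# A sketch of the Akerlof dynamic with hypothetical parameters: seller
# qualities are uniform on [0, 1]; a car of quality q is worth q to its
# seller and 1.5 * q to a buyer, so every trade would create value.
price = 1.5  # start with buyers offering the value of the best possible car

for step in range(8):
    # At any price, only sellers whose quality is at most the price sell,
    # so the average quality actually on offer is min(price, 1) / 2.
    avg_quality = min(price, 1.0) / 2
    price = 1.5 * avg_quality  # buyers will only pay for that average
    print(f"step {step}: average traded quality {avg_quality:.3f}, price {price:.3f}")
# The price ratchets down geometrically toward zero: lemons drive out
# peaches until the market collapses, with no signaling anywhere.
```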
As I mentioned in my reply to Jessica, the actual model she had in mind for stage 2 seems more complex than any formal model I can easily find in the literature. From her short verbal description in the original comment, I was unsure exactly which model she meant (in particular, I wasn’t sure how to interpret “convincing lies”), and I’m still unsure whether the math would actually work out the way she thinks (although I grant that it seems intuitively plausible). I was also unsure whether she is assuming standard unbounded rationality or something else.
I was already confused/uncertain about stage 2, but sure, I’d be interested in thoughts about how to model the higher stages too.
We can imagine a world where job applicants can cheaply reveal information about themselves (e.g. programming ability), and can more expensively generate fake information that looks like true information (e.g. cheating on the programming ability test, making it look like they’re good at programming). The employer, meanwhile, is doing a Bayesian evaluation of likely features given the revealed info (which may contain lies), to estimate the applicant’s expected quality. We could also give the employer audit powers (paying some amount to see the ground truth of some applicant’s trait).
This forms a game; each player’s optimal strategy depends on the other’s, and in particular the evaluator’s Bayesian probabilities depend on the applicant’s strategy (if they are likely to lie, then the info is less trustworthy, and it’s more profitable to audit).
I would not be surprised if this model is already in the literature somewhere. Ben mentioned the costly signalling literature, which seems relevant.
Fine to refer to this in a question, in any case.
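One plausible way to start formalizing this sketch, with no claim that it is the intended model, is a two-type game where the applicant’s lying rate and the employer’s audit rate make each other indifferent:

```python
# A minimal sketch of the game described above, under strong simplifying
# assumptions added for illustration: two applicant types, a binary
# fake-or-not choice, and made-up payoff numbers. Good applicants pass the
# test honestly for free; bad applicants can fake a pass at cost c; the
# employer can audit a pass at cost a; hiring pays the applicant wage w,
# and hiring a bad applicant costs the employer L.
pi_good = 0.5        # prior probability the applicant is good
c, w = 0.3, 1.0      # cost of faking; wage if hired
a, L = 0.2, 1.0      # audit cost; employer's loss from a bad hire

# Mixed equilibrium: each side randomizes to make the other indifferent.
# The bad applicant is indifferent between faking and not when (1 - y) * w = c:
audit_prob = 1 - c / w
# The employer is indifferent between auditing and trusting a pass when the
# posterior P(bad | pass) equals a / L; solving the Bayes formula for x:
fake_prob = pi_good * a / ((1 - pi_good) * (L - a))

post_bad = (1 - pi_good) * fake_prob / (pi_good + (1 - pi_good) * fake_prob)
print(f"bad applicants fake with prob {fake_prob:.2f}, "
      f"employer audits with prob {audit_prob:.2f}")
print(f"P(bad | pass) = {post_bad:.2f}, audit indifference point a/L = {a / L}")
# Note how the employer's Bayesian posterior is pinned down by the
# applicant's strategy, exactly the dependence described in the comment.
```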
I couldn’t find one after doing a quick search. According to http://www.rasmusen.org/GI/chapters/chap11_signalling.pdf, there are separate classes of Audit Games and Signaling Games in the literature. It would seem natural to combine auditing and signaling into a single model, but I’m not sure anyone has done so, or how the math would work out.
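For reference, the audit-game half on its own already has a well-known mixed-equilibrium structure (the inspection game); a sketch with hypothetical numbers:

```python
# The "Audit Games" half in isolation is the textbook inspection game
# (numbers hypothetical). No pure equilibrium exists: if audits were certain
# nobody would cheat, which would make auditing wasteful, and so on around.
g = 1.0     # cheater's gain if not audited
f = 3.0     # cheater's penalty if caught
h = 0.5     # auditor's cost of auditing
loss = 2.0  # auditor's loss from an uncaught cheat

p_audit = g / (g + f)  # makes the cheater indifferent: -p*f + (1-p)*g = 0
q_cheat = h / loss     # makes the auditor indifferent: -h = -q*loss

print(f"equilibrium: audit with prob {p_audit:.2f}, cheat with prob {q_cheat:.2f}")
# Grafting a signaling stage onto this (as in the sketch above) is roughly
# the combination the comment is asking about.
```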