Thank you for the reference; I’ll read it tomorrow (or skim it if it’s >50 pages). By “training in game theory and information theory” I meant something akin to “training in chess”, “training in math competitions”, or, perhaps most similar, “training in quantitative trading”. I say this like it is a prerequisite because I think there are certain ways of thinking you can only do automatically after beating your head against the subject for a long time. For example, writing correct proofs came naturally to me because I had already trained in chess, where you constantly check for mistakes, and writing essays came naturally to me because I had already trained via coding to plan pages of text ahead of time. Without similar training in game theory and information theory, I think the cognitive load and inferential leaps may be too much to overcome through trial and error, analogous to building a rocket without Newtonian mechanics or a programming language without formal logic.
Here are some things I learned that are obviously true in hindsight, but were not things I knew I should think about in the first place:
- Adversaries have no reason to divulge information; allies have no reason to conceal information (assuming you are true adversaries or allies).
- Talk without imposed costs only allows coordination when all parties benefit from distinguishing signals; otherwise it is just babble. Notice how this makes resumes mostly useless (and why the equilibrium is everyone lying about hard-to-verify signals like years of experience). A toy illustration follows this list.
- A reputation’s value comes from costly signals, and reputation can also be spent. For example, take an up/down dilemma [1] with a long-run (institutional) actor and a series of one-off agents who know the institution’s prior history. The long-run actor should buy reputation by playing up often enough that every agent believes they will be paid more by playing up than down, while cashing in on that reputation by occasionally playing down (simulated in the sketch after the footnote).
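To illustrate the babble point in the second bullet: when a claim is free, every type makes it, so it moves a Bayesian screener’s belief nowhere; only when types send the claim at different rates (because it is costly or verifiable) does it carry information. A toy calculation, with all numbers made up for illustration:

```python
# Toy Bayes check: a free claim that every applicant type makes carries no
# information, while a claim that types make at different rates does. The
# 0.2 prior and the send rates below are illustrative assumptions.
def posterior(prior_strong, p_claim_given_strong, p_claim_given_weak):
    """P(strong | claim) by Bayes' rule."""
    num = prior_strong * p_claim_given_strong
    return num / (num + (1 - prior_strong) * p_claim_given_weak)

# Costless claim ("10 years of experience"): both types send it with
# probability 1, so the screener's posterior equals the prior: pure babble.
print(posterior(0.2, 1.0, 1.0))  # 0.2

# Costly/verifiable signal: weak types can rarely fake it, so the claim
# actually separates the types and moves the posterior.
print(posterior(0.2, 0.9, 0.1))  # ~0.69
```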
That last point, spending reputation, is where the information theory comes in. Now that I think about it, it is probably enough for the first few companies to realize they can use game/information theory in pricing recruitment and stumble their way forward from there; the more rigorous analysis can come later, as the market gets more competitive. I’m also curious how well this is done in the credit and insurance industries.
[1] Player A gets paid most for down/up responses and a little for up/up, while Player B gets paid most for up/up and a little for down/down.
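To make the reputation-spending dynamic concrete, here is a minimal simulation sketch of the dilemma in [1]. The specific payoff numbers, the agents’ decision rule (play up when the institution’s historical up-frequency clears the payoff-derived 1/3 threshold), the agnostic prior, and the institution’s safety margin are all illustrative assumptions, not part of the original setup; unlisted payoff cells are assumed to be 0.

```python
# Up/down dilemma from footnote [1], simulated with assumed payoffs:
# the long-run institution (A) earns most from (down, up) and a little
# from (up, up); each one-off agent (B) earns most from (up, up) and a
# little from (down, down).
A_PAY = {("down", "up"): 3, ("up", "up"): 1}
B_PAY = {("up", "up"): 2, ("down", "down"): 1}

def up_freq(history, default):
    return sum(m == "up" for m in history) / len(history) if history else default

def agent_move(history):
    """Play up iff E[up] = 2q beats E[down] = 1 - q, i.e. iff q > 1/3,
    where q is the institution's historical up-frequency."""
    q = up_freq(history, default=0.5)  # assumed agnostic prior
    return "up" if B_PAY[("up", "up")] * q > B_PAY[("down", "down")] * (1 - q) else "down"

def institution_move(history, margin=0.2):
    """Cash in (play down) only while the up-frequency sits comfortably
    above the agents' 1/3 threshold; otherwise rebuild reputation."""
    return "down" if up_freq(history, default=0.0) > 1/3 + margin else "up"

history, total = [], 0
for _ in range(1000):
    a, b = institution_move(history), agent_move(history)
    total += A_PAY.get((a, b), 0)
    history.append(a)

print(f"up-frequency {up_freq(history, 0):.2f}, institution payoff {total}")
```

Run long enough, the institution settles into playing down roughly every other round while every agent keeps trusting it, which is exactly the “buy reputation, then occasionally spend it” pattern described above.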
Okay, this helps me understand where you are coming from. Basically, there are antinormative conspiracies that are bad for these institutions but less bad for the conspirators themselves, so they grow in relative power and are difficult to dislodge by uncoordinated pronormative actors [1]. I would say, sure, these conspiracies exist [2], but the people within them would readily jump to the pronormative side if they could see greater benefits. It should be possible for individual actors to defeat these conspiracies just by proposing better solutions to invested parties.
For example… after several minutes, I could not think of an example. I wanted to say, “clearly Linux will outcompete Windows, it’s a better and free OS,” but in reality the Microsoft conspiracy bribes schools to indoctrinate children into buying its OS when they grow up. Or maybe, “well, don’t people leave cults when they realize they’re better off outside them?” but in reality the cult teaches people an inconsistent method of integrating value and of sourcing information on consistent methods. So it’s actually rarer and harder than I thought, and even if a pronormative actor succeeds, why won’t their object-level gains just get expropriated by a new antinormative conspiracy?
I think it should be possible to protect against that. For example, honest education studies and assessments would calibrate the system. But what happens in reality is politically motivated studies that are dishonestly reported and passed off as good teaching standards. There isn’t really a mechanism to stop politics in research funding; even the theoretical physicists couldn’t figure one out.
How do you solve this problem? Would fiercer competition work, so parasitized institutions die out faster?
EDIT: The solution is already there in my previous comment! We can’t prevent antinormative conspiracies, since they can just adopt the same policies as normative institutions until the right time to cash out. However, is it really an antinormative conspiracy if it perfectly mimics a normative institution? If you put in guard labor to audit the long-run institution’s history, you can force it to play ‘up’ often enough that it is essentially a normative institution. Of course, someone has to watch the guards, but you can watch them in a circular pattern (a toy sketch below).
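A toy sketch of what I mean by watching the guards in a circular pattern; the ring layout, the data shapes, and the report-matching check are all illustrative assumptions.

```python
# Circular guard duty: guard i audits guard (i + 1) % n by comparing that
# guard's public report against guard i's own observation, so every guard
# is watched without needing a privileged top-level watcher.
def ring_audit(observations, reports):
    n = len(reports)
    return [(i + 1) % n for i in range(n) if reports[(i + 1) % n] != observations[i]]

history = ["up", "up", "down", "up"]                # what the institution actually played
observations = [history] * 4                        # every guard saw the same history
reports = [history, history, ["up"] * 4, history]   # guard 2 whitewashes the 'down'
print(ring_audit(observations, reports))            # -> [2]
```

A lone whitewasher gets flagged by its predecessor; hiding a ‘down’ now takes a coalition of adjacent guards, so the regress is pushed back a step rather than eliminated.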
[1] Uncoordinated, because most of the time they do not even realize they are in conflict with a coordinated enemy. If they did, they could coordinate and win, which is why their opposition must be a conspiracy.
[2] Though those within the conspiracy might not label it as such. Are the MIT students who lied their way into the school (this comprises the majority of the student body) part of a conspiracy to cash out on the value of MIT’s reputation? They would say no, they’re completely unaware of being in a conspiracy; we would say that if they quack like a conspiracy and act like a conspiracy, they’re part of a conspiracy.