Thanks, I really like these concepts; Grice's maxims in particular were new to me and seem very useful. Your list also got me thinking, and I feel like I have some (perhaps obvious) concepts in mind which I often apply usefully but which may not be as well known:
1. Data-processing inequality
2. Public good games
3. Evolutionarily stable equilibrium
4. Don't be results oriented (when acting in stochastic environments)
The data-processing inequality is often useful, especially when thinking about automated tools like LLMs. It states that for any fixed channel K, if Y is the output of X processed through the channel K, then the mutual information between X and Z is at least as large as that between Y and Z. E.g., if you simply tell an LLM "refine the following paragraph", and the goal of the paragraph is to transmit information from your brain to a reader, then using the LLM with this prompt can only destroy information (because the information is processed by a fixed channel which does not know the contents of your mind). Also important are the cases where the data-processing inequality does not directly apply, e.g. with a prompt like "refine the following paragraph; note that what I want to communicate is [more detailed description than the paragraph alone]".
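The inequality is easy to check numerically. Here's a small Python sketch (the joint distribution and channel are arbitrary random choices, purely for illustration): it builds the Markov chain Z-X-Y by pushing P(Z, X) through a fixed channel K and compares I(Z;X) with I(Z;Y).

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits, for a joint distribution P(a,b) given as a 2-D array."""
    pa = joint.sum(axis=1, keepdims=True)  # marginal P(a)
    pb = joint.sum(axis=0, keepdims=True)  # marginal P(b)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

rng = np.random.default_rng(0)

# An arbitrary joint distribution P(Z, X) over 4 x 4 outcomes.
p_zx = rng.random((4, 4))
p_zx /= p_zx.sum()

# A fixed noisy channel K(y | x): each row is a distribution over Y.
K = rng.random((4, 4))
K /= K.sum(axis=1, keepdims=True)

# P(Z, Y) = sum_x P(Z, x) K(y | x) -- the Markov chain Z - X - Y.
p_zy = p_zx @ K

# The DPI guarantees I(Z;X) >= I(Z;Y) for any channel K.
print(mutual_information(p_zx), mutual_information(p_zy))
```

Whatever random joint distribution and channel you draw, the first number never comes out smaller than the second.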
2. and 3. are just particularly useful concepts from game theory. I see public good games everywhere (e.g., climate, taking on duties in a community, etc.), and I actually think many situations are sufficiently well explained by a very simple model as a public good game. An evolutionarily stable equilibrium is a stronger concept than a Nash equilibrium and is useful for thinking about which equilibria actually occur in society. E.g., especially for large games with many players and mixed strategies, it's a useful concept for thinking about cultural or group norms, globalization, etc.
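To illustrate what "a very simple modeling as a public good game" can look like, here is a minimal linear public good game in Python (group size, multiplier, and endowment are made-up illustrative numbers): contributions get multiplied by r and split equally, so each unit you contribute returns only r/n to you personally.

```python
def payoff(my_contribution, others_total, n=4, r=1.6, endowment=10):
    """Linear public good game: all contributions are multiplied by r
    and the resulting pot is split equally among all n players."""
    pot = (my_contribution + others_total) * r
    return endowment - my_contribution + pot / n

# Free-riding dominates individually, because r/n < 1:
print(payoff(0, 30), payoff(10, 30))   # 22.0 vs 16.0 -> defecting pays more

# ...yet everyone contributing beats everyone defecting, because r > 1:
print(payoff(10, 30), payoff(0, 0))    # 16.0 vs 10.0
```

This tension (individual defection dominates, yet universal cooperation is better for everyone) is exactly the structure of climate or community-duty situations.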
The last one is probably well known to everyone who has seriously played online poker or done sports betting, but it applies more generally. Roughly speaking, if you get your money into the pot with an 80% chance of winning, don't focus on whether you eventually win or lose the hand. The feedback for your actions should be the assessed correctness of those actions, without including factors completely independent of them. So basically: de-noise as much as possible (without destroying information).
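A rough numerical illustration of this point (pot size, call amount, and equity are made-up numbers): the expected value of the decision is fixed at decision time, while any individual hand outcome is just that EV plus independent noise.

```python
import random

random.seed(0)

pot = 100      # chips already in the pot
call = 40      # price to call
equity = 0.8   # chance of winning once the money is in

# The quality of the decision is its expected value, known when you act:
ev = equity * pot - (1 - equity) * call   # 0.8*100 - 0.2*40 = +72 per call

# Each individual outcome is the same decision plus independent noise:
outcomes = [pot if random.random() < equity else -call for _ in range(10_000)]

# Averaging over many hands recovers the EV; a single hand tells you
# almost nothing about whether the call was correct.
print(ev, sum(outcomes) / len(outcomes))
```

Judging yourself by `ev` rather than by a single element of `outcomes` is exactly the "de-noise without destroying information" move.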
Edit: Just to clarify the details of the data-processing inequality, since I noticed Wikipedia uses different notation: the (Markov) model is Z-X-Y in my description, and in the example Z is the reader, X is the brain, and Y is the output of the LLM.
I’d heard of the data-processing inequality but don’t remember ever understanding it. Now I feel like I do. Great example.