So, your proposed definition of knowledge is information that pays rent in the form of anticipated experiences?
Does agency matter? There are 21 x 21 x 4 possible payoff matrices for a 2x2 game if we use Ordinal payoffs. For the vast majority of them (all but about 7 x 7 x 4 of them), one or both players can make a decision without knowing or caring what the other player’s payoffs are, and get the best possible result. Of the remaining 182 arrangements, 55 have exactly one box where both players get their #1 payoff (and, therefore, will easily select that as the equilibrium).
All the interesting choices happen in the other 128ish arrangements, 6⁄7 of which have the pattern of the preferred (1st and 1st, or 1st and 2nd) options being on a diagonal. The most interesting one (for the player picking the row, and getting the first payoff) is:
1 / (2, 3, or 4) ; 4 / (any)
2 / (any) ; 3 / (any)
The optimal strategy for any interesting layout will be a mixed strategy, with the % split dependent on the relative Cardinal payoffs (which are generally not calculable, since they include Reputation and other non-quantifiable effects).
Therefore, you would want to weight the quality of any particular result by the chance of that result being achieved (which also works for the degenerate cases where one box gets 100% of the results, or where two perfectly equivalent boxes share that).
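To make the “% split” concrete: in a game with an interior mixed equilibrium, each player mixes so that the other player is indifferent between their two options. A rough sketch of that calculation for generic cardinal payoffs (illustrative code, not tied to any particular matrix below):

// Mixed-strategy split for a 2x2 game via the indifference condition.
// a[r][c] = row player's cardinal payoff, b[r][c] = column player's payoff.
// Only meaningful when both denominators are nonzero and the resulting
// probabilities land in [0, 1] (i.e., the game really has an interior mix).
function mixedStrategySplit(a: number[][], b: number[][]) {
  // Row player picks row 0 with probability p, chosen so the column player
  // is indifferent between the two columns.
  const p = (b[1][1] - b[1][0]) / (b[0][0] - b[0][1] + b[1][1] - b[1][0]);
  // Column player picks column 0 with probability q, chosen so the row player
  // is indifferent between the two rows.
  const q = (a[1][1] - a[0][1]) / (a[0][0] - a[1][0] + a[1][1] - a[0][1]);
  return { probRow0: p, probCol0: q };
}
// Example: matching pennies (pure conflict) comes out to a 50/50 split for both.
console.log(mixedStrategySplit([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]));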
So, given this payoff matrix (where P1 picks a row and gets the first payout, P2 picks column and gets 2nd payout):
5 / 0 ; 5 / 100
0 / 100 ; 0 / 1
Would you say P1’s action furthers the interest of player 2?
Would P2’s action further the interest of player 1?
Where would you rank this game on the 0–1 scale?
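If it helps to see that mechanically, here is a small sketch (treating the numbers above as the payoffs the players actually maximize) of which choices dominate:

// The matrix above: rows are P1's choices, columns are P2's.
// payoff[row][col] = [P1's payout, P2's payout].
const payoff: [number, number][][] = [
  [[5, 0], [5, 100]],
  [[0, 100], [0, 1]],
];
// P1 gets 5 in both cells of row 0 and 0 in both cells of row 1,
// so row 0 is best for P1 no matter what P2 does.
const p1PrefersRow0 =
  payoff[0][0][0] >= payoff[1][0][0] && payoff[0][1][0] >= payoff[1][1][0];
// P2's best column flips with P1's row: column 1 against row 0 (100 vs 0),
// column 0 against row 1 (100 vs 1).
const p2BestVsRow0 = payoff[0][1][1] > payoff[0][0][1] ? 1 : 0;
const p2BestVsRow1 = payoff[1][1][1] > payoff[1][0][1] ? 1 : 0;
console.log({ p1PrefersRow0, p2BestVsRow0, p2BestVsRow1 });
// -> { p1PrefersRow0: true, p2BestVsRow0: 1, p2BestVsRow1: 0 }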
Correlation between outcomes, not within them.
If both players prefer to be in the same box, they are aligned. As we add indifference and opposing choices, they become less aligned.
In your example, both people have the exact same ordering of outcomes. In a classic PD, there is some mix.
Totally unaligned (constant-sum) example:
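For instance (same notation as the matrices above, with every box summing to 4):
4 / 0 ; 0 / 4
0 / 4 ; 4 / 0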
Tabooing “aligned”, what property are you trying to map on a scale of “constant sum” to “common payoff”?
Um… the definition of the normal form game you cited explicitly says that the payoffs are in the form of cardinal or ordinal utilities. Which is distinct from in-game payouts.
Also, too, it sounds like you agree that the strategy your counterparty uses can make a normal form game not count as a “stag hunt” or “prisoner’s dilemma” or “dating game”.
It’s a definitional thing. The definition of utility is “the thing people maximize.” If you set up your 2x2 game to have utilities in the payout matrix, then by definition both actors will attempt to pick the box with the biggest number. If you set up your 2x2 game with direct payouts from the game that don’t include psychic (e.g., “I just like picking the first option given”) or reputational effects, then any concept of alignment is one of:
1. Assume the players are trying for the biggest number; how much will they be attempting to land on the same box?
2. Alignment is completely outside of the game, and is one of the features of the function that converts game payouts to global utility.
You seem to be muddling those two, and wondering “how much will people attempt to land on the same box, taking into account all factors, but only defining the boxes in terms of game payouts?” The answer there is “you can’t,” because people (and computer programs) have wonky, screwed-up utility functions (e.g., (spoiler alert) https://en.wikipedia.org/wiki/Man_of_the_Year_(2006_film)).
Quote: Or maybe we’re playing a game in which the stag hunt matrix describes some sort of payouts that are not exactly utilities. E.g., we’re in a psychology experiment and the experimenter has shown us a 2x2 table telling us how many dollars we will get in various cases—but maybe I’m a billionaire and literally don’t care whether I get $1 or $10 and figure I might as well try to maximize your payout, or maybe you’re a perfect altruist and (in the absence of any knowledge about our financial situations) you just want to maximize the total take, or maybe I’m actually evil and want you to do as badly as possible.
So, if the other player is “always cooperate” or “always defect” or any other method of determining results that doesn’t correspond to the payouts in the matrix shown to you, then you aren’t playing “prisoner’s dilemma” because the utilities to player B are not dependent on what you do. In all these games, you should pick your strategy based on how you expect your counterparty to act, which might or might not include the “in game” incentives as influencers of their behavior.
The function should probably be a function of player A’s alignment with player B; for example, player A might always cooperate and player B might always defect. Then it seems reasonable to consider whether A is aligned with B (in some sense), while B is not aligned with A (they pursue their own payoff without regard for A’s payoff).
That seems to be confused reasoning. “Cooperate” and “defect” are labels we sometimes apply to a 2x2 matrix, and applying those labels changes the payouts. If I get $1 or $5 for picking “A” and $0 or $3 for picking “B” depending on a coin flip, that leads me to a different choice than if A is labeled “defect”, B is labeled “cooperate”, and the payout depends on another person. In the labeled case I get psychic/reputational rewards for cooperating or defecting (which one is better depends on my peer group), but whichever is better, the story equity is worth much more than $5, so my choice is dominated by that, and the actual payout matrix is: pick S: 1000 util or 1001 util; pick T: 2 util or 2 util.
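A toy version of that conversion, with made-up numbers echoing the story-equity point (nothing here is a real utility model):

// Toy conversion from in-game payout to the utility a player actually maximizes.
// The reputation bonus is a made-up number standing in for psychic/story-equity effects.
function globalUtility(gamePayout: number, choseTheSociallyRewardedLabel: boolean): number {
  const reputationBonus = choseTheSociallyRewardedLabel ? 1000 : 0;
  return gamePayout + reputationBonus;
}
// With a large enough reputation term, the label dominates the dollar amounts:
console.log(globalUtility(1, true));  // 1001
console.log(globalUtility(5, false)); // 5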
None of which negates the original question of mapping the 8! possible arrangements of relative payouts in a 2x2 matrix game to some sort of linear scale.
Asking someone to watch a video is rude and filters your audience to “people with enough time to consume content slowly, and an environment that allows audio/streaming”
Since this comment thread is apparently “share what you do to be on time” here’s mine.
I consider it a test of estimation skills to arrive places exactly on time, so I get a little dopamine hit by arriving at the predicted moment. And I can set that target time according to the risk and importance of the event (i.e., I aimed 5 minutes early for swim lessons yesterday, because I wasn’t sure if the drive was 7 or 11 minutes long and being late is bad; I aim 30 minutes early to catch a plane, since missing it by 1 minute is extremely costly; but when going to visit a single counterparty (grandma, a friend) I aim at the time suggested).
But the action needed to avoid/mitigate in those cases is very different, so it doesn’t seem useful to get a feeling for “how far off of ideal are we likely to be” when that is composed of:
1. What is the possible range of AI functionality (as constrained by physics)? - i.e., what can we do?
2. What is the range of desirable outcomes within that range? - i.e., what should we do?
3. How will politics, incumbent interests, etc. play out? - i.e., what will we actually do?
Knowing that experts think we have a (say) 10% chance of hitting the ideal window says nothing about what an interested party should do to improve those chances. It could be “attempt to shut down all AI research” or “put more funding into AI research” or “it doesn’t matter, because the two majority cases are ‘General AI is impossible (40%)’ and ‘General AI is inevitable and will wreck us (50%)’.”
Saying “poor naming” instead of “bad names” would be clearer, since it wouldn’t call up the idea of “bad names” = swear words.
Saying “look in” instead of “open” would also distance it from the AI concept.
See comment below about Intentionality.
English is not Newspeak: there are multiple words for the same basic concept that convey shades of meaning and emotion, and allow for poetic usage that sometimes becomes mainstream.
The normal sourdough recipe is to take some of the starter, mix it with more flour and water, and let it rise/ferment for only 1-2 hours before baking.
Return has more intentionality than Regress.
I Return a purchase, Return to the scene of a crime, or Return to the left side of the page by pressing Enter.
Students’ learning Regresses over the summer, people Regress to a bestial state when hungry, an organized closet Regresses into chaos.
I can see how the choice is architecture-dependent. If you can write something like:
Email(PromotedPosts)
having the function be written without a verb makes sense.
If you have a multi-tier architecture where you want to cache things locally, the code might have to be:
PostList = getPromotedPosts()
I would say the distinction is that if a function takes a long time to go look at a database and do some post-processing, we don’t want to run around using it like a variable, especially if the database might change between one use of the data and the next but we want to keep the results the same. That way, the code can be:
PromotedPosts = getPromotedPosts()
…user clicks a button
Email(PromotedPosts) // this sends the displayed posts, not whatever the promoted ones happen to be at that moment
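A minimal sketch of that pattern (the Post type, the database stub, and the logging are stand-ins; only the getPromotedPosts name and the cache-then-email flow come from the example above):

// Cache the expensive lookup once, then reuse the snapshot later.
type Post = { title: string; promoted: boolean };
// Stand-in for a slow database query whose contents can change over time.
async function queryDatabase(): Promise<Post[]> {
  return [{ title: "A", promoted: true }, { title: "B", promoted: false }];
}
async function getPromotedPosts(): Promise<Post[]> {
  const posts = await queryDatabase();   // expensive round trip
  return posts.filter(p => p.promoted);  // post-processing
}
async function main() {
  const promotedPosts = await getPromotedPosts(); // snapshot taken once
  console.log("displayed:", promotedPosts);       // what the user sees
  // ...user clicks a button later...
  console.log("emailed:", promotedPosts);         // the displayed posts, not whatever is promoted now
}
main();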
Heh, this is why well-written automated tests are so great.
If the test for “are the first 5 posts marked as promoted” existed, there would be an obvious failure when the old wrong code came back into use. Of course it would also throw failures while the Farah post function was active, but that should be bypassed by a date-limited switch. (I.e., update the test case to say:
IF now() < EXCEPTION_END_DATE then return(pass)
…run the test...) That way, when the system should stop doing the Farah thing, there will be an automatic defect thrown against whatever code is actually being run, and it can be corrected.
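One way that date-limited switch could look as an actual test (a Jest-style sketch; the Post type, getFirstFivePosts, and the specific date are illustrative stand-ins, not real project code):

// Jest-style test with a date-limited exception for the Farah-post behavior.
type Post = { title: string; promoted: boolean };
// Illustrative stand-in for however the app exposes its post list.
function getFirstFivePosts(): Post[] {
  return [];
}
const EXCEPTION_END_DATE = new Date("2021-10-01"); // placeholder end date
test("first 5 posts are marked as promoted", () => {
  if (Date.now() < EXCEPTION_END_DATE.getTime()) {
    return; // exception is active; skip the check until the end date
  }
  for (const post of getFirstFivePosts()) {
    expect(post.promoted).toBe(true);
  }
});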
Huh? Aren’t some functions puts? Or calculates?
That test / class example isn’t even a case because the test is instrumental to the goal, it’s not a metric. Your U in this case is “time spent studying”, which you accurately see will be uncorrelated with “graduating” if all students (or all counterfactual “you”s) attempt to optimize it.