Yeah, “transferable utility games” are those where there is a resource and the utilities of all players are linear in that resource (which lets you redenominate everyone’s utility as an amount of that resource, up to a shift factor). I believe the post mentioned this.
Agreed. The bargaining solution for the entire game can be very different from adding up the bargaining solutions for the subgames. If there’s a subgame where Alice cares very much about victory in that subgame (interior decorating choices) and Bob doesn’t care much, and another subgame where Bob cares very much about victory (food choice) and Alice doesn’t care much, then the bargaining solution of the entire relationship game will end up being something like “Alice and Bob get some relative weights on how important their preferences are, and in all the subgames, the weighted sum of their utilities is maximized. Thus, Alice will be given Alice-favoring outcomes in the subgames where she cares the most about winning, and Bob will be given Bob-favoring outcomes in the subgames where he cares the most about winning”.
And in particular, since it’s a sequential game, Alice can notice if Bob isn’t being fair, and enforce the bargaining solution by going “if you’re not aiming for something sorta like this, I’ll break off the relationship”. So, from Bob’s point of view, aiming for any outcome that’s too Bob-favoring has really low utility, since Alice will inevitably catch on. (This is the time-extended version of “give up on achieving any outcome that drives the opponent below their BATNA”.) Basically, in terms of raw utility, it’s still a bargaining game deep down, but once both sides take into account how the other will react, the payoff matrix for the restaurant game (with future interactions factored in) will look like “it’s a really bad idea to aim for an outcome the other party would regard as unfair”.
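To make the weighted-sum picture concrete, here’s a minimal sketch. The utility numbers and the equal weights are invented for illustration, not taken from anything above; the point is just that maximizing a fixed weighted sum of Alice’s and Bob’s utilities in each subgame hands each person the win in the subgames they care most about.

```python
# Toy illustration: two independent subgames; the couple maximizes a fixed
# weighted sum of utilities in each one. All numbers are invented.

# (Alice utility, Bob utility) for each option in each subgame.
subgames = {
    "decorating": {"alice_pick": (10, 4), "bob_pick": (2, 5)},   # Alice cares a lot here
    "food":       {"alice_pick": (5, 2),  "bob_pick": (4, 10)},  # Bob cares a lot here
}

# Relative bargaining weights (however they were settled by the overall bargain).
w_alice, w_bob = 1.0, 1.0

for name, options in subgames.items():
    best = max(options, key=lambda o: w_alice * options[o][0] + w_bob * options[o][1])
    print(name, "->", best)
# decorating -> alice_pick  (Alice wins where she cares most)
# food -> bob_pick          (Bob wins where he cares most)
```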
Actually, they apply anyway in all circumstances, not just after the rescaling and shifting is done! Scale-and-shift invariance means that no matter how you stretch and shift the two axes, the bargaining solution always hits the same probability-distribution over outcomes. So monotonicity means “if you increase the payoff numbers you assign to some or all of the outcomes, the Pareto-frontier point you hit will give you an increased number for your utility score over what it’d be otherwise” (no matter how you scale-and-shift). And independence of irrelevant alternatives says “you can remove any option that you have 0 probability of taking and you’ll still get the same probability-distribution over outcomes as you would in the original game” (no matter how you scale-and-shift).
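As a toy illustration of what these invariances are saying (using the Nash product over pure outcomes as the example solution; the outcome set and numbers are invented, and randomization over outcomes is ignored for brevity):

```python
# Minimal sketch: which outcome a Nash-product bargainer picks is unchanged by
# scale-and-shift of one player's utilities, and by dropping an unchosen option.
# Pure outcomes only; all numbers are invented.

outcomes = {"A": (4.0, 1.0), "B": (3.0, 3.0), "C": (1.0, 4.0)}  # (player 1, player 2) utilities
disagreement = (0.0, 0.0)

def nash_pick(opts, d):
    """Outcome maximizing the product of gains over the disagreement point."""
    return max(opts, key=lambda o: (opts[o][0] - d[0]) * (opts[o][1] - d[1]))

pick = nash_pick(outcomes, disagreement)  # "B"

# Scale-and-shift player 1's utilities (the disagreement payoff shifts the same way):
rescaled = {o: (2.0 * u + 7.0, v) for o, (u, v) in outcomes.items()}
assert nash_pick(rescaled, (7.0, 0.0)) == pick

# Independence of irrelevant alternatives: removing an option that wasn't picked:
reduced = {o: uv for o, uv in outcomes.items() if o != "C"}
assert nash_pick(reduced, disagreement) == pick
```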
If you’re looking for curriculum materials, I believe that the most useful reference would probably be my “Infra-exercises”, a sequence of posts containing all the math exercises you need to reinvent a good chunk of the theory yourself. Basically, it’s the textbook’s exercise section, and working through interesting math problems and proofs on one’s own has a much better learning feedback loop and retention of material than slogging through the old posts. The exercises are short on motivation and philosophy compared to the posts, though, much like how a functional analysis textbook takes for granted that you want to learn functional analysis and doesn’t bother motivating it.
The primary problem is that the exercises aren’t particularly calibrated in terms of difficulty, and in order for me to get useful feedback, someone has to actually work through all of them, so feedback has been a bit sparse. So I’m stuck in a situation where I keep having to link everyone to the infra-exercises over and over and it’d be really good to just get them out and publicly available, but if they’re as important as I think, then the best move is something like “release them one at a time and have a bunch of people work through them as a group” like the fixpoint exercises, instead of “just dump them all as public documents”.
I’ll ask around about speeding up the publication of the exercises and see what can be done there.
I’d strongly endorse linking this introduction even if the exercises are linked as well, because this introduction serves as the table of contents to all the other applicable posts.
So, if you make Nirvana infinite utility, yes, the fairness criterion becomes “if you’re mispredicted, you have any probability at all of entering the situation where you’re mispredicted” instead of “have a significant probability of entering the situation where you’re mispredicted”, so a lot more decision-theory problems can be captured if you take Nirvana as infinite utility. But, I talk in another post in this sequence (I think it was “the many faces of infra-beliefs”) about why you want to do Nirvana as 1 utility instead of infinite utility.
Parfit’s Hitchhiker with a perfect predictor is a perfectly fine acausal decision problem; we can still represent it, it just cannot be represented as an infra-POMDP/causal decision problem.
Yes, the fairness criterion is tightly linked to the pseudocausality condition. Basically, the acausal->pseudocausal translation is the part where the accuracy of the translation might break down, and once you’ve got something in pseudocausal form, translating it to causal form from there by adding in Nirvana won’t change the utilities much.
So, the flaw in your reasoning is that, after updating on being in the city, the agent doesn’t go “logically impossible, infinite utility”. We just go “alright, off-history measure gets converted to 0 utility”, a perfectly standard update. So it updates to (0,0) (ie, there’s a 0 probability of being in this situation in the first place, and the expected utility of not getting into this situation in the first place is 0, because of probably dying in the desert).
As for the proper way to do this analysis, it’s a bit finicky. There’s something called “acausal form”, which is the fully general way of representing decision-theory problems. Basically, you just give an infrakernel that tells you your uncertainty over which history will result, for each of your policies. So, you’d have

K(pay) = 0.99δ(alive, paid) + 0.01δ(die in desert)
K(don’t pay) = 0.99δ(die in desert) + 0.01δ(alive, didn’t pay)
Ie, if you pay, 99 percent chance of ending up alive but paying and 1 percent chance of dying in the desert, if you don’t pay, 99 percent chance of dying in the desert and 1 percent chance of cheating them, no extra utility juice on either one.
You update on the event “I’m alive”. The off-event utility function is like “being dead would suck, 0 utility”. So, your infrakernel updates to (leaving off the scale-and-shift factors, which don’t affect anything)

K(pay) = (0.99δ(alive, paid), +0)
K(don’t pay) = (0.01δ(alive, didn’t pay), +0)
Because, the probability mass on “die in desert” got burned and turned into utility juice, 0 of it since it’s the worst thing. Let’s say your utility function assigns 0.5 utility to being alive and rich, and 0.4 utility to being alive and poor. So the utility of the first policy is 0.99 × 0.4 = 0.396, and the utility of the second policy is 0.01 × 0.5 = 0.005, so it returns the same answer of paying up. It’s basically thinking “if I don’t pay, I’m probably not in this situation in the first place, and the utility of ‘I’m not in this situation in the first place’ is also about as low as possible.”
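For concreteness, here’s that post-update calculation as a few lines of arithmetic, using the same probabilities and utilities as above (since each policy is left with a single a-measure here, it reduces to a plain expected value):

```python
# Reproducing the acausal-form numbers: each policy's remaining on-history mass
# times the utility of being alive, plus 0 "utility juice" for the off-history mass.

U_ALIVE_RICH = 0.5   # alive and didn't pay
U_ALIVE_POOR = 0.4   # alive but paid

def policy_value(p_alive, u_alive):
    return p_alive * u_alive + (1 - p_alive) * 0.0  # off-history mass became 0 utility

print(policy_value(0.99, U_ALIVE_POOR))  # ~0.396 (pay)
print(policy_value(0.01, U_ALIVE_RICH))  # 0.005 (don't pay) -> paying wins
```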
BUT
There’s a very mathematically natural way to translate any decision-theory problem to “causal form”, and as it turns out, the process which falls directly out of the math is that thing where you go “hard-code in all possible policies, go to Nirvana if I behave differently from the hard-coded policy”. This has an advantage and a disadvantage. The advantage is that now your decision-theory problem is in the form of an infra-POMDP, a much more restrictive form, so you’ve got a much better shot at actually developing a practical algorithm for it. The disadvantage is that not all decision-theory problems survive the translation process unchanged. Speaking informally, the “fairness criterion” for translating a decision-theory problem into causal form without too much loss in fidelity is something like “if I was mispredicted, would I actually have a good shot at entering the situation where I was mispredicted, to prove the prediction wrong?”
Counterfactual mugging fits this. If Omega flubs its prediction, you’ve got a 50 percent chance of being able to prove it wrong.
XOR blackmail fits this. If the blackmailer flubs its prediction and thinks you’ll pay up, you’ve got like a 90 percent chance of being able to prove it wrong.
Newcomb’s problem fits this. If Omega flubs its prediction and thinks you’ll 2-box, you’ll definitely be able to prove it wrong.
Transparent Newcomb and Parfit’s Hitchhiker don’t fit this “fairness property” (especially for 100 percent accuracy), and so when you translate them to a causal problem, it ruins things. If the predictor screws up and thinks you’ll 2-box on seeing a filled transparent box / won’t pay up on seeing you got saved, then the transparent box is empty / you die in the desert, and you don’t have a significant shot at proving them wrong.
Let’s see what’s going wrong. Our two a-environments (one for each hard-coded policy) are

e_pay: 0.99δ(city) + 0.01δ(die in desert), where, in the city, paying leads to “alive and poor” and not paying leads to Nirvana
e_don’t-pay: 0.99δ(die in desert) + 0.01δ(city), where, in the city, not paying leads to “alive and rich” and paying leads to Nirvana

Update on the event “I didn’t die in the desert”. Then, neglecting scale-and-shift, our two a-environments are

e_pay: (0.99δ(city), +0)
e_don’t-pay: (0.01δ(city), +0)

Letting N be the utility of Nirvana,

If you pay up, then the expected utilities of these are 0.99 × 0.4 = 0.396 and 0.01 × N.
If you don’t pay up, then the expected utilities of these are 0.99 × N and 0.01 × 0.5 = 0.005.
Now, if N is something big like 100, then the worst-case utilities of the policies are 0.396 vs 0.005, as expected, and you pay up. But if N is something like 1, then the worst-case utilities of the policies are 0.01 vs 0.005, which… well, it technically gets the right answer, but those numbers are suspiciously close to each other; the agent isn’t thinking properly. And so, without too much extra effort tweaking the problem setup, it’s possible to generate decision-theory problems where the agent just straight-up makes the wrong decision after changing things to the causal setting.
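Here’s a sketch of that worst-case comparison, reproducing the numbers above (utilities of 0.4 for alive-and-poor and 0.5 for alive-and-rich, as in the earlier calculation):

```python
# Maximin over the two updated a-environments, with Nirvana worth N.

U_ALIVE_RICH, U_ALIVE_POOR = 0.5, 0.4

def worst_case(pay, N):
    predicted_pay   = 0.99 * (U_ALIVE_POOR if pay else N)   # 0.99 mass on the city
    predicted_stiff = 0.01 * (N if pay else U_ALIVE_RICH)   # 0.01 mass on the city
    return min(predicted_pay, predicted_stiff)

for N in (100, 1):
    print(N, worst_case(True, N), worst_case(False, N))
# N = 100: paying ~0.396 vs not paying 0.005 (clear gap)
# N = 1:   paying 0.01  vs not paying 0.005 (suspiciously close)
```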
Omega and hypercomputational powers aren’t needed, just decent enough prediction about what someone would do. I’ve seen Transparent Newcomb being run on someone before, at a math camp. They were predicted not to take the small extra payoff, and they didn’t. There was also an instance of acausal vote trading that I managed to pull off a few years ago, and I’ve put someone in a counterfactual-mugging sort of scenario where I did pay out, due to predicting they’d take the small loss in a nearby possible world. 2⁄3 of those instances were cases where I was specifically picking people that seemed unusually likely to take this sort of thing seriously, and it was predictable what they’d do.
I guess you figure out the entity is telling the truth in roughly the same way you’d figure out a human is telling the truth? Like “they did this a lot against other humans and their prediction record is accurate”.
And no, I don’t think that you’d be able to get from this mathematical framework to proving “a proof of benevolence is impossible”. What the heck would that proof even look like?
The key piece that makes any Lobian proof tick is the “proof of X implies X” part. For Troll Bridge, X is “crossing implies bridge explodes”.
For standard logical inductors, that Lobian implication holds because, if a proof of X showed up, every trader betting in favor of X would get free money. So there could be a trader that just places a really really big bet in favor of X (it’s proved, after all), the agent ends up believing X, and so doesn’t cross, and so crossing implies bridge explodes.
For this particular variant of a logical inductor, there’s an upper limit on the number of bets a trader is able to make, and this can possibly render the statement “if a proof of X showed up, the agent would believe X” false. And so, the key piece of the Lobian proof fails, and the agent happily crosses the bridge with no issue, because it would disbelieve a proof of bridge explosion if it saw it (and so the proof does not show up in the first place).
Said actions (or lack thereof) cause a fairly low utility differential compared to the actions in other, non-doomy hypotheses. Also, I want to draw a critical distinction between “full Knightian uncertainty over meteor presence or absence”, where your analysis is correct, and “ordinary probabilistic uncertainty between a high-Knightian-uncertainty hypothesis and a low-Knightian-uncertainty one that says the meteor almost certainly won’t happen”. In the latter case, the meteor hypothesis will be ignored unless there’s a meteor-inspired modification to what you do that’s also very cheap in the “ordinary uncertainty” world, like calling your parents, because the meteor hypothesis is suppressed in decision-making by the low expected utility differentials, and we’re maximin-ing expected utility.
Something analogous to what you are suggesting occurs. Specifically, let’s say you assign 95% probability to the bandit game behaving as normal, and 5% to “oh no, anything could happen, including the meteor”. As it turns out, this behaves similarly to the ordinary bandit game being guaranteed, as the “maybe meteor” hypothesis assigns all your possible actions a score of “you’re dead” so it drops out of consideration.
The important aspect which a hypothesis needs, in order for you to ignore it, is that no matter what you do you get the same outcome, whether it be good or bad. A “meteor of bliss hits the earth and everything is awesome forever” hypothesis would also drop out of consideration because it doesn’t really matter what you do in that scenario.
To be a wee bit more mathy, a probabilistic mix of inframeasures works like this. We’ve got a probability distribution ζ over hypotheses, and a bunch of hypotheses h_i, things that take functions f as input and return expectation values. So, your prior, your probabilistic mixture of hypotheses according to your probability distribution, would be the function

f ↦ Σ_i ζ(i)·h_i(f)
It gets very slightly more complicated when you’re dealing with environments, instead of static probability distributions, but it’s basically the same thing. And so, if you vary your actions/vary your choice of function f, and one of the hypotheses is assigning all these functions/choices of actions the same expectation value, then it can be ignored completely when you’re trying to figure out the best function/choice of actions to plug in.
So, hypotheses that are like “you’re doomed no matter what you do” drop out of consideration, an infra-Bayes agent will always focus on the remaining hypotheses that say that what it does matters.
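As a tiny concrete version of this, representing each hypothesis as just a function from an action to its worst-case expected utility (the numbers and names are made up):

```python
# A hypothesis that gives every action the same score shifts all the mixture
# values by a constant, so it can never change which action looks best.

actions = ["lever_A", "lever_B", "call_parents"]

def normal_bandit(a):        # low-Knightian-uncertainty hypothesis, probability 0.95
    return {"lever_A": 0.7, "lever_B": 0.4, "call_parents": 0.5}[a]

def meteor_doom(a):          # "you're dead no matter what you do", probability 0.05
    return 0.0

zeta = [(0.95, normal_bandit), (0.05, meteor_doom)]

def prior_score(a):
    return sum(p * h(a) for p, h in zeta)

assert max(actions, key=prior_score) == max(actions, key=normal_bandit)
# The doom hypothesis drops out of the comparison entirely.
```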
Well, taking worst-case uncertainty is what infradistributions do. Did you have anything in mind that can be done with Knightian uncertainty besides taking the worst-case (or best-case)?
And if you were dealing with best-case uncertainty instead, then the corresponding analogue would be assuming that you go to hell if you’re mispredicted (and then, since best-case things happen to you, the predictor must accurately predict you).
Alright, this is kind of a Special Interest, so here’s your relevant thought dump.
First up, the image is kind of misleading, in the sense that you can always tack on extra orders of magnitude. You could tack on another thousand orders of magnitude and make it look even longer, or just go “this is 900 OOM’s of literally nothing happening, let’s clip that off and focus on the interesting part”.
Assuming proton decay is a thing (that free protons decay with a ridiculously long half-life)....
ok, I was planning on going “as a ludicrous upper bound, here’s the number”, but, uh, the completely ludicrous upper bound wound up being a WHOLE LOT longer than I thought. I… I didn’t even think it was possible to stall till the evaporation of even a small black hole. But this calculation indicates that if you’re aiming solely at living ludicrously long, you can stall about a googol years, enough for even the largest black holes to evaporate, and to get to the end of the black hole era. I’m gonna need to rethink some stuff.
EDIT: rethought some stuff, realized it doesn’t change my conclusions from when I last looked into this. The fundamental problem is that, for any remotely realistic numbers, if you’re trying to catch the final evaporation of a black hole to harvest its mass-energy, you’ll blow a lot more than the amount of mass-energy that you could gain, in order to wait that long.
Final conclusion: If proton decay is a thing, it’s definitely not worth waiting to the end of a black hole, you’ll want to have things wrapped up far earlier. If proton decay isn’t a thing, you’ll want to wait till the black hole evaporates to catch that final party and last 10^19 kg of mass-energy. If proton decay is a thing and you’re willing to blow completely ridiculous cosmic amounts of resources on it, you can last till the late parts of the black hole era.
The rough rationale is as follows. Start with 10x the mass of the largest black holes in the universe, around 10^12 solar masses stockpiled. If they’re spinning fast enough, you can extract energy from them; assume you can extract all of it (it’s over 10 percent, so let’s round it up to 100 percent). Assume that the proton decay half-life is 10^40 years (a high estimate), and that we use the energy at 100 percent efficiency to make matter (also a high estimate). You can take out one proton, wait for around a proton decay time, take out the next proton, and so on. Then you can take out around 10^69 protons, and each one lasts you around 10^40 years, getting around 10^109 years (high uncertainty). And, coincidentally, natural Hawking radiation finishes off a black hole of that size in 10^103 years, leaving a small margin left over for silly considerations like “maybe the intelligence needs more than one proton to physically implement”.
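A rough order-of-magnitude check of that arithmetic, using approximate textbook constants and the same generous assumptions as above (the 10^40-year proton half-life and perfect conversion efficiency):

```python
# Order-of-magnitude check: protons you can stockpile, how long spending one per
# decay time lasts, and the Hawking evaporation time for the same mass.

M_SUN = 2e30                  # kg
M_PROTON = 1.67e-27           # kg
PROTON_DECAY_TIME = 1e40      # years (high estimate)

stockpile = 1e12 * M_SUN                          # ~10^12 solar masses of mass-energy
n_protons = stockpile / M_PROTON                  # ~1e69
stall_time = n_protons * PROTON_DECAY_TIME        # ~1e109 years

# Hawking evaporation time scales as M^3, roughly 2e67 years for one solar mass.
evaporation_time = 2e67 * (stockpile / M_SUN) ** 3    # ~2e103 years

print(f"protons ~{n_protons:.0e}, stall ~{stall_time:.0e} yr, evaporation ~{evaporation_time:.0e} yr")
```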
So, not remotely practical, but maybe something like 10^80 years would actually be doable? That extra 29 OOM’s of wiggle room patches over a lot of sins in this calculation.
But, in terms of what would actually be practical for the far future of humanity, it’d be the strat of “dump as much mass into a fast-spinning black hole as possible. Like, eat the entire Laniakea supercluster complex. Wait a trillion years for the cosmic microwave background radiation to cool to its floor temperature. You’d be in the late Stelliferous era at this point, with a few red dwarfs around, if you didn’t dump all the stars in the mega-black-hole already. Set up some infrastructure around the mega-hole, and use the Blandford-Znajek mechanism to convert the mega-hole’s spin into electrical power. You should be able to get about a gigawatt of power for the next 10^45 years to run a whole lotta computation and a little bit of maintenance, and if proton decay is messing with things, chop however many OOM’s you need off the time and add those OOM’s to the power output. Party for a trillion trillion trillion eons with your super-optimized low-temperature computing infrastructure”.