# The Number Choosing Game: Against the existence of perfect theoretical rationality

In order to ensure that this post delivers what it promises, I have added the following content warnings:

**Content Notes: Pure Hypothetical Situation**: The claim that perfect theoretical rationality doesn’t exist is restricted to a purely hypothetical situation. No claim is being made that this applies to the real world. If you are only interested in how things apply to the real world, then you may be disappointed to find out that this is an exercise left to the reader.

**Technicality Only Post**: This post argues that perfect theoretical rationality doesn’t exist due to a technicality. If you were hoping for this post to deliver more, well, you’ll probably be disappointed.

**Contentious Definition**: This post (roughly) defines perfect rationality as the ability to maximise utility. This is based on Wikipedia, which defines a rational agent as one that “always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions”.

We will define the number choosing game as follows. You name any single finite number x. You then gain x utility and the game then ends. You can only name a finite number, naming infinity is not allowed.

Clearly, the agent that names x+1 is more rational than the agent that names x (and behaves the same in every other situation). However, there does not exist a completely rational agent, because there does not exist a number that is higher than every other number. Instead, the agent who picks 1 is less rational than the agent who picks 2 who is less rational than the agent who picks 3 and so on until infinity. There exists an infinite series of increasingly rational agents, but no agent who is perfectly rational within this scenario.
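The argument can be checked mechanically: for any finite x, x + 1 is also finite and yields strictly more utility, so no finite choice is optimal. A minimal sketch in Python (the `utility` function name is mine, not part of the post):

```python
# Toy model of the number choosing game: naming x yields exactly x utility.

def utility(x):
    """The payoff of the game is simply the number you name."""
    return x

# Whatever finite number an agent names, the agent naming x + 1 does
# strictly better, so no finite choice is optimal.
for x in [1, 10, 10**6, 10**100]:
    assert utility(x + 1) > utility(x)
```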

Furthermore, this hypothetical doesn’t take place in our universe, but in a hypothetical universe where we are all celestial beings with the ability to choose any number, however large, without any additional time or effort, no matter how long it would take a human to say that number. Since this statement doesn’t appear to have been clear enough (judging from the comments), we are explicitly considering a theoretical scenario and no claims are being made about how this might or might not carry over to the real world. In other words, I am claiming that the existence of perfect rationality does not follow purely from the laws of logic. If you are going to be difficult and argue that this isn’t possible and that even hypothetical beings can only communicate a finite amount of information, we can imagine that there is a device that provides you with utility the longer you speak, and that the utility it provides is exactly equal to the utility you lose by having to go to the effort of speaking, so that overall you are indifferent to the required speaking time.

In the comments, MattG suggested that the issue was that this problem assumed unbounded utility. That’s not quite the problem. Instead, we can imagine that you can name any number less than 100, but not 100 itself. Further, as above, saying a long number either doesn’t cost you utility or you are compensated for it. Regardless of whether you name 99 or 99.9 or 99.9999999, you are still making a suboptimal choice. But if you never stop speaking, you don’t receive any utility at all.
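The bounded variant behaves the same way: every legal choice below 100 leaves a strictly better legal choice available, so no choice is optimal even though utility is bounded. An illustrative sketch (mine, not the post’s):

```python
# Bounded variant: name any number strictly less than 100 and gain that
# much utility. Halving the remaining gap always gives a legal,
# strictly better choice, so no legal choice is optimal.
choice = 99.0
for _ in range(10):
    better = choice + (100 - choice) / 2  # still < 100, still improvable
    assert choice < better < 100
    choice = better
```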

I’ll admit that in our universe there is a perfectly rational option which balances speaking time against the utility you gain, given that we only have a finite lifetime and that you want to avoid dying in the middle of speaking the number, which would result in no utility gained. However, it is still notable that a perfectly rational being cannot exist within a hypothetical universe. How this result applies to our universe isn’t exactly clear, but that’s the challenge I’ll set for the comments. Are there any realistic scenarios where the lack of existence of perfect rationality has important practical applications?

Furthermore, there isn’t an objective line between rational and irrational. You or I might consider someone who chose the number 2 to be stupid. Why not at least go for a million or a billion? But, such a person could have easily gained a billion, billion, billion utility. No matter how high a number they choose, they could have always gained much, much more without any difference in effort.

I’ll finish by providing some examples of other games. I’ll call the first game the Exploding Exponential Coin Game. We can imagine a game where you can choose to flip a coin any number of times. Initially you have 100 utility. Every time it comes up heads, your utility triples, but if it comes up tails, you lose all your utility. Furthermore, let’s assume that this agent isn’t going to raise the Pascal’s Mugging objection. We can see that the agent’s expected utility will increase the more times they flip the coin, but if they commit to flipping it unlimited times, they can’t possibly gain any utility. Just as before, they have to pick a finite number of times to flip the coin, but again there is no objective justification for stopping at any particular point.
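To make the coin game concrete: each flip triples your utility with probability 1/2 and zeroes it otherwise, so committing to n flips has expected utility 100 · (3/2)^n, which grows without bound in n, while the probability of surviving all n flips shrinks to zero. A quick sketch (the helper names are mine):

```python
# Exploding Exponential Coin Game: start with 100 utility; each flip
# triples it with probability 1/2 and zeroes it otherwise.

def expected_utility(n, start=100.0):
    """Expected utility of committing to exactly n flips."""
    return start * (0.5 * 3) ** n

def survival_probability(n):
    """Probability that all n flips come up heads."""
    return 0.5 ** n

assert expected_utility(1) == 150.0           # EV rises with every flip...
assert expected_utility(10) > expected_utility(9)
assert survival_probability(100) < 1e-30      # ...but "flip forever" keeps nothing
```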

Another example, I’ll call the Unlimited Swap game. At the start, one agent has an item worth 1 utility and another has an item worth 2 utility. At each step, the agent with the item worth 1 utility can choose to accept the situation and end the game or can swap items with the other player. If they choose to swap, then the player who now has the 1 utility item has an opportunity to make the same choice. In this game, waiting forever is actually an option. If your opponents all have finite patience, then this is the best option. However, there is a chance that your opponent has infinite patience too. In this case you’ll both miss out on the 1 utility as you will wait forever. I suspect that an agent could do well by having a chance of waiting forever, but also a chance of stopping after a high finite number. Increasing this finite number will always make you do better, but again, there is no maximum waiting time.
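Under the stated rules, when both players have finite patience the swap game is decided by whoever gives up first: the more patient player ends up holding the 2-utility item. A small simulation of that finite case (the `play` helper and the `patience` framing are my own, not from the post):

```python
# Sketch of the Unlimited Swap game with two finitely patient agents.
# patience = how many swaps an agent will make before settling for the 1.

def play(patience_a, patience_b):
    """Return (utility_a, utility_b). A starts holding the 1-utility item."""
    holder = "a"                    # whoever currently holds the 1-utility item
    swaps = {"a": 0, "b": 0}
    while True:
        limit = patience_a if holder == "a" else patience_b
        if swaps[holder] >= limit:  # patience exhausted: accept the 1
            return (1, 2) if holder == "a" else (2, 1)
        swaps[holder] += 1          # swap: the other player now holds the 1
        holder = "b" if holder == "a" else "a"

# The more patient agent ends up with the 2-utility item.
assert play(3, 5) == (1, 2)   # A runs out of patience first
assert play(5, 3) == (2, 1)   # B runs out of patience first
```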

(This seems like such an obvious result, I imagine that there’s extensive discussion of it within the game theory literature somewhere. If anyone has a good paper that would be appreciated).

Link to part 2: Consequences of the Non-Existence of Perfect Theoretical Rationality


There is some confusion in the comments over what utility is.

kithpendragon writes:

and Usul writes:

Utility is not a resource. It is not something that you can acquire and then use, or save up and then spend. It is not that sort of thing. It is nothing more than a numerical measure of the value you ascribe to some outcome or state of affairs. The blackjack and hookers, if that’s what you’re into, are the things that you would be specifically seeking by seeking the highest utility, not something you would afterwards get in exchange for some acquired quantity of utility.

Heat Death still comes into play. If you stand there calculating high numbers for longer than that, or mashing on the 9 key, or swapping 1 utilon for 2 (or 2 billion), it never matters. You *still* end up with zero at the end of things.

ETA: If you come back and tell me that “these scenarios assume an unlimited availability of time” or something like that, I’ll ask to see if the dragon in your garage is permeable to flour.

Note that I am not the person making the argument, just clarifying what is meant by “utility”, which in its use around here specifically means that which is constructed by the VNM theorem. I am not a particular fan of applying the concept to universal decision-making.

Are you arguing that all things end, therefore there is no value in anything?

Well, there is precedent:

reviews VNM Theorem

Noted, and thanks for the update. :)

My argument was not meant to imply nihilism, though that is an interesting point. (Aside: Where is the quote from?) Rather, I meant to imply the hidden costs (e.g. time for calculation or input) making the exercise meaningless. As has been argued by several people now, having the Agent be able to state arbitrarily large or accurate numbers, or able to wait an arbitrarily large amount of time without losing any utility is… let’s say problematic. As much so as the likelihood of the Game Master being able to actually hand out utility based on an arbitrarily large/accurate number.

The quotation is from the biblical Book of Ecclesiastes, traditionally (but probably wrongly) ascribed to the allegedly very wise King Solomon.

Heat death is a problem that the builders of the game have to deal with. Every time I type out BB(BB(BB(...))) the builder of the game has to figure out how I can get a noncomputable increase to the degree of the function by which the multiple of my preference for the world increases. If there is some conceivable world with no heat death which I prefer any computable amount more than any world with a heat death (and infinity is not a utility!), then by playing this game I enter such a world.

Not if your current universe ends before you are able to finish specifying the number. Remember: you receive *no* utility before you complete your input.

“If you come back and tell me that ‘these scenarios assume an unlimited availability of time’ or something like that, I’ll ask to see if the dragon in your garage is permeable to flour.”

Not being realistic is not a valid criticism of a theoretical situation if the theoretical situation is not meant to represent reality. I’ve made no claims about how it carries over to the real world.

“Not realistic” isn’t my objection here so much as “moving the goalpost”. The original post (as I recall it from before the edit), made no claim that there was zero cost in specifying arbitrarily large/specific numbers, nor in participating in arbitrarily large numbers of swaps.

It’s been like that from the start. EDIT: I only added in extra clarification.

I certainly make no claims about the perfect quality of my memory. ;)

So, when we solve linear programming problems (say, with the simplex method), there are three possible outcomes: the problem is infeasible (there are no solutions that satisfy the constraints), the problem has at least one optimal value (which is found), or the problem is unbounded.

That is, if your “perfect theoretical rationality” requires there to not be the possibility of unbounded solutions, then your perfect theoretical rationality won’t work and cannot include simple things like LP problems. So I’m not sure why you think this version of perfect theoretical rationality is interesting, and am mildly surprised and disappointed that this was your impression of rationality.

“Cannot include simple things like LP problems”—Well, linear programming problems are simply a more complex version of the number choosing game. In fact, the number choosing game is equivalent to linear programming maximising x with x>0. So, if you want to criticise my definition of rationality for not being able to solve basic problems, you should be criticising it for not being able to solve the number choosing game!

I wouldn’t say it makes this uninteresting though, as while it may seem obvious to you that perfect rationality as defined by utility maximisation is impossible, as you have experience with linear programming, it isn’t necessarily obvious to everyone else. In fact, if you read the comments, you’ll see that many commentators are unwilling to accept this solution and keep trying to insist on there being some way out.

You seem to be arguing that there *must* be some solution that can solve these problems. I’ve already proven that this cannot exist, but if you disagree, what is your solution then?

EDIT: Essentially, what you’ve done is take something “absurd” (that there is no perfect rationality for the number choosing game), reduce it to something less absurd (that there’s no perfect rationality for linear programming) and then declared that you’ve found a reductio ad absurdum. That’s not how it is supposed to work!

I think you’re misunderstanding me. I’m saying that there are problems where the right action is to mark it “unsolvable, because of X” and then move on. (Here, it’s “unsolvable because of unbounded solution space in the increasing direction,” which is true in both the “pick a big number” and “open boundary at 100” case.)

Sure, someone who is objecting that this problem is ‘solvable’ is not using ‘solvable’ the way I would. But someone who is objecting that this problem is ‘unfair’ because it’s ‘impossible’ is starting down the correct path.

I think you have this in reverse. I’m saying “the result you think is absurd is normal in the general case, and so is normal in this special case.”

But if we view this as an actual (albeit unrealistic/highly theoretical) situation rather than a math problem we are still stuck with the question of which action to take. A perfectly rational agent can realize that the problem has no optimal solution and mark it as unsolvable, but afterwards they still have to pick a number, so which number should they pick?

There is no such thing as an actual unrealistic situation.

They do not have to pick a number, because the situation is not real. To say “but suppose it was” is only to repeat the original hypothetical question that the agent has declared unsolved. If we stipulate that the agent is so logically omniscient as to never need to abandon a problem as unsolved, that does not tell us, who are not omniscient, what that hypothetical agent’s hypothetical choice in that hypothetical situation would be.

The whole problem seems to me on a level with “can God make a weight so heavy he can’t lift it?”

UPDATED: If asked whether the problem is solvable, a perfectly rational agent would reply that it isn’t.

If asked what action to take, then the perfectly rational agent is stuck, and therefore finds out it isn’t perfect. Those are two distinct questions. I suppose it all comes down to how you define rationality though.

So, besides the issue of what I will call earlier work, CCC and others have already mentioned that your scenario would allow non-converging expected values as in the St Petersburg paradox. By the usual meaning of utility, which you’ll note is not arbitrary but equivalent to certain attractive axioms, your scenario contradicts itself.

I’ve seen two main solutions offered for the general problem. If we just require bounded utility, there might be something left of the OP—but only with assumptions that appear physically impossible and thus highly suspicious. (Immediately after learning your argument contradicted itself is a bad time to tell us what you think is logically possible!) I tend towards the other option, which says the people complaining about physics are onto something fundamental concerning the probabilities of ever-vaster utilities. This would disintegrate the OP entirely.

“Non-converging expected values”—you can’t conclude that the scenario is contradictory just because your tools don’t work.

As already noted, we can consider the problem where you name any number less than 100, but not 100 itself and gain that much utility, which avoids the whole non-convergence problem.

“This would disintegrate the OP entirely”—as already stated in other comments, claims that my situation isn’t realistic would be a good criticism if I was claiming that the results could be directly applied to the real universe.

Why? It’s a problem without a solution. Would a perfect rational agent say the problem of finding a negative integer that’s greater than 2 is solvable?

Sorry, that was a typo. It was meant to say “isn’t” rather than “is”

The issue here isn’t that rationality is impossible. The issue here is that you’re letting an undefined abstract concept do all your heavy lifting, and taking it places it cannot meaningfully be.

Utilitarianism: Defining “Good” is hard. Math is easy, let X stand in for “Good”, and we’ll maximize X, thereby maximizing “Good”.

So let’s do some substitution. Let’s say apples are good. Would you wait forever for an apple? No? What if we make it so you live forever? No, you’d get bored? What if we make it so that you don’t get bored waiting? No, you have other things that have more value to you? Well, we’ll put you in a (science fiction words) closed time loop, so that no matter how long you spend trading the apple back and forth, you’ll come out without having lost anything? And so on and so forth, until all the countless potential objections are eliminated.

Keep going until all that’s left is one extra apple, and the rational thing to do is to wait forever for an apple you’ll never end up with. One by one, you’ve eliminated every reason *not* to wait forever—why should it surprise you that waiting forever is the correct thing to do, when you’ve gone to so much trouble to make sure that it is the correct thing to do?

Your “What’s the highest number game” is, well, a “What’s the highest number game”. Let’s put this in concrete terms: Whoever names the highest number gets $1,000. There are now two variants of the game: In the first variant, you get an infinite number of turns. I think it’s obvious this is identical to the Apple swapping game. In the second variant, you get exactly one turn to name a number. Apply all the constraints of the Apple swapping game, such that there is no cost to the player for taking longer. Well, the obvious strategy now is to keep repeating the number “9” until you’ve said it more times than your opponent. And we’re back to the apple swapping game. There’s no cost to continuing.

What makes all this seem to break rationality? Because we don’t live in a universe without costs, and our brains are hardwired to consider costs. If you find yourself in a universe without costs, where you can obtain an infinite amount of utility by repeating the number “9” forever, well, keep repeating the number “9” forever, along with everybody else in the universe. It’s not like you’ll ever get bored or have something more important to do.

“Keep going until all that’s left is one extra apple, and the rational thing to do is to wait forever for an apple you’ll never end up with”—that doesn’t really follow. You have to get the apple and exit the time loop at some point or you never get anything.

“If you find yourself in a universe without costs, where you can obtain an infinite amount of utility by repeating the number “9” forever, well, keep repeating the number “9” forever, along with everybody else in the universe.”—the scenario specifically requires you to terminate in order to gain any utility.

But apparently you are not losing utility over time? And holding utility over time isn’t of value to me, otherwise my failure to terminate early is costing me the utility I didn’t take at that point in time? If there’s a lever compensating for that loss of utility then I’m actually gaining the utility I’m turning down anyway!

Basically the only reason to stop at time t1 would be that you will regret not having had the utility available at t1 until t2, when you decide to stop.

“Basically the only reason to stop at time t1 would be that you will regret not having had the utility available at t1 until t2, when you decide to stop.”—In this scenario, you receive the utility when you stop speaking. You can speak for an arbitrarily long amount of time and it doesn’t cost you any utility as you are compensated for any utility that it would cost, but if you never stop speaking you never gain any utility.

Then the “rational” thing is to never stop speaking. It’s true that by never stopping speaking I’ll never gain utility but by stopping speaking early I miss out on future utility.

The behaviour of speaking forever seems irrational, but you have deliberately crafted a scenario where my only goal is to get the highest possible utility, and the only way to do that is to just keep speaking. If you suggest that someone who got some utility after 1 million years is “more rational” than someone still speaking at 1 billion years then you are adding a value judgment not apparent in the original scenario.

Infinite utility is not attainable in this scenario, so never stopping does not achieve the highest possible utility. Continuing to speak is an improvement only given that you do stop at some time. If you continue by never stopping, you get 0 utility, which is lower than speaking a two-digit number.

But time doesn’t end. The criteria of assessment are:

1) I only care about getting the highest number possible.

2) I am utterly indifferent to how long this takes me.

3) The only way to generate this value is by speaking this number (or, at the very least, any other methods I might have used instead are compensated explicitly once I finish speaking).

If your argument is that Bob, who stopped at Graham’s number, is more rational than Jim, who is still speaking, then you’ve changed the terms. If my goal is to beat Bob, then I just need to stop at Graham’s number plus one.

At any given time, t, I have no reason to stop, because I can expect to earn more by continuing. The only reason this looks irrational is we are imagining things which the scenario rules out: time costs or infinite time coming to an end.

The argument “but then you never get any utility” is true, but that doesn’t matter, because I last forever. There is no end of time in this scenario.

If your argument is that in a universe with infinite time, infinite life and a magic incentive button then all everyone will do is press that button forever then you are correct, but I don’t think you’re saying much.

python code of

doesn’t generate a runtime exception when run

similarly

doesn’t assign to utility more than once

in contrast

does assign to utility more than once. With finite iterations these two would be quite interchangeable, but with non-terminating iterations they’re not. The iteration doesn’t need to terminate for this to be true.
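The code snippets in this comment appear to have been lost in the export. The distinction being drawn—a payout assigned once only after the loop stops, versus a running payout assigned on every iteration—can be illustrated with a sketch of my own (all names here are mine, not the commenter’s; a step cap stands in for genuine non-termination so the sketch runs):

```python
# Payout only on stopping: a speaker who never stops is never paid.
# `cap` stands in for "runs forever" so the sketch actually terminates.
def payout_on_stop(steps, cap=1000):
    payouts = []
    n = 0
    while n < min(steps, cap):
        n += 1                      # keep "speaking"; no payout yet
    if steps < cap:                 # only a speaker who stops gets paid
        payouts.append(n)
    return payouts

# Payout on every iteration: there is always a "standing so far".
def payout_each_step(steps):
    return [n for n in range(1, steps + 1)]

assert payout_on_stop(10) == [10]        # stopped: paid exactly once
assert payout_on_stop(10**9) == []       # never stops: never paid
assert len(payout_each_step(10)) == 10   # paid at every step
```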

Say you are in a market and you know someone who sells wheat for $5, someone who buys it for $10, and someone who sells wine for $7, and suppose that you care about wine. If you have a strategy that consists only of buying and selling wheat, you don’t get any wine. There needs to be a “cashout” move of buying wine at least once. Now think of a situation where, when you buy wine, you need to hand over your wheat-dealing licence. A wheat licence means arbitrary amounts of wine, so it would be irrational to ever trade the wheat licence away for a finite amount of wine, right? But then you end up with a wine “maximising strategy” that does so by never buying wine.

Indeed. And that’s what happens when you give a maximiser perverse incentives and infinity in which to gain them.

This scenario corresponds precisely to code of the kind:

```python
newval = 1
oldval = 0
while newval > oldval:
    oldval = newval
    newval = newval + 1
```

Which never terminates. This is only irrational if you want to terminate (which you usually do), but again, the claim that the maximiser never obtains value doesn’t matter because you are essentially placing an outside judgment on the system.

Basically, what I believe you (and the op) are doing is looking at two agents in the numberverse.

Agent one stops at time 100 and gains X utility. Agent two continues forever and never gains any utility.

Clearly, you think, agent one has “won”. But how? Agent two has never failed. The numberverse is eternal, so there is no point at which you can say it has “lost” to agent one. If the numberverse had a non zero probability of collapsing at any point in time then Agent two’s strategy would instead be more complex (and possibly uncomputable if we distribute over infinity), but as we are told that agent one and two exist in a changeless universe and their only goal is to obtain the most utility then we can’t judge either to have won. In fact agent two’s strategy only prevents it from losing, and it can’t win.

That is, if we imagine the numberverse full of agents, any agent which chooses to stop will lose in a contest of utility, because the remaining agents can always choose to stop and obtain their far greater utility. So the rational thing to do in this contest is to never stop.

Sure, that’s a pretty bleak lookout, but as I say, if you make a situation artificial enough you get artificial outcomes.

What you are saying would be optimising in a universe where the agent gets the utility as it says the number. Then the average utility of an ongoer would be greater than that of an idler.

However, if the utility is dished out after the number has been specified, then an idler and an ongoer have exactly the same amount of utility and ought to be equally optimal. 0 is not an optimum of this game, so an agent that ends up with 0 utility is not an optimiser. If you take an agent that is an optimiser in another context, then of course it might not be an optimiser for this game.

There is also the problem that choosing to continue doesn’t yield the utility with certainty, only “almost always”. The ongoer strategy hits precisely the hole in this certainty where no payout happens. I guess you may be able to define a game where the payout happens concurrently with the actions, but this reeks of “the house” having premonition of what the agent is going to do instead of inferring it from its actions. If the rules are “first actions and THEN payout”, you need to complete your action to get a payout.

In the ongoing version I could think of rules where an agent that has said “9.9999...” to 400 digits would receive 0.000...(401 zeroes)...9 utility on the next digit. However, if the agents get utility assigned only once, there won’t be a “standing so far”. This behaviour would then be the perfectly rational thing to do, as there would be a uniquely determined digit to keep on saying. I suspect the trouble is mixing the ongoing version and the dispatch version together inconsistently.

“However, if the utility is dished out after the number has been specified, then an idler and an ongoer have exactly the same amount of utility and ought to be equally optimal. 0 is not an optimum of this game, so an agent that ends up with 0 utility is not an optimiser. If you take an agent that is an optimiser in another context, then of course it might not be an optimiser for this game.”

The problem with this logic is the assumption that there is a “result” of 0. While it’s certainly true that an “idler” will obtain an actual value at some point, so we can assess how they have done, there will never be a point in time that we can assess the ongoer. If we change the criteria and say that we are going to assess at a point in time then the ongoer can simply stop then and obtain the highest possible utility. But time never ends, and we never mark the ongoer’s homework, so to say he has a utility of 0 at the end is nonsense, because there is, by definition, no end to this scenario.

Essentially, if you include infinity in a maximisation scenario, expect odd results.

And the infinite time you have to spend to get that apple, multiplied by the zero cost of the time, is...?

Your mortality bias is showing. “You have to wait an infinite amount of time” is only a meaningful objection when that costs you something.

How would you rate your maths ability?

Better than your philosophic ability.

I can give you solutions for all your sample problems. The apple-swapping problem is a prisoner’s dilemma; agree to split the utilon and get out. The biggest-number problem can be easily resolved by stepping outside the problem framework with a simple pyramid scheme (create enough utilons to create X more entities who can create utility; each entity then creates enough utility to make X entities plus pay its creator three times its creation cost. Creator then spends two thirds of those utilons creating new entities, and the remaining third on itself. Every entity engages in this scheme, ensuring exponentially-increasing utility for everybody. Adjust costs and payouts however you want, infinite utility is infinite utility.) There are sideways solutions for just about any problem.

The problem isn’t that any of your little sample problems don’t have solutions, the problem is that you’ve already carefully eliminated all the solutions you can think of, and will keep eliminating solutions until nobody can think of a solution—if I suggested the pyramid scheme, I’m sure you’d say I’m not allowed to create new entities using my utilons, because I’m breaking what your thought experiment was intended to convey and just showing off.

I bypassed all of that and got to the point—you’re not criticizing rationality for its failure to function in this universe, you’re criticizing rationality for its behavior in radically different universes and the failure of that behavior to conform to basic sanity-checks that only make sense in the universe you yourself happen to occupy.

Rationality belongs to the universe. In a bizarre and insane universe, rational behavior is bizarre and insane, as it should be.

Sorry, I was being rude then.

The problem is:

1) 0 times infinity is undefined, not 0.

2) You are talking about infinity as something that can be reached, when it is only something that can be approached.

These are both very well known mathematical properties.

“If I suggested the pyramid scheme, I’m sure you’d say I’m not allowed to create new entities using my utilons”—If you read Richard Kennaway’s comment, you’ll see that utilons are not what you think they are.

“The apple-swapping problem is a prisoner’s dilemma; agree to split the utilon and get out.”—You may want to read this link. “Likewise, people who respond to the Trolley problem by saying that they would call the police are not talking about the moral intuitions that the Trolley problem intends to explore. There’s nothing wrong with you if those problems are not interesting to you. But fighting the hypothetical by challenging the premises of the scenario is exactly the same as saying, ‘I don’t find this topic interesting for whatever reason, and wish to talk about something I am interested in.’”

Correct. Now, observe that’s you’ve created multiple problems with massive “Undefined” where any optimization is supposed to take place, and then claimed you’ve proven that optimization is impossible.

No, I am not. I never assume anybody ends up with the apple/utilon, for example. There’s just never a point where it makes sense to stop, so you should never stop. If this doesn’t make sense to you and offends your sensibilities, well, quit constructing nonsensical scenarios that don’t match the reality you understand.

They’re not anything at all, which was my point about you letting abstract things do all your heavy lifting for you.

I do believe I already addressed the scenarios you raised.

Very closely related: Stuart Armstrong’s Naturalism versus unbounded (or unmaximisable) utility options from about three years ago.

I think all this amounts to is: there can be situations in which there is no optimal action, and therefore if we insist on defining “rational” to mean “always taking the optimal action” then no agent can be perfectly “rational” in that sense. But I don’t know of any reason to adopt that definition. We can still say, e.g., that one course of action is *more rational* than another, even in situations where no course of action is *most rational*.

“We can still say, e.g., that one course of action is more rational than another, even in situations where no course of action is most rational.”—True.

“But I don’t know of any reason to adopt that definition”—perfect rationality, to me, means being more rational than any other agent. I think that is a reasonable definition.

Seeing as this is an entire article about nitpicking and mathematical constructs...

Surely that should be “at least as rational as any other agent”?

Thanks for this comment. I agree, but can’t be bothered editing.

From my perspective, there’s no contradiction here—or at least, the contradiction is contained within a hidden assumption, much in the same way that the “unstoppable force versus immovable object” paradox assumes the contradiction. An “unstoppable force” cannot logically exist in the same universe as an “immovable object”, because the existence of one contradicts the existence of the other by definition. Likewise, you cannot have a “utility maximizer” in a universe where there is no “maximum utility”—and since you basically equate “being rational” with “maximizing utility” in your post, your argument begs the question.

Ok, let’s say you are right that there does not exist perfect theoretical rationality in your hypothetical game context, with all the assumptions that help to keep the whole game standing. Nice. So what?

Then we can ask whether there are any other situations where perfect theoretical rationality is not possible, because we are now aware that it depends on the rules of the game (instead of assuming automatically that it is always possible).

Exploring the boundary between the games where perfect theoretical rationality is possible and the games where it is impossible could lead to some interesting theoretical results. Maybe.

It is useful to be able to dismiss any preconceptions that perfect decisionmakers can exist, or even be reasoned about. I think this is a very elegant way of doing that.

No. It just says that perfect decisionmakers can’t exist in a world that violates basic physics by allowing people to state even bigger numbers without spending additional time. It doesn’t say that perfect decisionmakers can’t exist in a world that operates under the physics under which our world operates.

The fact that you can construct a possible world in which there are no perfect decisionmakers isn’t very interesting.

“World that violates basic physics”—well the laws of physics are different in this scenario, but I keep the laws of logic the same, which is something.

“The fact that you can construct a possible world in which there are no perfect decisionmakers isn’t very interesting.”

Maybe. This is just part 1 =P.

Spoilers, haha.

I was actually reading this post and trying to find a solution to the coalition problem, where Eliezer wonders how rational agents can solve a problem with the potential for an infinite loop. That led me to what I’ll call the Waiting Game, where you can wait n units of time and gain n utility for any finite n, which then led me to this post.

Suppose instead that the game is “gain n utility”. No need to speak the number, wait n turns, or even to wait for a meat brain to make a decision or comprehend the number.

I posit that a perfectly rational, disembodied agent would decide to select an n such that there exists no higher n. If there is a possible outcome that such an agent prefers over all other possible outcomes, then by the definition of utility such an n exists.

Not quite. There is no reason inherent in the definition that utility has to be bounded.

I’m not convinced. It takes massive amounts of evidence to convince me that the offers in each of your games are sincere and accurate. In particular it takes an infinite amount of evidence to prove that your agents can keep handing out increasing utility/tripling/whatever. When something incredible seems to happen, follow the probability.

I’m reminded of the two-envelope game, where seemingly the player can get more and more money(/utility) by swapping envelopes back and forth. Of course the solution is clear if you assume (any!) prior on the money in the envelopes, and the same is happening if we start thinking about the powers of your game hosts.

“It takes massive amounts of evidence to convince me that the offers in each of your games are sincere and accurate.”—Again, this only works if you assume we are modelling the real world, not perfect celestial beings with perfect knowledge. I have made no claims about whether perfect theoretical rationality can exist in theory in a world with certain “realism” constraints, just that if logic is the only constraint, perfect rationality doesn’t exist in general.

I must admit that I am now confused about the goal of your post. The words ‘perfect celestial beings with perfect knowledge’ sound like they mean something, but I’m not sure if we are trying to attach the same meaning to these words. To most people ‘unlimited’ means something like ‘more than a few thousand’, i.e. really large, but for your paradoxes you need actual mathematical unboundedness (or for the example with the 100, arbitrary accuracy). I’d say that if the closest counterexample to the existence of ‘rationality’ is a world where beings are no longer limited by physical constraints (otherwise this would provide reasonable upper bounds on this utility?) on either side of the scale (infinitely high utility along with infinitely high accuracy, so no atoms?), where for some reason one of such beings goes around distributing free utils and the other has infinitely much evidence that this offer is sincere, we’re pretty safe. Or am I misunderstanding something?

I think the bottom line is that ‘unbounded’, instead of ‘really frickin large’, is a tough bar to pass and it should not carelessly be assumed in hypotheticals.

Well, the idea behind “perfect celestial beings” kind of is to ignore physical constraints.

“I think the bottom line is that ‘unbounded’, instead of ‘really frickin large’, is a tough bar to pass and it should not carelessly be assumed in hypotheticals”—Why? I haven’t actually claimed that the non-existence of perfect rationality within the hypothetical leads to any real world consequences as of yet. Arguing against an argument I haven’t made does nothing.

This seems like another in a long line of problems that come from assuming unbounded utility functions.

Edit: The second game sounds a lot like the St. Petersburg paradox.

Thanks for bringing this up. That isn’t quite the issue here though. Imagine that you can name any number less than 100 and you gain that much utility, but you can’t name 100 itself. Furthermore, there is a device that compensates you for any time spent speaking with something worth equivalent utility. So whether you name 99 or 99.9 or 99.99… there’s always another agent more rational than you.
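The structure of that bounded variant can be sketched in a few lines of Python (my own illustration, not part of the original game description): every admissible choice below 100 is strictly beaten by appending another 9, so no choice is optimal even though utility is bounded.

```python
# Bounded game: name any x < 100 and receive x utility; 100 itself is banned.
def utility(x):
    assert x < 100, "100 and above are not admissible choices"
    return x

def add_a_nine(x):
    """Map an admissible choice to a strictly better admissible one:
    99 -> 99.9 -> 99.99 -> ... (always still below 100)."""
    return x + (100 - x) * 0.9

choice = 99.0
better = add_a_nine(choice)
assert utility(choice) < utility(better) and better < 100
```

Whatever number an agent names, `add_a_nine` exhibits a rival agent who does strictly better, which is all the argument needs.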

Once you make that change, you’re getting into coastline paradox territory. I don’t think that is necessarily a paradox related specifically to decision theory—it’s more of a problem with our math system and the trouble with representing infinitesimals.

It’s not a problem with the math system. It is part of the situation that you aren’t allowed to say 100 minus delta where delta is infinitesimally small. In fact, we can restrict it further and rule that the gamemaker will only accept the number if you list out the digits (and the decimal point if there is one). What’s wrong with perfect rationality not existing? On the other side of the question, on what basis do we believe that perfect rationality does exist?

I actually don’t believe that perfect rationality does exist—but in this case, I think the whole concept of “perfect” is flawed for this problem. You can use the same argument to prove that there’s no perfect cartographer, no perfect shotputter, no perfect (insert anything where you’re trying to get as close as you can to a number without touching it).

As I said, I don’t think it’s proving anything special about rationality—it’s just that this is a problem that we don’t have good language to discuss.

“You can use the same argument to prove that there’s no perfect cartographer, no perfect shotputter, no perfect (insert anything where you’re trying to get as close as you can to a number without touching it).”—Why is that a problem? I don’t think that I am proving too much. Do you have an argument that a perfect shotputter or perfect cartographer does exist?

“As I said, I don’t think it’s proving anything special about rationality”—I claim that if you surveyed the members of Less Wrong, at least 20% would claim that perfect theoretical rationality exists (my guess for actual percentage would be 50%). I maintain that in light of these results, this position isn’t viable.

“We don’t have good language to discuss.”—Could you clarify what the problem with language is?

What is perfect rationality in the context of an unbounded utility function?

Consider the case where utility approaches 100. The utility function there isn’t unbounded, so the issue is something else.

It’s still some weird definitions of perfection when you’re dealing with infinities or infinitesimals.

Maybe it is weird, but nothing that can fairly be called perfection exists in this scenario, even if this isn’t a fair demand.

There exists an irrational number which is 100 minus delta, where delta is infinitesimally small. In my celestial language we call it “Bob”. I choose Bob. Also I name the person who recognizes that the increase in utility between a 9 in the googolplex decimal place and a 9 in the googolplex+1 decimal place is not worth the time it takes to consider its value, and who therefore goes out to spend his utility on blackjack and hookers, as displaying greater rationality than the person who does not.

Seriously, though, isn’t this more of an infinity paradox than an indictment of perfect rationality? There are areas where the ability to calculate mathematically breaks down, e.g. naked singularities, the Uncertainty Principle, and infinity. Isn’t this more the issue at hand: that we can’t be perfectly rational where we can’t calculate precisely?

I didn’t specify in the original problem how the number has to be specified, which was a mistake. There is no reason why the gamemaker can’t choose to only award utility for numbers provided in decimal notation, just as any other competition has rules.

“Also I name the person who recognizes that the increase in utility between a 9 in the googolplex decimal place and a 9 in the googolplex+1 decimal place is not worth the time it takes to consider its value”—we are assuming either a) an abstract situation where there is zero cost of any kind to naming extra digits or b) that the gamemaker compensates the individual for the extra time and effort required to say longer numbers.

If there is a problem here, it certainly isn’t that we can’t calculate precisely. For each number, we know exactly how much utility it gives us.

EDIT: Further, 100 minus delta is not normally considered a number. I imagine that some people might include x as a number, but they aren’t defining the game, so number means what mathematicians in our society typically mean by (real) number.

I’m just not convinced that you’re saying anything more than “Numbers are infinite” and finding a logical paradox within. You can’t state the highest number because it doesn’t exist. If you postulate a highest utility which is equal in value to the highest number times utility 1, then you have postulated a utility which doesn’t exist. I cannot choose that which doesn’t exist. That’s not a failure of rationality on my part any more than Achilles’ inability to catch the turtle is a failure of his ability to divide distances.

I see I made Bob unnecessarily complicated. Bob = 99.9 Repeating (sorry don’t know how to get a vinculum over the .9) This is a number. It exists.

It is a number; it is also known as 100, which we are explicitly not allowed to pick (0.9 repeating = 1, so 99.9 repeating = 100).
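For readers unconvinced by that identity, the usual algebraic argument (a standard derivation, not from the thread) runs:

```latex
\begin{aligned}
x &= 99.999\ldots \\
10x &= 999.999\ldots \\
10x - x &= 900 \\
x &= 100
\end{aligned}
```

So “Bob” is simply 100 under another name, and is excluded by the rules of the game.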

In any case, I think casebash successfully specified a problem that doesn’t have any optimal solutions (which is definitely interesting) but I don’t think that is a problem for perfect rationality anymore than problems that have more than one optimal solution are a problem for perfect rationality.

I was born a non-Archimedean and I’ll die a non-Archimedean.

“0.99 repeating = 1” I only accept that kind of talk from people with the gumption to admit that the quotient of any number divided by zero is infinity. And I’ve got college calculus and 25 years of not doing much mathematical thinking since then to back me up.

I’ll show myself out.

I’m kind of defining perfect rationality as the ability to maximise utility (more or less). If there are multiple optimal solutions, then picking any one maximises utility. If there is no optimal solution, then picking none maximises utility. So this is problematic for perfect rationality as defined as utility maximisation, but if you disagree with the definition, we can just taboo “perfect rationality” and talk about utility maximisation instead. In either case, this is something people often assume exists without even realising that they are making an assumption.

That’s fair, I tried to formulate a better definition but couldn’t immediately come up with anything that sidesteps the issue (without explicitly mentioning this class of problems).

When I taboo perfect rationality and instead just ask what the correct course of action is, I have to agree that I don’t have an answer. Intuitive answers to questions like “What would I do if I actually found myself in this situation?” and “What would the average intelligent person do?” are unsatisfying because they seem to rely on implicit costs to computational power/time.

On the other hand I can also not generalize this problem to more practical situations (or find a similar problem without optimal solution that would be applicable to reality) so there might not be any practical difference between a perfectly rational agent and an agent that takes the optimal solution if there is one and explodes violently if there isn’t one. Maybe the solution is to simply exclude problems like this when talking about rationality, unsatisfying as it may be.

In any case, it is an interesting problem.

This statement is not necessarily true when there is no optimal solution because the solutions are part of an infinite set of solutions. That is, it is not true in exactly the situation described in your problem.

Sorry, that was badly phrased. It should have been: “If there is no optimal solution, then no matter what solution you pick you won’t be able to maximise utility”

Regardless of what number you choose, there will be another agent who chooses a higher number than you and hence who does better at the task of utility optimising than you do. If “perfectly rational” means perfect at optimising utility (which is how it is very commonly used), then such a perfect agent does not exist. I can see the argument for lowering the standards of “perfect” to something achievable, but lowering it to a finite number would result in agents being able to outperform a “perfect” agent, which would be equally confusing.

Perhaps the solution is to taboo the word “rational”. It seems like you agree that there does not exist an agent that scores maximally. People often talk about utility-maximising agents, which assumes it is possible to have an agent which maximises utility, which isn’t true in some situations. That is the assumption I am trying to challenge, regardless of whether we label it perfect rationality or something else.

Let’s taboo “perfect”, and “utility” as well. As I see it, you are looking for an agent who is capable of choosing The Highest Number. This number does not exist. Therefore it can not be chosen. Therefore this agent can not exist. Because numbers are infinite. Infinity paradox is all I see.

Alternately, letting “utility” back in, in a universe of finite time, matter, and energy, there does exist a maximum finite utility which is the sum total of the time, matter, and energy in the universe. There will be a number which corresponds to this. Your opponent can choose a number higher than this, but he will find the utility he seeks does not exist.

Why can’t my utility function be:

0 if I don’t get ice cream

1 if I get vanilla ice cream

infinity if I get chocolate ice cream

?

I.e. why should we forbid a utility function that returns infinity for certain scenarios, except insofar that it may lead to the types of problems that the OP is worrying about?

I was bringing the example into the presumed finite universe in which we live, where Maximum Utility = The Entire Universe. If we are discussing a finite-quantity problem then infinite quantity is ipso facto ruled out.

I think Nebu was making the point that while we normally use utility to talk about a kind of abstract gain, computers can be programmed with an arbitrary utility function. We would generally put certain restraints on it so that the computer/robot would behave consistently, but those are the only limitations. So even if there does not exist such a thing as infinite utility, a rational agent may still be required to solve for these scenarios.

I guess I’m asking “Why would a finite-universe necessarily dictate a finite utility score?”

In other words, why can’t my utility function be:

0 if you give me the entire universe minus all the ice cream.

1 if you give me the entire universe minus all the chocolate ice cream.

infinity if I get chocolate ice cream, regardless of how much chocolate ice cream I receive, and regardless of whether the rest of the universe is included with it.

“You are looking for an agent who is capable of choosing The Highest Number”—the agent wants to maximise utility, not to pick the highest number for its own sake, so that is misrepresenting my position. If you want to taboo utility, let’s use “lives saved” instead. Anyway, you say “Therefore this agent (the perfect life maximising agent) can not exist”, which is exactly what I was concluding. Concluding the exact same thing as I concluded supports my argument; it doesn’t contradict it as you seem to think it does.

“Alternately, letting “utility” back in, in a universe of finite time, matter, and energy, there does exist a maximum finite utility”—my argument is that there does not exist perfect rationality within the imagined infinite universe. I said nothing about the actual, existing universe.

Sorry, I missed that you postulated an infinite universe in your game.

I don’t believe I am misrepresenting your position. “Maximizing utility” is achieved by, and therefore can be defined as, “choosing the highest number”. The wants of the agent need not be considered. “Choosing the highest number” is an example of “doing something impossible”. I think your argument breaks down to “An agent who can do the impossible can not exist” or “It is impossible to do the impossible”. I agree with this statement, but I don’t think it tells us anything useful. I think, but I haven’t thought it out fully, that it is the concept of infinity that is tripping you up.

What you’ve done is take my argument and transform it into an equivalent obvious statement. That isn’t a counter-argument. In fact, in mathematics, it is a method of proving a theorem.

If you read the other comments, then you’ll see that other people disagree with what I’ve said (and in a different manner than you), so I’m not just stating something obvious that everyone already knows and agrees with.

“What you’ve done is take my argument and transform it into an equivalent obvious statement. That isn’t a counter-argument. In fact, in mathematics, it is a method of proving a theorem. If you read the other comments, then you’ll see that other people disagree with what I’ve said” You’re welcome? Feel free to make use of my proof in your conversations with those guys. It looks pretty solid to me.

If a Perfect Rational Agent is one who can choose Maximum Finite Utility, and Utility is numerically quantifiable and exists in infinite quantities, and the Agent must choose the quantity of Utility by finite number, then no such agent can exist. Therefore a Perfect Rational Agent does not exist in all possible worlds.

I suppose I’m agreeing but unimpressed. Might could be this is the wrong website for me. Any thought experiment involving infinity does run the risk of sounding dangerously close to Theology to my ears. Angels on pinheads and such. I’m not from around here and only dropped in to ask a specific question elsewhere. Cheers.

“Lives saved” is finite within a given light cone.

A very specific property of our universe, but not universes in general.

Just as an aside, no there isn’t. Infinitesimal non-zero numbers can be defined, but they’re “hyperreals”, not irrationals.

An update to this post: It appears that this issue has been discussed before in the thread Naturalism versus unbounded (or unmaximisable) utility options. The discussion there didn’t end up drawing the conclusion that perfect rationality doesn’t exist, so I believe this current thread adds something new.

Instead, the earlier thread considers the Heaven and Hell scenario, where you can spend X days in Hell to get the opportunity to spend 2X days in Heaven. Most of the discussion on that thread related to the limit on how many days an agent can count to, so as to exit at some point. Stuart Armstrong also comes up with the same solution for demonstrating that this problem isn’t related to unbounded utility.
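The Heaven and Hell structure can be sketched as follows (a minimal illustration; the per-day utilities of −1 for Hell and +1 for Heaven are my assumption, chosen only to show the shape of the problem):

```python
def payoff(days_in_hell):
    """Net utility of exiting after `days_in_hell` days: each day in Hell
    costs 1 utility and buys 2 days in Heaven at +1 utility each."""
    return 2 * days_in_hell - days_in_hell

# Exiting one day later always beats exiting now, so no stopping time is
# optimal -- yet the agent who never exits stays in Hell forever.
assert all(payoff(n + 1) > payoff(n) for n in range(1000))
```

This mirrors the number choosing game: a strictly improving sequence of choices with no best element.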

Qiaochu Yuan summarises one of the key takeaways: “This isn’t a paradox about unbounded utility functions but a paradox about how to do decision theory if you expect to have to make infinitely many decisions. Because of the possible failure of the ability to exchange limits and integrals, the expected utility of a sequence of infinitely many decisions can’t in general be computed by summing up the expected utility of each decision separately.”

Kudos to Andreas Giger for noticing what most of the commentators seemed to miss: “How can utility be maximised when there is no maximum utility? The answer of course is that it can’t.” This is incredibly close to stating that perfect rationality doesn’t exist, but it wasn’t explicitly stated, only implied.

Further, Wei Dai’s comment on a randomised strategy that obtains infinite expected utility is an interesting problem that will be addressed in my next post.

Okay, so if by ‘perfect rationality’ we mean “ability to solve problems that don’t have a solution”, then I agree, perfect rationality is not possible. Not sure if that was your point.

I’m not asking you, for example, to make a word out of the two letters Q and K, or to write a program that will determine if an arbitrary program halts.

Where rationality fails is that there is always another person who scores higher than you, even though nothing was stopping you from scoring the same or higher. Such a program is more rational than you in that situation, and there is another program more rational than it, and so on to infinity. That there is no maximally rational program, only successively more rational programs, is a completely accurate way of characterising the situation.

Seems like you are asking me to (or at least judging me as irrational for failing to) say a finite number such that I could not have said a higher number despite having unlimited time and resources. That is an impossible task.

I’m arguing against perfect rationality as defined as the ability to choose the option that maximises the agent’s utility. I don’t believe that this is at all an unusual way of using the term. But regardless, let’s taboo perfect rationality and talk about utility maximisation. There is no utility maximiser for this scenario because there is no maximum utility that can be obtained. That’s all that I’m saying, nothing more, nothing less. Yet people often assume that such a perfect maximiser (aka perfectly rational agent) exists without even realising that they are making an assumption.

Oh. In that case, I guess I agree.

For some scenarios that have unbounded utility there is no such thing as a utility maximizer.

I think the scenario requires unbounded utility and unlimited resources to acquire it.

I think the key is infinite vs finite universes. Any conceivable finite universe can be arranged in a finite number of states, one, or perhaps several of which, could be assigned maximum utility. You can’t do this in universes involving infinity. So if you want perfect rationality, you need to reduce your infinite universe to just the stuff you care about. This is doable in some universes, but not in the ones you posit.

In our universe, we can shave off the infinity, since we presumably only care about our light cone.

Moving to Discussion.

I’m curious, do you disagree with the post?

I believe that the point being made is a) overwhelmingly supported by logic, or at the very least a logically consistent alternative viewpoint, b) important to rationality (by preventing people from trying to solve problems with no solution), and c) overlooked in previous discussion, or at least underdeveloped.

Because of this I took the socially risky gambit of moving a low voted (positive at the time) post to main.

This appears to be strongly related to the St. Petersburg Paradox—except that the prize is in utility instead of cash, and the player gets to control the coin (this second point significantly changes the situation).

To summarise the paradox—imagine a pot containing $2 and a perfectly fair coin. The coin is tossed repeatedly. Every time it lands tails, the pot is doubled; when it eventually lands heads, the player wins the entire pot. (With a fair coin, this leads to an infinite expected payoff—of course, giving the player control of the coin invalidates the expected-value calculation.)

Pre-existing extensive discussion probably references (or even talks about) the St. Petersburg Paradox—that might be a good starting point to find it.
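The divergence in the classic version is easy to verify numerically (a standard computation, added here for illustration; the dollar amounts follow the description above):

```python
def st_petersburg_partial_ev(n_terms):
    """Expected payoff of the St. Petersburg game truncated after n_terms
    tosses: heads first occurs on toss k with probability 2**-k and pays
    the pot, which has doubled to 2**k dollars by then."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

# Every term contributes exactly $1, so the truncated expectation grows
# without bound as more tosses are allowed.
assert st_petersburg_partial_ev(50) == 50.0
```

Each extra permitted toss adds another dollar of expectation, which is why the untruncated expected value is infinite.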

Define a “dominant decision” as an action such that no other option would result in bigger utility.

Then we could define an agent to be perfect if it chooses the dominant decision out of its options whenever it exists.

We could also define a dominant agent whose choice is always the dominant decision.

A dominant agent can’t play the number naming game, whereas a perfect agent isn’t constrained to pick a unique one.

You might be assuming that when options have utility values that are not all equal, there is a dominant decision. For finite option palettes this might be the case.

Define a “sucker” option to be an option with a lower utility value than some other possible choice.

A dominant decision is never a sucker option, but a perfect agent might end up choosing a sucker option. In the number naming game, every option is a sucker option.

Thus “winning” is different from “not losing”.

I would argue that a perfect agent can never choose a “sucker” option (edit:) and still be a perfect agent. It follows straight from my definition. Of course, if you use a different definition, you’ll obtain a different result.

That is why the dominant agent can’t play the number naming game: it can’t choose any of the options.

If it’s unwinnable by design, can it strictly be called a game?

It isn’t so much the fact that you don’t obtain the (non-existent) maximum that is important; it’s the fact that another agent beats you when nothing was stopping you beating that agent.

Is utility zero-sum in this scenario? If I’m hyper-ultra-happy, and my neighbor is extra-super-über-mega-happy, that does not necessarily mean he beat me.

Okay, let me restate it. It’s the fact that a different version of yourself that chose the same number as your neighbour would have done better than you did. Nothing to do with zero-sum.

Why not just postulate a universe where A>B>C>A and ask the decision maker to pick the letter with the highest value? What we think of as rational doesn’t necessarily work in other universes.

Postulating a universe like that is to postulate different laws of logic. I don’t think most people expect rationality to work in universes with different laws of logic.

What I postulated though, maintains the same laws of logic, but postulates different abilities, such as the ability to instantly conceive of and communicate arbitrarily large numbers. This is the kind of universe that rationality should still be able to function in, because rationality only depends on logic (plus some kind of goal that is taken axiomatically).

Further, if you don’t want to accept these abilities, we can imagine a magical device that compensates for any time/effort required in picking a larger number.

It’s all connected. You probably need different laws of logic to get a magical device or to allow for people to “instantly conceive of and communicate arbitrarily large numbers.” See EY’s Universal Fire where he wrote “If you stepped into a world where matches failed to strike, you would cease to exist as organized matter. Reality is laced together a lot more tightly than humans might like to believe.”

The laws of logic don’t prohibit minds with infinite states from existing, nor do they prohibit waves with infinite frequencies existing, nor eyes that can detect infinite variation in frequency. These aren’t properties of our world, but they don’t contradict logic. “It’s all connected somehow, but I can’t show how”—seems a bit like magical thinking.

The problem goes away if you add finiteness in any of a bunch of different places: restrict agents to only output decisions of bounded length, or to only follow strategies of bounded length, or expected utilities are constrained to finitely many distinct levels. (Making utility a bounded real number doesn’t work, but only because there are infinitely many distinct levels close to the bound).

The problem also goes away if you allow agents to output a countable sequence of successively better decisions, and define an optimal sequence as one such that for any possible decision, a decision at least that good appears somewhere in the sequence. This seems like the most promising approach.
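For the number choosing game, such a sequence is easy to exhibit (my own sketch of the proposal, not code from the thread): the sequence 1, 2, 3, … contains, for any possible decision x, a decision at least as good.

```python
import math
from itertools import count, islice

def optimal_sequence():
    """Successively better decisions for the number choosing game."""
    yield from count(1)  # 1, 2, 3, ...

def matched_or_beaten(x):
    """For any possible decision x, some element of the sequence is at
    least as good: ceil(x) appears at position ceil(x) - 1."""
    idx = max(math.ceil(x), 1)
    element = next(islice(optimal_sequence(), idx - 1, idx))
    return element >= x

assert matched_or_beaten(3.7)
assert matched_or_beaten(10 ** 6)
```

The sequence never terminates in a single chosen number, which is exactly the move that sidesteps the “no optimal decision” obstacle.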

Except that isn’t the problem and this post isn’t intended to address practicalities, so the criticism that this is unrealistic is irrelevant.

I would like to extract the meaning of your thought experiment, but it’s difficult because the concepts therein are problematic, or at least I don’t think they have quite the effect you imagine.

If I were asked (by whom?) to play this game, in the first place I would only be able to attach some probability less than 1 to the idea that the master of the game is actually capable of granting me arbitrarily astronomical utility, and likely to do so. A tenet of the “rationality” that you are calling into question is that 0 and 1 are not probabilities, so if you postulate absolute certainty in your least convenient possible world, your thought experiment becomes very obscure.

E.g. what about a thought experiment in a world where 2+2=5, and also 2+2=4 as well; I might entertain such a thought experiment, but (absent some brilliant insight which would need to be supplied in addition) I would not attach importance to it, in comparison to thought experiments that take place in a world more comprehensible and similar to our own.

Now when I go ahead and attach a probability less than 1—even if it be an extremely high probability—to the idea that the game works just as described, I would become seriously confused by this game, because the definition of a utility function is:

“A utility function assigns numerical values (“utilities”) to outcomes, in such a way that outcomes with higher utilities are always preferred to outcomes with lower utilities.”

Yet my utility function would, according to my own (meta-...) reflection, with a separate high probability, differ from the utility function that the game master claims I have.

To resolve the confusion in question, I would have to (or would in other terms) resolve confusions that have been described clearly on LessWrong and are considered to be the point at which the firm ground of 21st century human rationality meets speculation. So yes, our concept of rationality has admitted limits; I don’t believe your thought experiment adds a new problematic that isn’t implied in the Sequences.

Bearing in mind that my criticism of your thought experiment as described stands, I’ll add that a short story I once read comes to mind. In the story, a modern human finds himself in a room in which the walls are closing in; in the centre of the room is a model with some balls and cup-shaped holders, and in the corner a skeleton of a man in knight’s armour. Before he is trapped and suffers the fate of his predecessor, he successfully rearranges the balls into a model of the solar system, gaining utility because he has demonstrated his intelligence (or the scientific advancement of his species) as the alien game master in question would have wished.

If I were presented with a game of this kind, my first response would be to negotiate with the game master if possible and ask him pertinent questions, based on the type of entity he appears to be. If I found that it were in my interests to name a very large number, depending on context I would choose from the following responses:

I have various memories of contemplating the vastness of existence. Please read the most piquant such memory, which I am sure is still encoded in my brain, and interpret it as a number. (Surely “99999...” is only one convenient way of expressing a number or magnitude)

“The number of greatest magnitude that (I, you, my CEV...) (can, would...) (comprehend, deem most fitting...)”

May I use Google? I would like to say “three to the three...” in Knuth’s up-arrow notation, but am worried that I will misspell it and thereby fail according to the nature of your game.

Googolplex
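Knuth’s up-arrow notation mentioned above grows quickly enough to make the point about compact ways of naming large numbers. A minimal sketch (the helper `up_arrow` is my own illustration, not part of the thread):

```python
def up_arrow(a, n, b):
    """Compute a ^(n arrows) b in Knuth's up-arrow notation.

    One arrow is plain exponentiation; each additional arrow
    iterates the previous operation b times.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^(3^3) = 7625597484987
# up_arrow(3, 3, 3) -- "three to the three" with three arrows --
# is a power tower of 3s over 7 trillion levels tall, far beyond
# a googolplex and far too large to ever evaluate.
```

This is why the commenter worries about misspelling the notation: adding a single arrow changes the named number beyond recognition.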

“Now when I go ahead and attach a probability less than 1—even if it be an extremely high probability—to the idea that the game works just as described”—You are trying to apply realistic constraints to a hypothetical situation that is not intended to be realistic, and about which no claims have been made (as of yet) that the results carry over to the real world. Taking down an argument I haven’t made doesn’t accomplish anything.

The gamesmaster has no desire to engage with any of your questions or your attempts to avoid directly naming a number. He simply tells you to just name a number.

Your thought experiment, as you want it to be interpreted, is too unrealistic for it to imply a new and surprising critique of Bayesian rationality in our world. However, the title of your post implies (at least to me) that it does form such a critique.

If we interpret the thought experiment as happening in a world similar to our own—which I think is more interesting than an incomprehensible world where the 2nd law of thermodynamics does not exist and the Kolmogorov axioms don’t hold by definition—I would be surprised that such a gamesmaster would view Arabic numerals as the only or best way to communicate an arbitrarily large number. This seems, to me, like a primitive human thought that’s very limited in comparison to the concepts available to a superintelligence which can read a human’s source code and take measurements of the neurons and subatomic particles in his brain. As a human playing this game I would, unless told otherwise in no uncertain terms, try to think outside the limited-human box, both because I believe this would allow me to communicate numbers of greater magnitude and because I would expect the gamesmaster’s motive to include something more interesting, and humane and sensible, than testing my ability to recite digits for an arbitrary length of time.

There’s a fascinating tension in the idea that the gamesmaster is an FAI, because he would bestow upon me arbitrary utility, yet he might be so unhelpful as to have me recite a number for billions of years or more. And what if my utility function includes (timeless?) preferences that interfere with the functioning of the gamesmaster or the game itself?

“However, the title of your post”—titles need to be short so they can’t convey all the complexity of the actual situation.

“Which I think is more interesting”—To each their own.

Let’s assume that the being that is supposed to find a strategy for this scenario operates in a universe whose laws of physics can be specified mathematically. Given this scenario, it will try to maximize the number it outputs. Its output cannot possibly surpass the maximum finite number that can be specified using a string no longer than its universe’s specification, so it need not try to surpass it, but it might come pretty close. Therefore, for each such universe, there is a best rational actor.

Edit: No, wait. Umm, you might want to find the error in the above reasoning yourself before reading on. Consider the universe with an actor for every natural number that always outputs that number. The above argument says that no actor from that universe could output a bigger number than can be specified using a string no longer than the laws of physics of the universe, but that only holds if the laws of physics include a pointer to that actor: to extract the number 100 from that universe, we need to know that we want to look at the hundredth actor. But your game didn’t require that. Inside the universe, each actor knows that it is itself without any global pointers, and so there can be an infinite hierarchy of better-than-the-previous rational actors in a finitely specified universe.
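The “infinite hierarchy of better-than-the-previous rational actors” can be made concrete. A toy sketch (my own construction, assuming agents are just number-naming functions):

```python
def make_agent(n):
    """An agent that, asked to play the number choosing game, names n."""
    return lambda: n

def better_agent(agent):
    """Given any agent, build one that names a strictly larger number."""
    return make_agent(agent() + 1)

a = make_agent(100)
b = better_agent(a)
print(a(), b())  # 100 101
```

Since `better_agent` applies to every agent, including its own outputs, no member of the hierarchy is maximal; that is exactly the post’s claim that no perfectly rational agent exists for this game.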

Any finite universe will have a best such actor, but is our universe even finite? Besides, this was purposefully set in an infinite universe.

Finitely specified universe, not finite universe. That said, until the edit I had failed to realize that the diagonalization argument I used to disallow an infinite universe to contain an infinite hierarchy of finite actors doesn’t work.

For the Unlimited Swap game, are you implicitly assuming that the time spent swapping back and forth has some small negative utility?

No. There’s no utility lost. But if you both wait forever, no-one gets any utility.

Can you define “forever” in this scenario? I thought it was infinite, so there is no such thing.

You can communicate any finite number instantly. Or, in the second version, you can’t, but you are compensated for any time spent; you only receive the utility when you halt.

You are right, theory is overrated. Just because you don’t have a theoretical justification for commencing an action doesn’t mean that the action isn’t the right action to take if you want to try to “win.” Of course, it is very possible to be in a situation where “winning” is inherently impossible, in which case you could still (rationally) attempt various strategies that seem likely to make you better off than you would otherwise be...

As a practicing attorney, I’ve frequently encountered real-life problems similar to the above. For example, in a negotiation on behalf of a client, there is often what’s called a “bargaining zone” that represents a range of options for possible “deals” that both parties are theoretically willing to accept. Any given “deal” would be Pareto Efficient, and any “deal” within the “bargaining zone,” if it takes place, would make both parties to the negotiation better off than they were before. However, it is possible to strike a superior deal for your client if you are more aggressive and push the terms into the “upper” range of the bargaining zone. On the other hand, you don’t typically know the extent of the “bargaining zone” before you begin negotiations. If you are TOO aggressive and push outside of the range of the other party’s acceptable options, the other party/counsel might get frustrated with you and call off the negotiations entirely, in which case you will lose the deal for everyone and make your client angry with you.

To the extent “winning” is possible here, the strategy for attorneys on both sides is to push the terms of the “deal” as close as possible to the “edge” of what the other will accept without pushing too far and getting the talks called off. Although there are reasonable strategies to the process, very often there isn’t a theoretical “optimally rational strategy” for “winning” a negotiation—you just have to play the game and make your strategic decisions based on new information as it becomes available.

There is an optimal strategy for negotiation. It requires estimating the negotiation zone of the other party and the utility of various outcomes (including failure of negotiation).

Then it’s just a strategy that maximizes the sum of the probability of each outcome times the utility thereof.

The hard part isn’t summing the P(X1)U(X1) terms; it’s getting the P(X1) and U(X1) in the first place.
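The expected-utility calculation being discussed is just a weighted sum. A small illustration with invented negotiation numbers (the probabilities and utilities here are made up, which is precisely the commenter’s point about where the difficulty lies):

```python
def expected_utility(outcomes):
    """Sum of P(x) * U(x) over a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical negotiation: aggressive deal lands, modest deal lands,
# or talks collapse. Estimating these P's and U's is the hard part.
outcomes = [(0.3, 100.0), (0.5, 60.0), (0.2, -10.0)]
print(expected_utility(outcomes))  # 58.0
```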

My gut response to the unbounded questions is that a perfectly rational agent would already know (or have a good guess as to) the maximum utility that it could conceivably expect to use within the limit of the expected lifespan of the universe.

There is also an economic objection; at some point it seems right to expect the value of every utilon to decrease in response to the addition of more utilons into the system.

In both objections I’m approaching the same thing from different angles: the upper limit on the “unbounded” utility in this case depends on how much the universe can be improved. The question of how to achieve maximum utility in those scenarios is malformed, much as asking about the end state of affairs after completing certain supertasks is malformed.

“More context is needed. I suspect the same is also true for the Unlimited Swap scenario.”—The point of utilons is to scale linearly, unlike, say, dollars. Maybe there’s a maximum utility that can be obtained, but they never scale non-linearly. The task where you can name any number below 100, but not 100 itself, avoids these issues though.

I don’t understand your objection to the Unlimited Swap scenario, but isn’t it plausible that a perfectly rational agent might not exist?

That task still has the issue that the agent incurs some unstated cost (probably time) to keep mashing on the 9 key (or whatever input method). At some point, the gains are nominal and the agent would be better served collecting utility in the way it usually does. Same goes for the Unlimited Swap scenario: the agent could better spend its time by instantly taking the 1 utilon and going about its business as normal, thus avoiding a stalemate (condition where nobody gets any utility) with 100% certainty.

Is it plausible that a perfectly rational agent might not exist? Certainly. But I hardly think these thought exercises prove that one is not possible. Rather, they suggest that when working with limited information we need a sane stopping function to avoid stalemate. Some conditions have to be “good enough”… I suppose I object to the concept of “infinite patience”.

“Everything exists in context”—True, everything does exist in context. And the context being considered here is not the real world, but behaviour in a purely theoretically constructed world. I have made no claims that it corresponds to the real world as of yet, so claiming that it doesn’t correspond to the real world is not a valid criticism.

My criticism is that you have either set up a set of scenarios with insufficient context to answer the question of how to obtain maximum utility, or deliberately constructed these scenarios such that attempting to obtain maximum utility leads to the Actor spending an infinite amount of time while failing to ever complete the task and actually collect. You stated that until the specification of the number, or the back-and-forth game, was complete, no utility was gained. I responded that the solution is to not play the game, but for the actor to grab as much utility as it could get within a certain finite time limit according to its stopping function and go about its business.

If it does not, then what is the point? How does such an exercise help us to be “less wrong”? The point of constructing beliefs about Rational Actors is to be able to predict how they would behave so we can emulate that behavior. By choosing to explore a subject in this context, you are implicitly making the claim that you believe it does correspond to the real world in some way. Furthermore, your choice to qualify your statement with “as of yet” reinforces that implication. So I ask you to state your claim so we may examine it in full context.

“Insufficient context”—the context is perfectly well defined. How tired do I get considering large numbers? You don’t get tired at all. What is the opportunity cost of considering large numbers? There is no opportunity cost at all. And so on. It’s all very well defined.

“Responded that the solution is to not play the game, but for the actor to grab as much utility as it could get within a certain finite time limit according to its stopping function and go about its business.”—except that’s not a single solution, but multiple solutions, depending on which number you stop at.

“If it does not, then what is the point?”—This is only part 1. I plan to write more on this subject eventually. As an analogy, a reader of a book series can’t go to an author and demand that they release volume 2 right now so that they can understand part 1 in its full context. My objective here is only to convince people of this abstract theoretical point, because I suspect that I’ll need it later (but I don’t know for certain).

So you have deliberately constructed a scenario, then defined “winning” as something forbidden by the scenario. Unhelpful.

You have specified multiple games. I have defined a finite set of solutions for each Actor that can all be stated as “use the stopping function”. If your Actor has no such function, it is not rational, because it can get stuck by problems with the potential to become unbounded. Remember, the Traveling Salesman must eventually sell something or all that route planning is meaningless. This sort of thing is exactly what a stopping function is for, but you seem to have written them out of the hypothetical universe for some (as yet unspecified) reason.

Incorrect. People do it all the time, and it is now easier than ever. Moreover, I object to the comparison of your essay with a book. This context is more like a conversation than a publication. Please get to the point.

You have done nothing but remove criteria for stopping functions from unbounded scenarios. I don’t believe that is convincing anybody of anything. I suspect the statement “not every conceivable game in every conceivable universe allows for a stopping function that does not permit somebody else to do better” would be given a non-negligible probability by most of us already. That statement seems to be what you have been arguing, and seems to coincide with your title.

Friendly Style Note: I (just now) noticed that you have made some major changes to the article. It might be helpful to isolate those changes structurally to make them more visually obvious. Remember, we may not be rereading the full text very often, so a timestamp might be nice too. :)

You’ll be pleased to know that I found a style of indicating edits that I’m happy with. I realised that if I make the word “edited” subscript then it is much less obnoxious, so I’ll be using this technique on future posts.

That sounds like it will be much easier to read. Thank you for following up!

There is no need to re-read the changes to the article. The changes just incorporate things that I’ve also written in the comments to reduce the chance of new commentators coming into the thread with misunderstandings I’ve clarified in the comments.

“So you have deliberately constructed a scenario, then defined “winning” as something forbidden by the scenario. Unhelpful.”—As long as the scenario does not explicitly punish rationality, it is perfectly valid to expect a perfectly rational agent to outperform any other agent.

“Remember, the Traveling Salesman must eventually sell something or all that route planning is meaningless”—I completely agree with this, not stopping is irrational as you gain 0 utility. My point was that you can’t just say, “A perfectly rational agent will choose an action in this set”. You have to specify which action (or actions) an agent could choose whilst being perfectly rational.

“You have done nothing but remove criteria for stopping functions from unbounded scenarios”—And that’s a valid situation to hand off to any so-called “perfectly rational agent”. If it gets beaten, then it isn’t deserving of that name.

I have been operating under my memory of the original premise. I re-read the article to refresh that memory and found the changes. I would simply have been happier if there was an ETA section or something. No big deal, really.

Not so: you have generated infinite options such that there is no selection that can fulfill that expectation. Any agent that tries to do so cannot be perfectly rational since the goal as defined is impossible.

Exactly: if you accept the definition of a perfectly rational agent as a perfect utility maximiser, then there is no perfect utility maximiser, because there is always another agent that obtains more utility, and hence no perfectly rational agent. I don’t think that this is a particularly unusual way of using the term “perfectly rational agent”.

In this context, I do not accept that definition: you cannot maximize an unbounded function. A Perfectly Rational Agent would know that.

And it would still get beaten by a more rational agent, that would be beaten by a still more rational agent and so on until infinity. There’s a non-terminating set of increasingly rational agents, but no final “most rational” agent.

If the PRA isn’t trying to “maximize” an unbounded function, it can’t very well get “beaten” by another agent who chooses x+n, because they didn’t have the same goal. I reject, therefore, that an agent that obeys its stopping function in an unbounded scenario may be called any more or less “rational”, based on that reason only, than any other agent that does the same, regardless of the utility it may not have collected. By removing all constraints, you have made comparing results meaningless.

So an agent that chooses only 1 utility could still be a perfectly rational agent in your books?

Might be. Maybe that agent’s utility function is actually bounded at 1 (it’s not trying to maximize, after all). Perhaps it wants 100 utility, but already has firm plans to get the other 99. Maybe it chose a value at random from the range of all positive real numbers (distributed such that the probability of choosing X grows proportional to X) and pre-committed to the results, thus guaranteeing a stopping condition with unbounded expected return. Since it was missing out on unbounded utility in any case, getting literally any is better than none, but the difference between x and y is not really interesting.

(Humorously) Maybe it just has better things to do than measuring its *ahem* stopping function against the other agents.
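A randomised pre-commitment of this kind really can halt with probability 1 while having unbounded expected return. A sketch using a distribution over the positive integers with P(n) proportional to 1/n² (the distribution is my choice for illustration; the commenter’s proposed density over the reals is not normalisable as stated):

```python
import math
import random

def sample_stop():
    """Draw n with P(n) = (6 / pi^2) / n^2 by walking the CDF.

    Each draw halts with probability 1, yet the expectation
    sum over n of n * P(n) is a multiple of the harmonic series,
    which diverges: the stopping rule has unbounded expected return.
    """
    u = random.random()
    c = 6 / math.pi ** 2  # normalising constant, since sum 1/n^2 = pi^2/6
    n, cdf = 1, 0.0
    while True:
        cdf += c / n ** 2
        if u < cdf:
            return n
        n += 1

random.seed(0)
print(all(sample_stop() >= 1 for _ in range(1000)))  # True
```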

You’re doing infinity wrong. Always specify it as a limit: “as X approaches zero, Y grows to infinity”. In your case, X is the cost of calculating a bigger number. The “more rational” agent simply is the one that can identify and communicate the bigger number in time to play the game. Taken that way, it doesn’t disprove perfect rationality, just perfect calculation.

Another way to look at it is “always include costs”. Even theoretical perfect rationality is about tradeoffs, not about the results of an impossible calculation.

Could you clarify why you think that I am doing infinity wrong? I’m not actually using infinity, just stating that you aren’t allowed to say infinity, but can only choose a finite number.

As stated in the article, I’m considering the theoretical case where either a) there are no costs to identifying and communicating arbitrarily large numbers (as stated, we are considering celestial beings, not real physical beings) or b) we are considering real beings, but any costs related to the effort of identifying a larger number are offset by a magical device.

I already admitted that the real world is not like this due to aspects such as calculation costs. I find it odd to call a purposely theoretical model wrong because of real-world constraints. If someone puts out a theoretical situation as modelling the real world, then that might be a valid critique, but when someone is specifically imagining a world that behaves differently from ours there is no requirement for it to be “realistic”.

All I am claiming is that within at least one theoretical world (which I’ve provided) perfect rationality does not exist. Whether or not this has any bearing on the real world was not discussed and is left to the reader to speculate on.

You’re doing it wrong by trying to use a limit (good) without specifying the function (making it meaningless).

This is the hidden infinity in your example. There can’t be zero cost. When you evaluate the marginal value of a further calculation, you take expected benefit divided by expected cost. oops, infinity!

Alternately—you hypothesize that any agent would actually stop calculating and pick a number. Why not calculate further? If it’s costless, keep going. I’m not sure in your scenario which infinity wins: infinitely small cost of calculation or infinite time to calculate. Either way, it’s not about whether perfect rationality exists, it’s about which infinity you choose to break first.

If you keep going forever then you never realise any gains, even if it is costless, so that isn’t the rational solution.

“This is the hidden infinity in your example. There can’t be zero cost. When you evaluate the marginal value of a further calculation, you take expected benefit divided by expected cost. oops, infinity!”—so let’s suppose I give an agent a once-off opportunity to gain 100 utility for 0 cost. The agent tries to evaluate if it should take this opportunity and fails because there is no cost and it ends up with an infinity. I would argue that such an agent is very far away from rational if it can’t handle this simple situation.

“You’re doing it wrong by trying to use a limit (good) without specifying the function (making it meaningless)”—Sorry, it still isn’t clear what you are getting at here. I’m not trying to use a limit. You are the one who is insisting that I need to use a limit to evaluate this situation. Have you considered that there might actually be other ways of evaluating the situation? The situation is well specified. State any number and receive that much utility. If you want a utility function, u(x)=x is it. If you’re looking for another kind of function, well what kind of function are you looking for then? Simply stating that I haven’t specified a function isn’t very clear unless you answer this question.

If it takes time, that’s a cost. In your scenario, an agent can keep going forever instantly, whatever that means. That’s the nonsense you need to resolve to have a coherent problem. Add in a time limit and calculation rate, and you’re back to normal rationality. As the time limit or rate approach infinity, so does the utility.

“Add in a time limit and calculation rate, and you’re back to normal rationality”—I am intentionally modelling a theoretical construct, not reality. Claims that my situation isn’t realistic aren’t valid, as I have never claimed that this theoretical situation does correspond to reality. I have purposefully left this question open.

Ai-yah. That’s fine, but please then be sure to caveat your conclusion with “in this non-world...” rather than generalizing about nonexistence of something.

The perfectly rational agent considers all possible different world-states, determines the utility of each of them, and states “X”, where X is the utility of the perfect world.

For the number “X+epsilon” to have been a legal response, the agent would have had to be mistaken about their utility function or about what the possible worlds were.

Therefore X is the largest real number.

Note that this is a constructive proof, and any attempt at counterexample should address the specific X discovered by a perfectly rational omniscient abstract agent with a genie. If the general solution is true, it will be trivially true for one number.

That’s not how maths works.