My initial inclination is to introduce $X_n$ as the space of events on turn $n$, define $X_{a:b} := \prod_{i=a}^{b} X_i$, and then express it as $\sum_{\sigma \in X_{k+2:k+n}} \mathbb{P}_n(x_{k+1}, \sigma \mid x_0 \dots x_k)$.
The notation for the sum operator is unclear. I’d advise writing the index range as $i = k+2, \dots, k+n$ and using an $i$ subscript inside the sum so it’s clearer what is being substituted where.
Wasn’t there a fairness/continuity condition in the original ADT paper that if there were two “agents” that converged to always taking the same action, then the embedder would assign them the same value? (More specifically, if $\mathbb{E}_t(|A_t-B_t|)<\delta$, then $\mathbb{E}_t(|E_t(A_t)-E_t(B_t)|)<\epsilon$.) This would mean that it’d be impossible to have $\mathbb{E}_t(E_t(\mathrm{ADT}_{t,\epsilon}))$ be low while $\mathbb{E}_t(E_t(\mathrm{straight}_t))$ is high, so the argument still goes through.
Although, after this whole line of discussion, I’m realizing that there are enough substantial differences between the original formulation of ADT and the thing I wrote up that I should probably clean up this post a bit and clarify more about what’s different in the two formulations. Thanks for that.
In the ADT paper, the asymptotic dominance argument is about the limit of the agent’s action as $\epsilon$ goes to 0. This limit is not necessarily computable, so the embedder can’t contain the agent, since it doesn’t know $\epsilon$. So the evil problem doesn’t work.
Agreed that the evil problem doesn’t work for the original ADT paper. In the original ADT paper, the agents are allowed to output distributions over moves. I didn’t like this because it implicitly assumes that it’s possible for the agent to perfectly randomize, and I think randomization is better modeled by a (deterministic) action that consults an environmental random-number generator, which may be correlated with other things.
What I meant was that, in the version of argmax that I set up, if A is the set of the two constant policies “take blank box” and “take shiny box”, then for the embedder $F$ where the opponent runs argmax to select which box to fill, the argmax agent will converge to deterministically randomizing between the two policies: the logical inductor assigns very similar expected utility to both options, so the inductor can’t predict which action will be chosen. This occurs because the inductor outputting more of “take the blank box” will make $F(\mathrm{shiny})$ converge to a higher expected value (so argmax will learn to copy that), and the inductor outputting more of “take the shiny box” will make $F(\mathrm{blank})$ converge to a higher expected value (so argmax will learn to copy that).
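This push-pull dynamic can be sketched numerically. The following is a toy fictitious-play-style model, not the actual logical-inductor machinery; the payoff structure and update rule are my illustrative assumptions. If the predictor assigns probability $p$ to “take the blank box”, whichever action currently looks better drags $p$ back toward $1/2$.

```python
# Toy model (illustrative assumptions, not the logical inductor):
# the opponent fills whichever box the agent is predicted NOT to take,
# so if p = P(agent takes blank), the expected utilities are:
def expected_utility(p):
    eu_blank = 1 - p  # blank is filled when the agent is predicted to take shiny
    eu_shiny = p      # shiny is filled when the agent is predicted to take blank
    return eu_blank, eu_shiny

# Fictitious-play-style damped update: the prediction p tracks the
# empirical frequency of whichever action currently looks better.
p = 0.9
for t in range(1, 2001):
    eu_blank, eu_shiny = expected_utility(p)
    target = 1.0 if eu_blank > eu_shiny else 0.0
    p += (target - p) / (t + 1)
# p ends up very close to 1/2: neither action can be predicted.
```

Under these assumptions the prediction oscillates around $1/2$ with ever-smaller deviations, matching the “deterministically randomizing” behavior described above.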
The optimality proof might be valid. I didn’t understand which specific step you thought was wrong.
So, the original statement in the paper was:
It must then be the case that $\lim_{t\to\infty}\mathbb{E}_t[F_t(A_t)-F_t(B_t)]>\eta$ for every $A\in[A]$, $B\notin[A]$. Let $A$ be the first element of $[A]$ in $\mathcal{A}$. Since every class will be separated by at least $\eta$ in the limit, $\mathrm{sadt}_\eta(F,\mathcal{A})$ will eventually be a distribution over just $[A]$. And since $A\sim A'$ for every $A, A'\in[A]$, by the definition of soft_argmax it must be the case that $\lim_{t\to\infty}[|\mathrm{sadt}_\eta(F,\mathcal{A})_t-A_t|]=0$.
The issue with this is the last sentence. It’s basically saying “since the two actions $A$ and $A'$ get equal expected utility in the limit, the total variation distance between a distribution over the two actions and one of the actions limits to zero”, which is false.
And it is specifically disproved by the second counterexample, where there are two actions that both result in 1 utility, so they’re both in the same equivalence class, but a probabilistic mixture between them (which $\mathrm{sadt}_\eta$ converges to playing, for all $\eta$) gets less than 1 utility.
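For concreteness, here is a toy version of that failure mode (my illustrative construction, not the exact counterexample from the post): utility is 1 exactly when the action matches the environment’s prediction of the agent, so either deterministic action gets 1, but a 50/50 mixture matches only about half the time.

```python
import random

# Toy environment (illustrative assumption): utility is 1 exactly
# when the action matches the environment's prediction of the agent.
def utility(action, prediction):
    return 1.0 if action == prediction else 0.0

random.seed(0)
N = 10_000

# A deterministic policy is predicted correctly and always gets 1.
det = sum(utility("A", "A") for _ in range(N)) / N

# A 50/50 mixture matches any fixed prediction only half the time,
# so its average utility is about 0.5 < 1.
mix = sum(utility(random.choice("AB"), "A") for _ in range(N)) / N
```

So both pure actions sit in the same equivalence class with utility 1, while the mixture’s utility is strictly lower.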
Consider the following embedder. According to this embedder, you will play chicken against ADT-epsilon who knows who you are. When ADT-epsilon considers this embedder, it will always pass the reality filter, since in fact ADT-epsilon is playing against ADT-epsilon. Furthermore, this embedder gives NeverSwerveBot a high utility. So ADT-epsilon expects a high utility from this embedder, through NeverSwerveBot, and it never swerves.
You’ll have to be more specific about “who knows who you are”. If it unpacks as “the opponent only uses the embedder where it is up against [whatever policy you plugged in]”, then NeverSwerveBot will have a high utility, but it will get knocked down by the reality filter: if you converge to never swerving, $\mathbb{E}_t(U_t)$ will converge to 0, and the inductor will learn that $\mathrm{straight}=\mathrm{argmax}\,F_t=\mathrm{ADT}_t$, so it will converge to assigning equal expected value to $F(\mathrm{straight})$ and $F(\mathrm{ADT})$, and $\mathbb{E}(F(\mathrm{straight}))$ converges to 1.
If it unpacks as “opponent is ADT-epsilon”, and you converge to never swerving, then argmaxing will start duplicating the swerve strategy instead of going straight. In both cases, the argument fails.
I’ve got an improved reality filter that blocks a certain class of environments that lead conjecture 1 to fail, although it isn’t enough to deal with the provided chicken example or to yield a proof of conjecture 1. (The $t$ subscripts will be suppressed for clarity.)
Instead of the reality filter for $E$ being $|\mathbb{E}(E(\mathrm{ADT}))-\mathbb{E}(U)|<\epsilon$,
it is now
This doesn’t just check whether reality is recovered on average; it also checks whether all the “plausible conditionals” line up as well. Some of the conditionals may not be well-formed, as they may involve conditioning on low-or-zero-probability events, but those are then multiplied by a very small number, so no harm is done.
This has the nice property that for all “plausibly chosen embedders” $F$ that have a probability sufficiently far away from 0, all embedders $E$ and $E'$ that pass this reality filter have the property that $\mathbb{E}(E(\mathrm{ADT})\mid\mathrm{ADT}=\mathrm{am}\,F)\simeq_t\mathbb{E}(E'(\mathrm{ADT})\mid\mathrm{ADT}=\mathrm{am}\,F)$.
So all embedders that pass the reality filter will agree on the expected utility of selecting a particular embedder that isn’t very unlikely to be selected.
I figured out what feels slightly off about this solution. For events like “I have a long memory and accidentally dropped a magnet on it”, it intuitively feels like describing your spot in the environment and the rules of your environment has much lower K-complexity than finding a Turing machine/environment that starts by giving you the exact (long) scrambled sequence of memories that you have, and then resumes normal operation.
Although this also feels like something nearby is actually desired behavior. If you rewrite the tape to describe some other simple environment, you would intuitively expect AIXI to act as if it’s in the simple environment for a brief time before gaining enough information to conclude that things have changed and to rederive the new rules of where it is.
Not quite. If taking bet 9 is a prerequisite for taking bet 10, then AIXI won’t take bet 9; but if bet 10 gets offered whether or not bet 9 is accepted, then AIXI will reason: “ah, future me will take the bet, and wind up with $10+\epsilon$ in the heads world and $-20+2\epsilon$ in the tails world. This is just a given. I’ll take this +15/−15 bet, as it has net positive expected value, and the loss in the heads world is more than counterbalanced by the reduction in the magnitude of the loss in the tails world.”
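Spelling out the arithmetic (under my reading that bet 9 pays $-15$ on heads and $+15$ on tails):

```latex
\underbrace{(10+\epsilon)-15}_{\text{heads}} = -5+\epsilon,
\qquad
\underbrace{(-20+2\epsilon)+15}_{\text{tails}} = -5+2\epsilon
```

So once bet 10 is treated as a given, adding bet 9 moves both worlds to roughly $-5$ for small $\epsilon$, trading a loss in the heads world for a smaller-magnitude loss in the tails world, exactly as described.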
Something else feels slightly off, but I can’t quite pinpoint it at this point. Still, I guess this solves my question as originally stated, so I’ll PM you for payout. Well done!
(btw, you can highlight a string of text and hit ctrl+4 to turn it into math-mode)
Yup, I meant counterfactual mugging. Fixed.
I think I remember the original ADT paper showing up on the agent foundations forum before a writeup on logical EDT with exploration, and my impression of which came first was affected by that. Also, the “this is detailed in this post” was referring to logical EDT with exploration. I’ll edit for clarity.
I actually hadn’t read that post or seen the idea anywhere before writing this up. It’s a pretty natural resolution, so I’d be unsurprised if it was independently discovered before. Sorry about being unable to assist.
The extra penalty to describe where you are in the universe corresponds to requiring sense data to pin down *which* star you are near, out of the many stars, even if you know the laws of physics, so it seems to recover desired behavior.
Giles Edkins coded up a thing which lets you plug in numbers for a 2-player, 2-move game payoff matrix and it automatically displays possible outcomes in utility-space. It may be found here. The equilibrium points and strategy lines were added later in MS Paint.
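The underlying computation can be sketched as follows (hypothetical code, not Giles Edkins’s actual implementation): sweep over independent mixed strategies for both players and collect the resulting points in utility-space.

```python
# Sketch of plotting a 2-player, 2-move game's outcomes in utility-space.
# Payoff matrices u1[i][j], u2[i][j] for row-move i, column-move j;
# the numbers below are an illustrative game of chicken.
u1 = [[0, -1], [1, -10]]
u2 = [[0, 1], [-1, -10]]

def outcome(p, q):
    """Expected utilities when row plays move 0 w.p. p and column w.p. q."""
    weights = [(p * q, 0, 0), (p * (1 - q), 0, 1),
               ((1 - p) * q, 1, 0), ((1 - p) * (1 - q), 1, 1)]
    e1 = sum(w * u1[i][j] for w, i, j in weights)
    e2 = sum(w * u2[i][j] for w, i, j in weights)
    return e1, e2

# Sweep a grid of independent mixed strategies; each point is an
# achievable (player 1 utility, player 2 utility) pair.
points = [outcome(p / 20, q / 20) for p in range(21) for q in range(21)]
```

Scatter-plotting `points` gives the region of outcomes achievable by independent mixing; correlated play fills out the convex hull of the four pure-outcome corners.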
The basic reason for the dependency relation to care about oracle queries from strategies is that, when you have several players all calling the oracle on each other, there’s no good way to swap out the oracle calls for the computations. The trick you describe does indeed work, and is a reason not to call any more Turing machines than you need to, but there are several things it doesn’t solve. For instance, if you are player 1, and your strategy depends on oracle calls to players 2 and 3, and the same applies to the other two players, you may be able to swap out an oracle call to player 2 with player 2’s actual code (which calls players 1 and 3), but you can’t unpack any more oracle calls into their respective computations without hitting an infinite regress.
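A toy illustration of the regress (the representation is my own; a strategy body is just the list of players it calls the oracle on): inlining one call exposes fresh oracle calls, including one back to the original player, at every depth of expansion.

```python
# Each player's strategy queries the oracle on the other two players.
calls = {1: [2, 3], 2: [1, 3], 3: [1, 2]}

def inline_once(body):
    """Replace each oracle call to player p with p's own body."""
    return [calls[p] for p in body]

expr = calls[1]            # player 1: oracle calls to players 2 and 3
expr = inline_once(expr)   # [[1, 3], [1, 2]] -- calls remain
flat = [p for body in expr for p in body]
# A call back to player 1 has reappeared; expansion never bottoms out.
```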
I’m not sure what you mean by fixing the utility function occurring before fixing the strategy. In the problem setup of a game, you specify a utility function machine and a strategy machine for everyone, and there isn’t any sort of time or order on this (there’s just a set of pairs of probabilistic oracle machines); you can freely consider things such as “what happens when we change some player’s strategies/utility function machines”.
Ah, the formal statement was something like “if the policy $A$ isn’t the argmax policy, the successor policy $B$ must be in the policy space of the future argmax, and the action selected by policy $A$ is computed so that the relevant equality holds”.
Yeah, I am assuming fast feedback: that it is resolved on day $n$.
What I meant was that the computation isn’t extremely long in the sense of description length, not in the sense of computation time. Also, we aren’t doing policy search over the set of all Turing machines; we’re doing policy search over some smaller set of policies that can be guaranteed to halt in a reasonable time (and more can be added as time goes on).
Also, I’m less confident in conditional future-trust for all conditionals than I used to be; I’ll try to crystallize where I think it goes wrong.