The large world issues seem kind of confused.
Suppose an ideal agent is using Solomonoff induction to predict its inputs. The models that locate the agent very far away, at enormous spatial distances, have to encode that distance somehow in order to predict the input you are getting. That makes every one of those models very large, and all of them combined make an incredibly tiny contribution to the algorithmic probability.
If instead you do a confused version of Solomonoff induction, in which you seek an ‘explanation’ rather than a proper model (anything that contains the agent somewhere inside it), then the whole notion breaks down and you get nothing useful out: you just get an iterator over everything possible. (Or, if you skip that low-level fundamental problem, you run into some form of big-universe issue, where you hit ‘why bother, if there’s a copy of me somewhere far away’ and ‘what is the meaning of measurement, if some version of me measures something wrong’. But ultimately, if you had started from scratch this way, you would never even get that far, as you’d never be able to form any remotely useful world model.)
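The claim that distant-agent models pay for their coordinates can be sketched numerically. Using a prefix-free integer code (Elias gamma here, chosen purely for illustration), pinpointing the agent at distance d costs about 2·log2(d) + 1 extra bits on top of the universe model, so the combined 2^(−length) weight of all far-away placements is tiny:

```python
import math

def elias_gamma_len(n: int) -> int:
    # Length in bits of the Elias gamma code for n >= 1:
    # a prefix-free code costing about 2*log2(n) + 1 bits.
    assert n >= 1
    return 2 * int(math.floor(math.log2(n))) + 1

def mass(start: int, stop: int) -> float:
    # Combined prior weight 2^-length of all models that place the
    # agent at a coordinate d in [start, stop): each model pays the
    # bits needed to spell out its coordinate.
    return sum(2.0 ** -elias_gamma_len(d) for d in range(start, stop))

# Nearby positions carry almost all of the weight; the tail past
# distance D shrinks roughly like 1/D.
near = mass(1, 1000)       # coordinates below 1000
far = mass(1000, 100000)   # coordinates from 1000 up
```

Here `near` comes out above 0.99 while `far` is below 0.01: the weights form a convergent series, which is the sense in which all the distant placements "combined have incredibly tiny contribution."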
I don’t know what you mean by ‘large world issues’.
Why is the agent’s distance from you relevant to predicting its inputs? Why does a large distance imply huge complexity?
A model for your observations consists (informally) of a model for the universe and then coordinates within the universe which pinpoint your observations, at least in the semantics of Solomonoff induction. So in an infinite universe, most observations must be very complicated, since the coordinates must already be quite complicated. Solomonoff induction naturally defines a roughly-uniform measure over observers in each possible universe, which very slightly discounts observers as they get farther away from distinguished landmarks. The slight discounting makes large universes unproblematic.
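The decomposition described above can be written out informally (the symbols here are my gloss, not notation from the comment): a hypothesis for the observations factors into a universe program U and a coordinate c locating the observer within it, so

```latex
\Pr(\text{obs}) \;\gtrsim\; 2^{-K(U)} \cdot 2^{-K(c \mid U)},
\qquad
K(c \mid U) \;\approx\; \log_2 \lVert c \rVert + O(\log\log \lVert c \rVert).
```

The resulting measure over observers in a fixed U falls off only like 1/‖c‖ (up to log factors) with distance from distinguished landmarks: roughly uniform, with the slight discounting that keeps large universes unproblematic.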
I wrote about these things at some point, here, though that was when I was just getting into them, and it now looks silly even to current me. But it’s still the only framework I know for reasoning about big universes, splitting brains, and the Born probabilities.
I get by with none...
Are you sure?
Consequentialist decision making on “small” mathematical structures seems relatively less perplexing (though far from entirely clear), but I’m very much confused about what happens when there are too “many” instances of a decision’s structure, or in the presence of observations, and I can’t point to any specific “framework” that explains what’s going on (apart from the general hunch that understanding math better clarifies these things, as it has so far).
If X has a significant probability of existing, but you don’t know at all how to reason about X, how confident can you be that your inability to reason about X isn’t doing tremendous harm? (In this case, X = big universes, splitting brains, etc.)