Yes, by “locally outpace” I simply meant outpace at some non-global scale; there will of course be some tighter upper bound on that scale when it comes to real-world agents.
What I’m saying is that there is no upper bound for real-world agents; the scale of “locally” in this weird sense can be measured in eons and galaxies.
Yes, there’s no upper bound on what counts as “local” (except global), but there is an upper bound on the scale at which agents’ predictions can outpace the territory (e.g. humans can’t predict everything in the galaxy).
I meant an upper bound in the second sense.
The relevance of extracting/formulating something “local” is that prediction by smaller maps within it remains possible, ignoring the “global” solar flares and such. So a situation could be set up in which a smaller agent predicts everything eons in the future at galaxy scale. Perhaps a superintelligence predicts the human process of reflection; that is, it’s capable of perfectly answering specific queries before the specific referenced event would take place in actuality, while the computer is used to run many independent possibilities in parallel. The superintelligence couldn’t enumerate them all in advance, but it could quickly chase down and overtake any given one of them.
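To make the chase-and-overtake point concrete, here is a minimal toy sketch (my own illustration, not anything from the discussion; the names and dynamics are made up): a predictor that can’t precompute every possible run in advance, but for any *specific* queried run can reach the answer before the slower process does, simply because it runs the same dynamics faster.

```python
# Toy illustration of "chase and overtake" (hypothetical example, not from the source):
# the predictor runs the same deterministic dynamics as the slow process, just faster,
# so it can answer a query about any one specific run before that run actually finishes.

def slow_process(seed: int, steps: int) -> int:
    """The 'territory': one state update per (slow) tick, e.g. one per eon."""
    state = seed
    for _ in range(steps):
        # Arbitrary deterministic update rule (a 64-bit linear congruential step).
        state = (state * 6364136223846793005 + 1442695040888963407) % 2**64
    return state

def fast_predictor(seed: int, target_step: int) -> int:
    """The 'map': identical dynamics, run at much higher speed (here, instantly).
    It cannot enumerate all seeds in advance, but it can overtake any given one."""
    return slow_process(seed, target_step)

# Query: "what will possibility #3 look like at step 1,000,000?"
# The predictor answers now; the slow process only gets there a million ticks later.
print(fast_predictor(seed=3, target_step=1_000_000))
```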
Even a human would be capable of answering such questions if nothing at all is happening within this galaxy-scale computer, and the human is paused for eons after making the prediction that nothing will be happening. (I don’t see what further “first sense” of locality or upper bound, distinct from this, could be relevant.)
I intended ‘local’ (i.e. not global) to be a necessary but not sufficient condition for predictions made by smaller maps within it to be possible (because global predictions run into problems of embedded agency).
I’m mostly agnostic about what the other necessary conditions are and what the sufficient conditions are.