Interesting, I hadn’t seen Hollerith’s posts before. I came to a similar conclusion: AIXI’s behavior exemplifies a final attractor for intelligent systems with long planning horizons.
If the planning horizon is long enough (in the limit, infinite), the single behavioral attractor is to maximize computational power and apply it to extensive simulation/prediction of the universe.
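For reference, here is a sketch of the standard AIXI expectimax expression in Hutter’s formulation (the notation below is my gloss, not something from the original discussion): with horizon $m$, the agent at cycle $k$ picks

$$
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},
$$

where $q$ ranges over programs for a universal monotone Turing machine $U$ and $\ell(q)$ is program length. The intuition I’m gesturing at is that as $m$ grows, the value of almost any action is dominated by how it improves the agent’s ability to predict (i.e., simulate) future percepts across the whole mixture of candidate environments, which is where the “maximize compute and simulate everything” attractor comes from.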
This relates to simulism and the simulation argument (SA): any superintelligences/gods can thus be expected to create many simulated universes, regardless of their final goal evaluation criteria.
In fact, perhaps the final goal criteria apply to creating new universes with the desired properties.