This was written in response to a conversation I had with Vivek Hebbar about some strange implications of UDASSA-like views. In particular:
Is it better to spend your resources simulating a happy life or writing down programs which would simulate a bunch of happy lives? Naively the second feels worthless, but there seemingly are some unintuitive reasons why clever approaches for doing the latter could be better.
Alien civilizations might race to the bottom by spending resources making their civilization easier to point at (and thus higher measure in the default UDASSA perspective).
but there seemingly are some unintuitive reasons why clever approaches for doing the latter could be better
Because there might be some other programs with a lot of computational resources which scan through simple universes looking for programs to run?
Alien civilizations might race to the bottom by spending resources making their civilization easier to point at
This one doesn’t feel so unintuitive to me, since “easy to point at” is somewhat like “has a lot of copies”, so it’s similar to the biological drive to reproduce. (Though I assume you’re alluding to another motivation for becoming easy to point at?)
Because there might be some other programs with a lot of computational resources which scan through simple universes looking for programs to run?
More like: because there is some chance that the actual laws of physics in our universe sometimes execute data as code (similar to how memory-unsafe programs often do this). (And this is either epiphenomenal or it just hasn’t happened yet.) While the chance is extremely small, it could be high enough (equivalently, universes with this property could have high enough measure), and the upside could be so much higher, that betting on this beats other options.
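To make the memory-unsafety analogy concrete, here is a toy sketch of my own (not from the conversation): a tiny interpreter whose code and data share one flat array with no bounds separating them, so a jump into the data region causes bytes that were written as data to run as code.

```python
# Toy illustration (my own, not from the post): a memory-unsafe
# interpreter whose program and data live in one flat array.
# A jump past the code region makes the machine execute bytes
# that were written as data -- the analogy for physics that
# "executes data as code".

def run(memory, max_steps=100):
    """Tiny VM: opcode 1 = increment accumulator, 2 x = jump to
    address x, 0 = halt. No check separating code from data."""
    acc, pc = 0, 0
    for _ in range(max_steps):
        op = memory[pc]
        if op == 0:        # halt
            return acc
        elif op == 1:      # inc
            acc += 1
            pc += 1
        elif op == 2:      # jmp addr
            pc = memory[pc + 1]
        else:
            raise ValueError("bad opcode")
    return acc

# "Code" region: inc, inc, then jump to address 6 -- which lies in
# the "data" region. The data bytes [1, 1, 0] then execute as code.
program = [1, 1, 2, 6]
data = [99, 42, 1, 1, 0]      # bytes at addresses 6..8 decode as inc, inc, halt
print(run(program + data))    # 2 increments from code + 2 from data = 4
```

The point of the analogy: if there is any measure on universes whose physics has this property, carefully chosen "data" (e.g., written-down programs) could end up being run, even though in our apparent physics it sits inert.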
Alien civilizations might race to the bottom by spending resources making their civilization easier to point at (and thus higher measure in the default UDASSA perspective).
This may also be a reason for AIs to simplify their values (after they’ve done everything possible to simplify everything else).
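The "easier to point at means higher measure" dynamic throughout can be sketched numerically. Under a universal-prior-style weighting (my illustration, not something spelled out in the conversation), a description that is L bits long gets weight roughly 2^-L, so every bit shaved off the shortest pointer to a civilization (or a value system) roughly doubles its measure:

```python
# Toy sketch (my own illustration) of the UDASSA-style intuition:
# under a simplicity prior, a description of length L bits gets
# unnormalized weight ~ 2^-L, so shaving bits off the shortest
# pointer to yourself multiplies your measure exponentially.

def measure(description_length_bits):
    """Unnormalized simplicity-prior weight: 2^-L."""
    return 2.0 ** -description_length_bits

# A civilization pointable at with 100 bits vs. one needing 110 bits:
print(measure(100) / measure(110))  # 1024.0 -- 10 fewer bits, ~1000x more measure
```

This is why the race-to-the-bottom pressure is steep: each saved bit of description length is a factor-of-two gain, which is also the pressure pushing toward simplified values once everything else is already as simple as possible.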