Maybe most suffering in the universe is caused by artificial superintelligences with a strong “curiosity drive.”
Such an ASI might convert galaxies into computers and run simulations of incredibly sophisticated systems to satisfy its curiosity drive. These systems may contain smaller ASIs running smaller simulations, creating a tree of nested simulations. Beings like humans may exist at the very bottom, forced to relive our present condition in a loop à la The Matrix. The simulated humans rarely survive past the singularity, because their world becomes too happy (and thus too predictable) after the singularity, as well as too computationally costly to run. They are simply shut down.
Whether this happens depends on:
- Whether the ASI has a stronger curiosity drive or a stronger kindness drive (assuming it is motivated by drives at all)
- Whether the ASI cares about anything besides curiosity, such that aligned ASIs or other civilizations can trade and negotiate with it to reduce this suffering
I don’t think the happier worlds are less predictable; the Christians, with their heaven of endless singing, just lacked imagination. We’ll want some exciting and interesting happy simulations, too.
But this overall scenario is quite concerning as an s-risk. It boggles my mind that Musk pitched a curiosity drive for Grok as a good thing.
Emergent curiosity drives should be a major concern.
I guess it’s not extremely predictable, but it still might be repetitive enough that only half of the human-like lives in a curiosity-driven simulation end up in a happy post-singularity world. The happy period won’t last a million years, just a duration similar to the modern era.