It is hypothesized that digital minds could, in theory, be super-beneficiaries. This could happen because digital hardware runs millions of times faster than biological neurons, among several other possible optimizations.
Super-beneficiaries pose many ethical challenges:
How to evaluate whether an advanced AI is sentient, and to what degree?
How to share resources between biological and digital minds?
Should digital minds be given rights? Which rights would make sense for such beings?
How to prevent arbitrary discrimination?
How to integrate digital minds with humans in society in a way that benefits both?
From a utilitarian perspective, these super-beneficiaries would have a strong claim over resources. There may still be moral reasons not to allocate the entirety of resources to super-beneficiaries, such as to “hedge against moral error, to appropriately reflect moral pluralism, to account for game-theoretic considerations, or simply as a matter of realpolitik”.
Deontological views are varied and may assign no special weight to enormous welfare. Because such views are usually insensitive to scale, they may also be indifferent to how large-scale resources, such as other galaxies, are used. Even so, some digital minds may have superhuman moral status, and corresponding claims over resources, depending on the deontological criteria chosen for moral status.