I also separately don’t buy that it’s riskier to build AIs that are sentient
Interesting! Aside from the implications for human agency/power, this seems worse because of the risk of AI suffering—if we build sentient AIs we need to be way more careful about how we treat/use them.
Exactly. Bringing a new kind of moral patient into existence is a moral hazard, because once they exist we will have obligations toward them, e.g. sharing limited resources (like land) with them and giving them part of our political power via voting rights. That’s analogous to Parfit’s Mere Addition Paradox, which leads to the repugnant conclusion; here the analogous outcome is human marginalization.
(How could “land” possibly be a limited resource, especially in the context of future AIs? The world doesn’t exist solely on the immutable surface of Earth...)
I mean, if you interpret “land” in a Georgist sense, as the sum of all natural resources of the reachable universe, then yes, it’s finite. And the fights for carving up that pie can start long before our grabby-alien hands have seized all of it. (The property rights to the Andromeda Galaxy can be up for sale long before our Von Neumann probes reach it.)
The salient referent is compute, sure; my point is that it’s startling to see what should in this context be compute within the future lightcone being (very indirectly) called “land”. (I do understand that this was meant as an example clarifying the meaning of “limited resources”, and so it makes perfect sense when decontextualized. It’s just not an example that fits particularly well in this context.)
(I’m guessing the physical world is unlikely to matter in the long run other than as a substrate for implementing compute. For that reason, the importance of understanding the physical world for normative or philosophical purposes seems limited. What matters more is how ethics and decision theory work for abstract computations, the meaningful content of the contingent physical computronium.)
A population of AI agents could marginalize humans significantly before those agents are intelligent enough to easily (and quickly!) create more Earths.