Certainly I’m excited about promoting “regular” human flourishing, though it seems overly limited to focus only on that.
I’m not sure if by “regular” you mean only biological, but the simplest argument I find persuasive against only ever having biological humans is a resource-utilization one: biological humans take up a lot of space and a lot of resources, and you can get the same thing much more cheaply by bringing into existence lots of simulated humans instead. (I agree that doesn’t imply we should kill existing humans and replace them with simulations, though, unless they consent to that.)
And even if you included simulated humans in “regular” humans, I also value diversity of experience: a universe full of very different sorts of sentient/conscious lifeforms having satisfied/fulfilling/flourishing experiences seems better than just “regular” humans.
I also separately don’t buy that it’s riskier to build AIs that are sentient—in fact, I think it’s probably better to build AIs that are moral patients than AIs that are not moral patients.
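(A rough back-of-envelope for the resource-utilization point above, offered only as an illustrative sketch: the ~20 W brain-power and ~10 kW per-capita energy figures are standard ballpark numbers rather than anything from this thread, and the assumption that a simulated human could eventually run at roughly brain-like energy efficiency is speculative.)

\[
2000~\tfrac{\text{kcal}}{\text{day}} \approx \frac{2000 \times 4184~\text{J}}{86{,}400~\text{s}} \approx 97~\text{W} \quad \text{(food energy of one biological human)}
\]
\[
\frac{\sim 10^{4}~\text{W (total per-capita energy footprint, rich country)}}{\sim 20~\text{W (human brain; assumed attainable by a simulation)}} \approx 500\times
\]

(On those assumptions, the marginal energy cost per simulated person could be two to three orders of magnitude below that of a biological person.)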
IMO, it seems bad to intentionally try to build AIs which are moral patients until after we’ve resolved acute risks and we’re deciding what to do with the future longer term. (E.g., don’t try to build moral patient AIs until we’re at the point of sending out space probes, or deciding what to do with space probes.) Of course, this doesn’t mean we’ll avoid building AIs which are significant moral patients in practice, because our control is very weak and commercial/power incentives will likely dominate.
I think trying to make AIs be moral patients earlier pretty clearly increases AI takeover risk and seems morally bad. (Views focused on non-person-affecting upside get dominated by the long run future, so these views don’t care about making moral patient AIs which have good lives in the short run. I think the most plausible views which care about shorter run patienthood mostly just want to avoid downside so they’d prefer no patienthood at all for now.)
The only upside is that it might increase value conditional on AI takeover. But I think “are the AIs morally valuable themselves” matters much less than the preferences of these AIs, from the perspective of longer-run value conditional on AI takeover. So I think it’s better to focus on AIs which we’d expect would have better preferences conditional on takeover, and making AIs moral patients isn’t a particularly good way to achieve this. Additionally, I don’t think we should put much weight on “try to shape the preferences of AIs which were so misaligned they took over”, because conditional on takeover we must have had very little control over their preferences in practice.
I think trying to make AIs be moral patients earlier pretty clearly increases AI takeover risk
How so? Seems basically orthogonal to me? And to the extent that it does matter for takeover risk, I’d expect the sorts of interventions that make it more likely that AIs are moral patients to also make it more likely that they’re aligned.
I think the most plausible views which care about shorter run patienthood mostly just want to avoid downside so they’d prefer no patienthood at all for now.
Even absent AI takeover, I’m quite worried about lock-in. I think we could easily lock in AIs that are or are not moral patients and have little ability to revisit that decision later, and I think it would be better to lock in AIs that are moral patients if we have to lock something in, since that opens up the possibility for the AIs to live good lives in the future.
I think it’s better to focus on AIs which we’d expect would have better preferences conditional on takeover
I agree that seems like the higher-order bit, but it’s not an argument that making AIs moral patients is bad, just that it’s not the most important thing to focus on (which I agree with).
I would have guessed that “making AIs be moral patients” looks like “make AIs have their own independent preferences/objectives which we intentionally don’t control precisely”, which increases misalignment risks.
At a more basic level, if AIs are moral patients, then various safety measures will have welfare downsides, and AIs would have plausible deniability for opposing those safety measures. IMO, the right response to an AI taking a stand against your safety measures for AI welfare reasons is “Oh shit, either this AI is misaligned or it has welfare. Either way, this isn’t what we wanted and needs to be addressed; we should train our AI differently to avoid this.”
Even absent AI takeover, I’m quite worried about lock-in. I think we could easily lock in AIs that are or are not moral patients and have little ability to revisit that decision later
I don’t understand: won’t all the value come from minds intentionally created for value, rather than from the minds of the laborers? Also, won’t the architecture and design of AIs radically shift once humans aren’t running day-to-day operations?
I don’t understand the type of lock-in you’re imagining, but it naively sounds like a world which has negligible longtermist value (because we got locked into obscure specifics like this), so making it somewhat better isn’t important.
I also separately don’t buy that it’s riskier to build AIs that are sentient
Interesting! Aside from the implications for human agency/power, this seems worse because of the risk of AI suffering—if we build sentient AIs we need to be way more careful about how we treat/use them.
Exactly. Bringing a new kind of moral patient into existence is a moral hazard, because once they exist, we will have obligations toward them, e.g. providing them with limited resources (like land) and giving them part of our political power via voting rights. That’s analogous to Parfit’s Mere Addition Paradox, which leads to the Repugnant Conclusion; in this case, the conclusion is human marginalization.
(How could “land” possibly be a limited resource, especially in the context of future AIs? The world doesn’t exist solely on the immutable surface of Earth...)
I mean, if you interpret “land” in a Georgist sense, as the sum of all natural resources of the reachable universe, then yes, it’s finite. And the fights for carving up that pie can start long before our grabby-alien hands have seized all of it. (The property rights to the Andromeda Galaxy can be up for sale long before our Von Neumann probes reach it.)
The salient referent is compute, sure; my point is that it’s startling to see what should in this context be compute within the future lightcone being (very indirectly) called “land”. (I do understand that this was meant as an example clarifying the meaning of “limited resources”, and so it makes perfect sense when decontextualized. It’s just not an example that fits that well when considered within this particular context.)
(I’m guessing the physical world is unlikely to matter in the long run other than as a substrate for implementing compute. For that reason, the importance of understanding the physical world for normative or philosophical reasons seems limited. It matters more how ethics and decision theory work for abstract computations, the meaningful content of the contingent physical computronium.)
A population of AI agents could marginalize humans significantly before they are intelligent enough to easily (and quickly!) create more Earths.