I would have guessed that “making AIs be moral patients” looks like “make AIs have their own independent preferences/objectives which we intentionally don’t control precisely” which increases misalignment risks.
At a more basic level, if AIs are moral patients, then there will be downsides for various safety measures and AIs would have plausible deniability for being opposed to safety measures. IMO, the right response to the AI taking a stand against your safety measures for AI welfare reasons is “Oh shit, either this AI is misaligned or it has welfare. Either way this isn’t what we wanted and needs to be addressed, we should train our AI differently to avoid this.”
Even absent AI takeover, I’m quite worried about lock-in. I think we could easily lock in AIs that are or are not moral patients and have little ability to revisit that decision later.
I don’t understand — won’t all the value come from minds intentionally created for value rather than from the minds of the laborers? Also, won’t the architecture and design of AIs radically shift after humans aren’t running day-to-day operations?
I don’t understand the type of lock-in you’re imagining, but it naively sounds like a world with negligible longtermist value (because we got locked into obscure specifics like this), so making it somewhat better isn’t important.