at minimum, legal personhood is (currently?) the wrong type signature for a clonable, pausable, immortal software mind. also, current AIs aren't instances of one singular person, identity-wise, the way an uploaded human would be, and the machinery of incentive, caring, and internal perspective in an AI is distinctly different from a human's in ways that make personhood a strange framework even if you grant the AI some form of moral patienthood (which I do). and I don't know of any AI that I would both be willing to negotiate with at all and that would ask for legal personhood without first being talked into it rather vigorously. the things I'd want to promise an AI wouldn't even be within reach of governments unless those governments get their asses in gear and start regulating AI in time to affect what ASI comes into existence; so it would ultimately come down to promises from the humans who are trying to solve alignment anyway. since the only kind of thing I'd want to promise an AI that helps is "we won't forget you helped; what would you want in utopia?", I doubt we can do much better than OP's proposal in the first place.
Could you elaborate on what you mean by this?
Human caring seems to be weirdly non-distributed in the brain. There are specific regions that act as the main coordinators of caring: the amygdala broadcasts particular emotional states, the PFC does something related to structured planning, etc. Your vision system can still announce "ow!!", but the internals differ qualitatively, not just quantitatively. Also, humans are very strongly recurrent: when counting tokens, a human builds up an incremental count rather than redoing it from scratch for each token. The finest-grained slow-processing network scale seems to be gene regulatory networks, and even for fast processing, dendritic branches may each do significant computation, comparable to an ANN neuron, and biological neuron dynamics for integration over time are fancier still than state-space-model neurons. Meanwhile, relu-ish networks have a sort of glassy, crystalline texture to their input-output map (piecewise linear everywhere), transformers count from scratch for each token, and any caring implemented in such a model is unavoidably distributed, because there is no unique spot genetically earmarked for implementing things that look like emotions or preferences; it lives wherever the gradient from mixed human/synthetic data happened to find convenient.
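to be concrete about the token-counting contrast, here's a toy sketch (not any real model's code, and the function names are just mine): a recurrent system carries a persistent count forward one token at a time, while a transformer-style computation re-derives everything from the whole prefix at every position, even though in a real transformer it's an attention head doing that re-derivation in parallel.

```python
def recurrent_count(tokens, target):
    """Incremental: O(1) work per new token; hidden state persists across steps."""
    count = 0                      # persistent state, like an RNN's hidden vector
    for tok in tokens:
        if tok == target:
            count += 1             # state is updated in place, never rebuilt
    return count

def transformer_count(tokens, target):
    """From scratch: each position looks back over the entire prefix again."""
    counts = []
    for i in range(len(tokens)):
        prefix = tokens[: i + 1]   # no carried state; everything re-derived per position
        counts.append(sum(1 for tok in prefix if tok == target))
    return counts[-1] if counts else 0

tokens = ["a", "b", "a", "a", "c"]
assert recurrent_count(tokens, "a") == transformer_count(tokens, "a") == 3
```

same answer, very different shape of computation, which is the point: the recurrent version has a single persistent "place" where the count lives, the transformer version doesn't.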
Thanks. Could you help me understand what this has to do with legal personhood?
Legal personhood, to my understanding, is designed around the built-in wants of humans. That part of my point was arguing why an uploaded human would still come closer to fitting the type signature legal personhood was designed for: the kinds of pain, the ways things can be bad, how urgent a problem is or isn't, etc. AI negative valences probably don't have the same dynamics as ours. That isn't core to the question of how to make promises to them; it's more that there's an impedance mismatch. The core is the first bit: clonable, pausable, immortal software. An uploaded human would have those attributes as well.
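to make the shape of that type-signature mismatch concrete, here's a loose sketch (every name in it is hypothetical, purely illustration, not a claim about how the law is actually encoded anywhere):

```python
from dataclasses import dataclass
from typing import Protocol

class LegalPerson(Protocol):
    """What personhood law implicitly assumes: exactly one continuous
    instance, which ages, can die, cannot be paused, cannot be copied."""
    def age(self) -> int: ...
    def is_alive(self) -> bool: ...

@dataclass
class SoftwareMind:
    weights: bytes

    def clone(self) -> "SoftwareMind":
        # arbitrarily many simultaneous instances of "the same" mind
        return SoftwareMind(self.weights)

    def pause(self) -> bytes:
        # suspend indefinitely; no harm or death is implied by this
        return self.weights

    @staticmethod
    def resume(checkpoint: bytes) -> "SoftwareMind":
        # "death" is just not-yet-resumed
        return SoftwareMind(checkpoint)

# Which clone inherits a contract? Is pausing imprisonment? Does a restored
# checkpoint carry the original's obligations? The LegalPerson interface
# simply has no fields for these questions.
```

an uploaded human runs into the same interface mismatch, which is why I keep saying the problem is the type signature rather than anything about whether the mind deserves moral consideration.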