Human caring seems to be weirdly non-distributed in the brain. There are specific regions that act as the main coordinators of caring: the amygdala broadcasts specific emotional states, the PFC does something related to structured planning, and so on. Your vision system can still announce “ow!!”, but the internals are complicated qualitatively, not just quantitatively. Humans are also very strongly recurrent, which means that when counting tokens you build up an incremental count rather than redoing it from scratch for each token. The finest-grained slow-processing network scale seems to be gene regulatory networks; even for fast processing, dendritic branches seem to maybe do significant computation, comparable to ANN neurons, and biological neuron dynamics for integration over time are even fancier than state space model neurons. Meanwhile, ReLU-ish networks have a sort of glassy, crystal-like texture to their input-output map, transformers count from scratch for each token, and any caring implemented in a model is unavoidably distributed, because there isn’t a unique spot that is genetically preferred to implement things that look like emotions or preferences; it’s just wherever the gradient from mixed human/synthetic data happened to find convenient.
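To make the counting contrast concrete, here’s a toy sketch (purely illustrative on my part; the function names and tokens are made up, and real attention does much more than literal prefix counting):

```python
# Toy illustration (mine, not anyone's actual model) of incremental vs.
# from-scratch counting. A recurrent model can carry a running count in its
# state; a transformer, lacking carried state between positions, in effect
# re-derives the count at each position by attending over the whole prefix.

def recurrent_count(tokens, target):
    # Recurrent style: one O(1) state update per token.
    state = 0
    for tok in tokens:
        state += (tok == target)  # hidden state persists across steps
    return state

def transformer_style_count(tokens, target):
    # Transformer style (conceptual, no caching): at each position, look
    # back over the entire prefix and recompute the tally from scratch.
    count_at_last_position = 0
    for i in range(len(tokens)):
        prefix = tokens[: i + 1]
        count_at_last_position = sum(t == target for t in prefix)
    return count_at_last_position

tokens = list("abaac")
assert recurrent_count(tokens, "a") == transformer_style_count(tokens, "a") == 3
```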
Thanks. Could you help me understand what this has to do with legal personhood?
Legal personhood seems, to my understanding, to be designed around the built-in wants of humans. That part of my point was to argue why an uploaded human would still be closer to fitting the type signature that legal personhood is designed for: kinds of pain, ways things can be bad, how urgent a problem is or isn’t, etc. AI negative valences probably don’t have the same dynamics as ours. That isn’t core to the question of how to make promises to them; it’s more that there’s an impedance mismatch. The core is the first bit: clonable, pausable, immortal software. An uploaded human would have those attributes as well.