Thanks for the answers. It seems they mostly point to you valuing stuff like freedom/autonomy/self-realization, and finding violations of that distasteful. I think your answers are pretty reasonable, and though I might not have the exact same level of sensitivity, I agree with the ballpark and the ranking (brainwashing is worse than explaining, teaching chess exclusively feels a little too heavy-handed…)
So where our intuitions differ is probably that you’re applying these heuristics about valuing freedom/autonomy/self-realization to the AI systems we train? Do you see them as people, or more abstractly as moral patients (because of them probably being conscious or something)?
Without getting into the moral weeds too fast, I’d point out that though I currently mostly believe consciousness and moral patienthood are quite achievable “in silico”, that doesn’t mean every intelligent system is conscious or a moral patient, and we might create AGI that isn’t of that kind. If you suppose AGI is conscious and a moral patient, then yeah, I guess you can argue against it being pointed somewhere, but I’d mostly counter-argue from moral relativism that “letting it point anywhere” is not fundamentally better than “pointed somewhere”, so since we exist and have morals, let’s point it at our morals anyway.
I would say that personhood arises from the ability to solve complex incentive gradients, or something along those lines...
In general I would say something like the following, though I’m not sure I’ve actually got these right according to my own opinions and I might need to revisit this:
Consciousness: the amount of justified-by-causal-path mutual information a system has with an external system, imparted by the information it integrates from the shape of energy propagations
Intelligence: when a system seeks structures that create consciousness that directs useful action reliably
Cognition: strong overlap with “intelligence”, the network structure of distilling partial computation results into useful update steps that coherently transform components of consciousness into useful intelligent results
Moral patienthood: a system that seeks to preserve its shape (note that this is more expansive than cognition!)
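Since the consciousness definition above leans on mutual information, here’s a toy sketch of how that quantity is computed for discrete variables (this is just the standard information-theoretic formula, not the “justified-by-causal-path” variant, which I don’t know how to formalize; the joint-distribution examples are made up for illustration):

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits, from a joint distribution
    given as a dict mapping (x, y) pairs to probabilities."""
    # Marginal distributions p(x) and p(y)
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    # I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Two perfectly correlated bits share 1 bit of information:
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # 1.0
# Two independent bits share none:
print(mutual_information({(0, 0): 0.25, (0, 1): 0.25,
                          (1, 0): 0.25, (1, 1): 0.25}))  # 0.0
```

On this reading, the definition would grade a system by how much of this kind of shared information it actually integrates from its causal contact with an external system, rather than by raw correlation alone.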