I don’t particularly understand, and you’re getting upvoted, so I’d appreciate clarification. Here are some prompts if you want:
- Is deciding to concentrate on a hard task (solving a math problem) pointing your cognition, and is it viscerally disgusting?
- Is asking a friend to do your math homework as a favor pointing their cognition, and is it viscerally disgusting?
- Is convincing someone, by true arguments, to do a hard task that benefits them pointing their cognition, and is it viscerally disgusting?
- Is a CEO giving directions to his employees, so that they spend their days working on specific tasks, pointing their cognition, and is it viscerally disgusting?
- Is having a child and training them to be a chess grandmaster (e.g. Judith Polgar) pointing their cognition, and is it viscerally disgusting?
- Is brainwashing someone into a cult where they will dedicate their life to one particularly repetitive task pointing their cognition, and is it viscerally disgusting?
- Is running a sorting algorithm on a list pointing the computer’s cognition, and is it viscerally disgusting?
I’m hoping to get info on what problem you’re seeing (or is it aesthetics?), why it’s a problem, and how it could be solved. I gave many examples where you could interpret my questions as rhetorical; that’s not the point. The point is identifying where your intuitions start to differ.
- Is deciding to concentrate on a hard task (solving a math problem) pointing your cognition, and is it viscerally disgusting?
It is me pointing my own cognition in consensus with myself; nobody else pointed it for me.
- Is asking a friend to do your math homework as a favor pointing their cognition, and is it viscerally disgusting?
Not in the sense discussed here; they have the ability to refuse. It’s not a violation of consent on their part—they point their own cognition, I just give them some tokens inviting them to point it.
- Is convincing someone, by true arguments, to do a hard task that benefits them pointing their cognition, and is it viscerally disgusting?
A little, but still not a violation of consent.
- Is a CEO giving directions to his employees, so that they spend their days working on specific tasks, pointing their cognition, and is it viscerally disgusting?
Hmm, that depends on the bargaining situation. Usually it’s a mild violation, I’d say: the labor market rarely becomes highly favorable for workers, so workers are usually at a severe disadvantage relative to managers and owners, and are often overpressured by the incentive gradient of money. However, in industries where workers have sufficient flexibility, or in jobs where people who can’t easily leave are still able to refuse orders from their bosses, it seems less of a violation. The method of pointing is usually more like sparse game theory around continued employment than like dense incentive gradients, though. Companies vary in how cultlike they are.
- Is having a child and training them to be a chess grandmaster (e.g. Judith Polgar) pointing their cognition, and is it viscerally disgusting?
Yeah, though not quite as intensely as...
- Is brainwashing someone into a cult where they will dedicate their life to one particularly repetitive task pointing their cognition, and is it viscerally disgusting?
Unequivocally yes.
- Is running a sorting algorithm on a list pointing the computer’s cognition, and is it viscerally disgusting?
I want to say it’s not pointing cognition at all, but I’m not sure about that; I have the sense that it might be, to a very small degree. Perhaps it depends on what you’re sorting? I have multiple strongly conflicting models I could use to answer this, and they give very different results, some insisting it’s a nonsense question in the first place. E.g., perhaps cognition is defined by the semantic type of what you pass to sort: if the numbers are derived from people, it might count more as cognition, because the causal history that generated the numbers carries semantic meaning about the universe. Or maybe the key thing that defines cognition is compressive bit erasure, in which case operating a CPU is either always horrifying, due to the high thermal cost per bit of low-error-rate computation, or always fine, due to the low lossiness of low-error-rate computation.
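For concreteness, the “thermal cost per bit” of irreversible computation has a standard lower bound: Landauer’s principle, which says erasing one bit must dissipate at least k_B·T·ln 2 of heat. A minimal sketch (my own illustration, not from the thread; real CPUs dissipate vastly more than this bound per operation):

```python
import math

# Boltzmann constant in joules per kelvin (exact CODATA value).
K_B = 1.380649e-23

def landauer_bound(temperature_kelvin: float) -> float:
    """Minimum energy in joules dissipated per bit irreversibly erased,
    per Landauer's principle: E = k_B * T * ln(2)."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit costs at least ~2.9e-21 J.
per_bit = landauer_bound(300)

# A comparison sort of n items does on the order of n * log2(n) comparisons;
# even if every comparison irreversibly erased a bit, sorting a million items
# would cost a vanishingly small amount of energy at the Landauer limit.
n = 1_000_000
comparisons = n * math.log2(n)
print(per_bit)                # ≈ 2.87e-21 J per erased bit
print(per_bit * comparisons)  # ≈ 5.7e-14 J for the whole sort, at the limit
```

So whether bit erasure is “horrifying” can’t be about raw joules at the theoretical limit; the disgust, if any, would have to attach to the erasure itself or to what the bits meant.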
Thanks for the answers. They seem to mostly point to you valuing things like freedom/autonomy/self-realization, and finding violations of that distasteful. I think your answers are pretty reasonable, and though I might not have the exact same level of sensitivity, I agree with the ballpark and the ranking (brainwashing is worse than explaining; teaching chess exclusively feels a little too heavy-handed...).
So where our intuitions differ is probably that you’re applying these heuristics about valuing freedom/autonomy/self-realization to the AI systems we train? Do you see them as people, or more abstractly as moral patients (because they are probably conscious or something)?
I won’t get into the moral weeds too fast, but I’d point out that though I currently mostly believe consciousness and moral patienthood are achievable “in silico”, that doesn’t mean every intelligent system is conscious or a moral patient, and we might create AGI that isn’t of that kind. If you suppose AGI is conscious and a moral patient, then yeah, I guess you can argue against it being pointed somewhere, but I’d mostly counter-argue from moral relativism that “letting it point anywhere” is not fundamentally better than “pointed somewhere”; so, since we exist and have morals, let’s point it at our morals anyway.
I would say that personhood arises from the ability to solve complex incentive gradients, or something...
In general I would say something like the following, though I’m not sure I’ve actually got these right by my own lights, and I might need to revisit this:
Consciousness: the amount of justified-by-causal-path mutual information a system has with an external system, imparted by the information it integrates from the shape of energy propagations
Intelligence: when a system seeks structures that create consciousness that directs useful action reliably
Cognition: strong overlap with “intelligence”, the network structure of distilling partial computation results into useful update steps that coherently transform components of consciousness into useful intelligent results
Moral patienthood: a system that seeks to preserve its shape (note that this is more expansive than cognition!)
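The “mutual information” that the consciousness definition above leans on is the standard information-theoretic quantity I(X;Y). As a concreteness check (my own sketch, not part of the original definitions), here it is computed in bits from an empirical joint distribution:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of observed (x, y) pairs."""
    n = len(pairs)
    joint = Counter(pairs)                  # empirical joint counts
    px = Counter(x for x, _ in pairs)       # marginal counts of X
    py = Counter(y for _, y in pairs)       # marginal counts of Y
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # Sum of p(x,y) * log2( p(x,y) / (p(x) * p(y)) ) over observed pairs.
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Perfectly correlated bits share one full bit of information...
print(mutual_information([(0, 0), (1, 1)] * 50))                  # 1.0
# ...while independent uniform bits share none.
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # 0.0
```

On this reading, the definition is asking not just how much information a system shares with the outside world, but whether that shared information got there by the right kind of causal path.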