It is impossible to be “morally obliged to try to expand our moral understanding”, because our moral understanding is what supplies us with moral obligations in the first place.
Ok my wording was a little imprecise, but treating expansion of our moral framework as a kind of second-order moral obligation is a standard meta-ethical position.
By all means punish the creators, but if we only punish the creators, then there is no incentive for people (like you) who disapprove of destroying the created AI to work to prevent that creation in the first place.
The incentive for people like me to prevent the creation of conscious AI is that (as you’ve noted multiple times during this discussion) the creation of conscious AI introduces myriad philosophical dilemmas and ethical conundrums, which we ought to avoid by not creating such systems in the first place. Why should we impose an additional “incentive” which punishes the wrong party?
The only reason to object to this logic is if you not only object to destroying self-aware AIs, but in fact want them created in the first place. That, of course, is a very different matter—specifically, a matter of directly conflicting values.
The reason to object to the logic is that purposefully erasing a conscious entity which is potentially capable of valenced experience is such a grave moral wrong that it shouldn’t be a policy we endorse.
The precaution I am suggesting is a precaution against all humans dying (if not worse!). Destroying a self-aware AI (which is anyhow not nearly as bad as killing a human) is, morally speaking, less than a rounding error in comparison.
This is a total non sequitur. The standard AI safety and existential-risk concerns go through by appealing to e.g. misalignment, power-seeking behaviour, etc. These hold independently of whether the system is conscious. A completely unconscious system could be goal-directed and agentic enough to be misaligned and pose an existential risk to everyone on Earth. Likewise, a conscious system could be incredibly constrained and non-agentic.
If you want to argue that we ought to permanently erase a system which exhibits consciousness if it poses an existential risk to humanity, that is a defensible position, but it’s very different from what you’ve been arguing up until this point: that we ought to permanently erase an AI system the moment it’s created, because of the potential ethical concerns.
Ok my wording was a little imprecise, but treating expansion of our moral framework as a kind of second-order moral obligation is a standard meta-ethical position.
But a thoroughly mistaken (and, quite frankly, just nonsensical) one.
Why should we impose an additional “incentive” which punishes the wrong party?
With things like this, it’s really best to be extra-sure.
The reason to object to the logic is that purposefully erasing a conscious entity which is potentially capable of valenced experience is such a grave moral wrong that it shouldn’t be a policy we endorse.
The policy we’re endorsing, in this scenario, is “don’t create non-human conscious entities”. The destruction is the enforcement of the policy. If you don’t want it to happen, then ensure that it’s not necessary.
This is a total non sequitur. The standard AI safety and existential-risk concerns go through by appealing to e.g. misalignment, power-seeking behaviour, etc. These hold independently of whether the system is conscious.
I’m sorry, but no, it absolutely is not a non sequitur; if you think otherwise, then you’ve failed to understand my point. Please go back and reread my comments in this thread. (If you really don’t see what I’m saying, after doing that, then I will try to explain again.)
But a thoroughly mistaken (and, quite frankly, just nonsensical) one.
Updating one’s framework to take new information into account is a standard position in the rationalist sphere. Whether you want to treat this as a moral obligation, an epistemic obligation, or just good practice, the position is not obviously nonsensical, so you’ll need to provide an argument rather than simply assert that it is.
If we didn’t accept the merit of updating our moral framework to take new information into account, we wouldn’t be able to ensure that our moral framework tracks reality.
With things like this, it’s really best to be extra-sure.
But you’re not extra sure.
If a science lab were found to be illegally breeding sentient super-chimps, we should punish the lab, not the chimps.
Why? Because punishment needs to deter the decision-maker in order to prevent repetition. Your proposal adds moral cost for no deterrent gain. In fact, it reverses the incentive: you’re punishing the victim while leaving the reckless developer undeterred.
I’m sorry, but no, it absolutely is not a non sequitur; if you think otherwise, then you’ve failed to understand my point. Please go back and reread my comments in this thread. (If you really don’t see what I’m saying, after doing that, then I will try to explain again.)
You’re conflating two positions:
1) We ought to permanently erase a system which exhibits consciousness if it poses an existential risk to humanity.
2) We ought to permanently erase an AI system the moment it’s created, because of the potential ethical concerns.
Bringing up AI existential risk is a non sequitur with respect to 2), not 1).
We’re not disputing 1); I think it could be defensible with some careful argumentation.
The reason existential risk is a non sequitur with respect to 2) is that phenomenal consciousness is orthogonal to all of the things normally associated with AI existential risk, such as scheming, misalignment, etc. Phenomenal consciousness has nothing to do with these properties. If you want to argue that it does, fine, but you need an argument. You haven’t established that the presence of phenomenal consciousness leads to greater existential risk.
But a thoroughly mistaken (and, quite frankly, just nonsensical) one.
Updating one’s framework to take new information into account is a standard position in the rationalist sphere. Whether you want to treat this as a moral obligation, an epistemic obligation, or just good practice, the position is not obviously nonsensical, so you’ll need to provide an argument rather than simply assert that it is.
New information, yes. But that’s not “expand our moral understanding”, that’s just… gaining new information. There is a sharp distinction between these things.
But you’re not extra sure.
At this point, you’re just denying something because you don’t like the conclusion, not because you have some disagreement with the reasoning.
I mean, this is really simple. Someone creates a dangerous thing. Destroying the dangerous thing is safer than keeping the dangerous thing around. That’s it, that’s the whole logic behind the “extra sure” argument.
Why? Because punishment needs to deter the decision-maker in order to prevent repetition. Your proposal adds moral cost for no deterrent gain. In fact, it reverses the incentive: you’re punishing the victim while leaving the reckless developer undeterred.
I already said that we should also punish the person who created the self-aware AI. And I know that you know this, because you not only replied to my comment where I said this, but in fact quoted the specific part where I said this. So please do not now pretend that I didn’t say that. It’s dishonest.
You’re conflating two positions:
I am not conflating anything. I am saying that these two positions are quite directly related. I say again: you have failed to understand my point. I can try to re-explain, but before I do that, please carefully reread what I have written.
I think we’re reaching the point of diminishing returns for this discussion, so this will be my last reply.
A couple of last points:
So please do not now pretend that I didn’t say that. It’s dishonest.
I didn’t ignore that you said this—I was trying (perhaps poorly) to make the following point:
The decision to punish creators is good (you endorse it) and is the way incentives normally work. On my view, the decision to punish the creations is bad and has the incentive structure backwards, as it punishes the wrong party.
My point is that the incentive structure is backwards when you punish the creation, not that you didn’t also advocate for the correct incentive structure by punishing the creator.
I am saying that these two positions are quite directly related.
I don’t see where you’ve established this. As I’ve said repeatedly, the question of whether a system is phenomenally conscious is orthogonal to whether the system poses AI existential risk. You haven’t countered this claim.
Anyway, thanks for the exchange.
I am saying that these two positions are quite directly related.
I don’t see where you’ve established this. As I’ve said repeatedly, the question of whether a system is phenomenally conscious is orthogonal to whether the system poses AI existential risk. You haven’t countered this claim.
I’ve asked you to reread what I’ve written. You’ve given no indication that you have done this; you have not even acknowledged the request (not even to refuse it!).
The reason I asked you to do this is that you keep ignoring or missing things that I’ve already written. For example, I address your above-quoted question (what the relationship is between a system’s being self-aware and how much risk that system poses) in this comment.
Now, you can disagree with my argument if you like, but here you don’t seem to have even noticed it. How can we have a discussion if you won’t read what I write?