But a thoroughly mistaken (and, quite frankly, just nonsensical) one.
Updating one’s framework to take new information into account is a standard position in the rationalist sphere. Whether you want to treat this as a moral obligation, an epistemic obligation, or just good practice, the position is not obviously nonsensical, so you’ll need to provide an argument rather than simply assert that it is.
New information, yes. But that’s not “expand our moral understanding”; that’s just… gaining new information. There is a sharp distinction between these things.
But you’re not extra sure.
At this point, you’re just denying something because you don’t like the conclusion, not because you have some disagreement with the reasoning.
I mean, this is really simple. Someone creates a dangerous thing. Destroying the dangerous thing is safer than keeping the dangerous thing around. That’s it, that’s the whole logic behind the “extra sure” argument.
Why? Because punishment needs to deter the decision-maker in order to prevent repetition. Your proposal adds moral cost for no gain. In fact, it reverses the incentive: you’re punishing the victim while leaving the reckless developer undeterred.
I already said that we should also punish the person who created the self-aware AI. And I know that you know this, because you not only replied to my comment where I said this, but in fact quoted the specific part where I said this. So please do not now pretend that I didn’t say that. It’s dishonest.
You’re conflating 2 positions:
I am not conflating anything. I am saying that these two positions are quite directly related. I say again: you have failed to understand my point. I can try to re-explain, but before I do that, please carefully reread what I have written.
I think we’re reaching the point of diminishing returns for this discussion, so this will be my last reply.
A couple of last points:
So please do not now pretend that I didn’t say that. It’s dishonest.
I didn’t ignore that you said this—I was trying (perhaps poorly) to make the following point:
The decision to punish creators is good (you endorse it) and is the way that incentives normally work. On my view, the decision to punish the creations is bad and has the incentive structure backwards, as it punishes the wrong party.
My point is that the incentive structure is backwards when you punish the creation, not that you didn’t also advocate for the correct incentive structure by punishing the creator.
I am saying that these two positions are quite directly related.
I don’t see where you’ve established this. As I’ve said repeatedly, the question of whether a system is phenomenally conscious is orthogonal to whether the system poses AI existential risk. You haven’t countered this claim.
Anyway, thanks for the exchange.
I’ve asked you to reread what I’ve written. You’ve given no indication that you have done this; you have not even acknowledged the request (not even to refuse it!).
The reason I asked you to do this is that you keep ignoring or missing things that I’ve already written. For example, I discuss the answer to your above-quoted question (what is the relationship of whether a system is self-aware to how much risk that system poses) in this comment.
Now, you can disagree with my argument if you like, but here you don’t seem to have even noticed it. How can we have a discussion if you won’t read what I write?