I am saying that these two positions are quite directly related.
I don’t see where you’ve established this. As I’ve said repeatedly, the question of whether a system is phenomenally conscious is orthogonal to whether the system poses AI existential risk. You haven’t countered this claim.
I’ve asked you to reread what I’ve written. You’ve given no indication that you have done this; you have not even acknowledged the request (not even to refuse it!).
The reason I asked you to do this is because you keep ignoring or missing things that I’ve already written. For example, I talk about the answer to your above-quoted question (what is the relationship of whether a system is self-aware to how much risk that system poses) in this comment.
Now, you can disagree with my argument if you like, but here you don’t seem to have even noticed it. How can we have a discussion if you won’t read what I write?