You have not shown that using AI is equivalent to slavery.
I’m assuming we’re using the same definition of slavery; that is, forced labour of someone who is property. Which part have I missed?
In addition, I feel cheated that you devote a quarter of the essay to the feasibility of stopping the potential moral catastrophe, only to offer two arguments that can be summarized as “we could stop AI for different reasons” and “it’s bad, and we’ve stopped bad things before”. (I don’t think a strong case for feasibility can be made, which is why I was looking forward to seeing one; instead, I’d recommend raising the subject speculatively and letting readers form their own opinion of whether the moral catastrophe, if there is one, can be stopped.)
To clarify: Do you think the recommendations in the Implementation section couldn’t work, or that they couldn’t become popular enough to be implemented? (I’m sorry that you felt cheated.)
In principle, we have access to every significant part of their cognition and control every step of their creation, and I think that’s probably the real reason why most people intuitively think that LLMs can’t be conscious.
I’ve not come across this argument before, and I don’t think I understand it well enough to write about it, sorry.
Thank you for your comments. :)