I don’t say that the same policies must necessarily apply to AIs and humans. But I do say that if they don’t, then there should be a reason why they treat AIs and humans differently.
Why?
If a law treats people a certain way, there must be a reason for that, because people have rights.
But if a law treats non-people a certain way, there doesn’t need to be any reason for that. All that is required is that there be good reasons for what consequences the law has for people.
There does not seem to be any reason why the default should be to treat AIs and humans the same way (or to treat AIs in any particular way).
I think “humans are people and AIs aren’t” could be a perfectly good reason for treating them differently, and I didn’t intend to say otherwise. So, e.g., if Mikhail had said “Humans should be allowed to learn from anything they can read because doing so is a basic human right and it would be unjust to forbid that; today’s AIs aren’t the sort of things that have rights, so that doesn’t apply to them at all”, then that would have been a perfectly cromulent answer. (With, e.g., the implication that to whatever extent that’s the whole reason for treating them differently in this case, the appropriate rules might change dramatically if and when there are AIs that we find it appropriate to think of as persons having rights.)