Fooling people into thinking they’re talking to a human when they’re actually talking to an AI should be banned for its own sake, independent of X-risk concerns.
I feel like your argument here is a little bit disingenuous about what is actually being proposed.
Consider the differences between the following positions:
1A: If you advertise food as GMO-free, it must contain no GMOs.
1B: If your food contains GMOs, you must actively mark it as ‘Contains GMOs’.
2A: If you advertise your product as being ‘Made in America’, it must be made in America.
2B: If your product is made in China, you must actively mark it as ‘Made in China’.
3A: If you advertise your art as AI-free, it must not be AI art.
3B: If you have AI art, you must actively mark it as ‘AI Art’.
(Incidentally, I support 1A, 2A and 3A, but oppose 1B, 2B and 3B).
For example, if an RPG rulebook contains AI art, should the writer/publisher have to actively disclose it? Does ‘putting AI-generated art in the rulebook’ count as ‘fooling people into thinking this was drawn by a human’? Or is this only a problem if the publisher has advertised a policy of not using AI art, which they are now breaking?
It sounds to me like what’s actually being proposed in the OP is 3B. The post says:
As in, if an AI wrote these words, you need it to be clear to a human reading the words that an AI wrote those words, or created that image.
Your phrasing makes it sound to me very much like you are trying to defend the position 3A.
If you support 3A and not 3B, I agree with you entirely, but think that it sounds like we both disagree with Zvi on this.
If you support 3B as well as 3A, I think phrasing the disagreement as being about ‘fooling people into thinking they’re talking to a human’ is somewhat misleading.
I think the synthesis here is that most people don’t know that much about AI capabilities, so if you are interacting with an AI in a situation that might reasonably lead you to believe you were interacting with a human, then that counts. For example, many live chat functions on company websites open with “You are now connected with Alice” or some such. On phone calls, hearing a voice that doesn’t clearly sound like a bot voice also counts. The fix wouldn’t have to be elaborate—they could just change “You are now connected with Alice” to “You are now connected with HelpBot.”
It’s a closer question if they just take away “you are now connected with Alice”, but there exist at least some situations where the overall experience would lead a reasonable consumer to assume they were interacting with a human.