Content generation. Where do we draw the line?

If you want to be affected by other people, if you want to live in a culture with other people, then I believe the four statements below are true:

  • You can’t generate all of your “content”.

This would basically mean living in the Matrix without other people. Consumerist solipsism. Everything you see is generated, but not by other people.

  • You can’t 100% control what content you (can) consume.

For example, if you read someone’s story, you let the author control what you experience. If you don’t let anybody control what you experience, you aren’t affected by other people.

  • “Symbols” (people’s output) in the culture shouldn’t have an absolutely arbitrary value.

Again, this is about control. If you want to be affected by other people, by their culture, you can’t have 100% control over the value of their output.

“Right now I feel like consuming human-made content. But any minute I may decide that AI-generated content is more valuable and switch to that.” Attitudes like this make the value of people’s output completely arbitrary, decided by you on a whim.

  • Other people should have some control over their image and output.

If you want to be affected by other people, you can’t have 100% control over their image and output. If you create countless variations of someone’s personality and exploit/milk them to death, it isn’t healthy for the culture.

You’re violating and destroying the boundaries that allow the other person’s personality to exist (inside your mind or inside the culture).

...

But where do we draw the line?

I’m not sure you can mix the culture of content generation/“AI replacement” with human culture. I feel that with every step that weakens the principles above, the damage to human culture will grow exponentially.

The lost message

Imagine a person you don’t know. You don’t care about them. Even worse, you’re put off by what they’re saying and don’t want to listen. Or maybe you just aren’t interested in the “genre” of their message.

But that person may still have a valuable message for you. And that person still has a chance to reach you:

  1. They share their message with other people.

  2. The message becomes popular in the culture.

  3. You notice the popularity. You check out the message again. Or someone explains it to you.

But if any person can switch to AI-generated content at any minute, transmitting the message may become infinitely harder or outright impossible.

“But AI can generate a better message for me! Even one I wouldn’t initially like!”

Then we’re back at square one: you don’t want to be affected by other people.

Rights to exist

Consciousness and personality don’t exist in a vacuum; they need a medium to be expressed. For example, text messages or drawings.

When you say “hey, I can generate your output in any medium!”, you’re saying “I can deny you existence, I can lock you out of the world”.

I’m not sure that’s a good idea or a fun future.

...

So, I don’t really see where this “content generation” is going in the long run. Or even in the very short run (GPT-4 plus DALL-E, or “DALL-E 2”, for everyone).

“Do you care that your favorite piece of art/writing was made by a human?” is the least relevant question you have to consider. The prior questions are: do you care about people in general? Do you care about people’s ability to communicate? Do you care about people’s basic rights to be seen and to express themselves?

If “yes”, where do you draw the line, and how do you make sure it’s a solid line? I don’t care whether you think “DALL-E good” or “DALL-E bad”; I care where you draw the line. What condition needs to break for you to say “wait, this isn’t what I wanted, something bad is happening”?

If my arguments miss something, it doesn’t matter: just tell me where you draw the line. What wouldn’t you want to generate, violate, or control? What would be the deal-breaker for you?