The prompt was intentionally changed before it made it to the AI. “Who cares?” is a bad-faith argument, since you don’t go out of your way to modify something if you don’t care. Generating diverse pictures is obviously not good if you have specific requests. If you ask for a picture of fries, it should not show you burgers. And this “diversity” was added only to modify humans (rather than adding noise to all inputs to increase the output space). And correct me if I’m wrong, but if you generate a picture with all black or minority characters, it doesn’t add any white people to achieve that balance, does it? I think it’s a one-sided intervention, rather than a normalizing one. Truth is, what they’re doing is racist. All because they think reality and history are racist (which makes them want to correct it).
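To make the “one-sided intervention” point concrete, here is a purely hypothetical sketch of what such a prompt-rewriting layer could look like (this is my own illustration, not Google’s actual code; the keyword list and modifiers are invented). Note that only prompts mentioning people get modified — everything else passes through untouched:

```python
import random

# Hypothetical person-detection list; a real system would use a classifier.
PERSON_TERMS = {"person", "man", "woman", "pirate", "doctor", "king"}
MODIFIERS = ["South Asian", "Black", "Hispanic", "white"]

def rewrite_prompt(prompt: str) -> str:
    """Append a demographic modifier only when the prompt mentions a person."""
    if any(term in prompt.lower().split() for term in PERSON_TERMS):
        return f"{prompt}, depicted as a {random.choice(MODIFIERS)} person"
    return prompt  # non-human subjects pass through untouched

print(rewrite_prompt("a plate of fries"))  # unchanged
print(rewrite_prompt("a pirate"))          # a modifier gets appended
```

The asymmetry is visible in the structure itself: the rewrite only ever fires on human subjects, which is exactly what makes it an intervention rather than generic output-space noise.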
There’s one valid point here (which sounds sweet, and which I don’t buy as the main intention), though: that most of the training data features white people. So if you want a picture of a black pirate, you might have to explicitly ask for it, as “pirate” would generate white people 90% of the time (percentage is guesswork). But I’m sure big muscles are rather rare as well, and tattoos, and disabilities. I wear glasses, but does this AI have to generate people with glasses XX% of the time in order not to offend me? I wouldn’t want that. There are better solutions to this, like adding a checkbox which generates people based on your location, or learning user preferences over time.
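The “learning user preferences” alternative could be as simple as counting which attributes a given user keeps choosing and using the most frequent one as their default. A toy sketch (entirely my own assumption about how such a feature might work):

```python
from collections import Counter

class PreferenceLearner:
    """Toy sketch: remember which attributes a user keeps choosing,
    and use the most frequent one as that user's future default."""

    def __init__(self):
        self.counts = Counter()

    def record(self, attribute: str):
        # Called whenever the user picks or keeps an attribute.
        self.counts[attribute] += 1

    def default(self, fallback: str = "unspecified") -> str:
        if not self.counts:
            return fallback
        return self.counts.most_common(1)[0][0]

prefs = PreferenceLearner()
prefs.record("wears glasses")
prefs.record("wears glasses")
prefs.record("tattooed")
print(prefs.default())  # "wears glasses"
```

The point of this design is that defaults drift toward each individual user, rather than being one global, hard-coded intervention applied to everyone.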
But to say this is about racial equality would be lying. The AI is biased in all areas which are currently controversial, and it’s basically a perfect fit with the modern left. Even with viewpoints which are so new that the majority of training data will have the opposite bias.
The emoji thing doesn’t surprise or scare me. It’s a quirk with negations: “Not X” is different from “Y”, even if Y is the opposite of X.
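The negation quirk is easy to illustrate (my own toy example, not from the original discussion): under a naive bag-of-words view, “not happy” shares a token with “happy” but none with “sad”, so a surface-level system treats the negation as closer to the very thing it negates:

```python
# Toy illustration: token overlap between phrases.
# "not happy" overlaps with "happy" (the thing it negates),
# but has zero overlap with "sad" (its actual meaning).
def token_overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

print(token_overlap("not happy", "happy"))  # 1
print(token_overlap("not happy", "sad"))    # 0
```

Real models are subtler than bag-of-words, but the same pull exists: mentioning X, even to negate it, still activates X.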
As for solving all this: Yes, it’s impossible to never offend anyone. That’s not the problem we should be trying to solve. Great point that value judgements are necessarily subjective, by the way.
Also, Gemini is super annoying because it’s condescending, much like millennial writing is annoying and condescending. (I don’t expect anyone to understand the connection, but I should write it anyway, as it’s the truth.)
Now, let’s notice how talking about these issues requires lowering ourselves to a more subjective and less useful paradigm, and that the solution is not found in this paradigm. We need a layer or two more of “meta”: an outside perspective which can model the inside perspective and fix it, so that we don’t try to solve the issue from within (where it’s impossible to do so).