People are very worried about a future in which a lot of the Internet is AI-generated. I’m kinda not. So far, AIs are more truth-tracking and kinder than humans. I think the default (conditional on OK alignment) is that an Internet that includes a much higher population of AIs is a much better experience for humans than the current Internet, which is full of bullying and lies.
All such discussions hinge on AI being relatively aligned, though. Of course, an Internet full of misaligned AIs would be bad for humans, but the reason is human disempowerment, not any of the usual reasons people say such an Internet would be terrible.
I think the problem is that the competitive dynamics that make humans worse on the internet (e.g., short, epistemically ungrounded outrage bait gets more engagement than careful, reasoned analysis) will apply to AIs just as they do to humans.
Yup, but the AIs are massively less likely to help with creating cruel content. There will be a huge asymmetry in what they will be willing to generate.
Imagine an Internet where half the population is Grant Sanderson (the creator of 3Blue1Brown). That’d be awesome. Grant Sanderson has the same incentives as anyone else to create cruel and false content, but he just doesn’t.
That would be awesome! For me!
But I don’t think the majority of people in the world would prefer that to the current internet, much less engage with it more than they do now. Most people find math boring, even when it is explained as well as Grant explains it. There would be an incentive to produce content that is more engaging to most of the population than linear algebra explanations.