I was recently thinking about written text in general. In the past, when literacy wasn’t the norm, only the most educated people could write a complete book. That doesn’t necessarily mean all books were smart or good for humanity (e.g. the Malleus Maleficarum), but at least the corpus was biased toward literate and educated people.
With social networks, it seems to be the other way round. Various kinds of crazy people produce the overwhelming majority of the text, simply because they can dedicate 16 hours a day to it, and since they don’t spend time thinking or researching, even their output per minute is astronomical.
And I already worry about the impact on humans. I think the average person doesn’t think much for themselves, but rather emulates the perceived consensus of their environment. In the past, that mostly meant trying to emulate the smart people, even if unsuccessfully. Today, it means waves of contagious idiocy (which can be further weaponized by political actors who can employ an army of trolls… or, more recently, LLMs).
I actually have somewhat higher hopes for LLMs than for humans here. LLMs seem better at noticing patterns and correlations. They may figure out that one psychosis is related to another, so once you teach them “not that”, the lesson may generalize. For the average human, by contrast, every new craziness seems to be a separate case. But this needs to be empirically verified.
Actually, it would be cool if LLMs could figure out the general factor of “rationality” as a pattern and sort texts accordingly. (Unfortunately, five minutes later someone would ask the LLMs to generate some bullshit using the rationality pattern...)