Even here the incentives are clearly followed: If you perceive a problem, it makes sense for the media to talk about it—that will make you click the link. But whether the proposed solution works or not is irrelevant—you have already made your click. Actually, if the solution does not work, that is perhaps even better—the next article can offer you a different solution… or pretend to, but then recycle the old one; it doesn’t matter, you already clicked.
Though I wonder how much of this is done on purpose, and how much is just lazy people writing the first thing that comes to their mind, with no incentive to do it differently. (Are the journalists whose proposed solutions to social problems actually work paid better than the journalists whose solutions don’t? I don’t think so.)
I don’t know how tractable this particular problem is. A really big part of it is that a social media platform can easily tell which articles and which news outlets correlate with a user giving up on the platform for good (or at least spending fewer hours on it), and then show users fewer things like that in the long run. Superior versions of this article would be the first thing to go, and that’s with the tech level that social media platforms had a decade ago.
This results in the kind of optimization pressure that possibly no human is aware of (even with 2013-era ML), let alone the journalist themself. But all or most of the news articles that result can easily end up being pretty bad at getting people off of social media. The only solution I can think of seems to be more LessWrong and less social media.
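The mechanism described above can be made concrete with a toy sketch: a ranker that tracks, per article, the average change in user engagement after exposure, and quietly down-ranks whatever precedes a drop-off. All names and numbers here are illustrative inventions, not any real platform’s API; the point is only that no human ever has to label “this article helps people quit”.

```python
from collections import defaultdict

class EngagementRanker:
    """Toy sketch: rank articles by how engagement changed after users saw them."""

    def __init__(self):
        # article_id -> list of (hours on site after exposure - hours before)
        self.deltas = defaultdict(list)

    def record_exposure(self, article_id, hours_before, hours_after):
        self.deltas[article_id].append(hours_after - hours_before)

    def score(self, article_id):
        # Articles that correlate with users spending less time get lower
        # scores; the optimization pressure is purely statistical.
        d = self.deltas[article_id]
        return sum(d) / len(d) if d else 0.0

    def rank(self, article_ids):
        return sorted(article_ids, key=self.score, reverse=True)

ranker = EngagementRanker()
# Hypothetical data: "quit_social_media" precedes drops in time-on-site,
# "outrage_bait" precedes gains.
for before, after in [(3.0, 1.0), (2.5, 0.5)]:
    ranker.record_exposure("quit_social_media", before, after)
for before, after in [(2.0, 3.5), (1.5, 3.0)]:
    ranker.record_exposure("outrage_bait", before, after)

print(ranker.rank(["quit_social_media", "outrage_bait"]))
```

With these made-up numbers, the article that pulls people away from the platform sinks to the bottom of the ranking, without anyone deciding that on purpose.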
I think that a significant part of what made the internet worse is that social networks share individual articles.
If you need to go to the specific source of articles (like visiting a specific website such as LessWrong, or buying a specific magazine), and after some time you notice that most of the content is bullshit, you stop visiting that source. And if someone recommends you something from that source, you will say no thanks.
But if the articles are shared individually, this reaction is suppressed. You may notice that it comes from a specific website, but instinctively it is “an article on the social network”. Also, now the articles you read are determined by what other people share, rather than what sources you visit.
To put it bluntly, your choice is either to visit Facebook or to avoid Facebook. Once you visit Facebook, all the subsequent choices are made by Facebook. (The same goes for other social networks.) Not absolutely; you can do some customization on Facebook, but the website keeps dragging you in a certain direction. It changes the defaults. When you browse websites individually, the default action is “read nothing”. When you browse Facebook, the default action is “read whatever Facebook gives me”; to avoid a specific source, you must take explicit action.
Thanks for the warnings in brackets!