[Question] What’s the actual evidence that AI marketing tools are changing preferences in a way that makes them easier to predict?

I’ve encountered this claim multiple times over the years (most recently on this AXRP episode), but I can’t trace its origins (it doesn’t seem to be on Wikipedia). Quoting Evan from the episode:

And so if you think about, for example, an online learning setup, maybe you’re imagining something like a recommendation system. So it’s trying to recommend you YouTube videos or something. One of the things that can happen in this sort of a setup is that, well, it can try to change the distribution to make its task easier in the future. You know, if it tries to give you videos which will change your views in a particular way such that it’s easier to satisfy your views in the future, that’s a sort of non-myopia that could be incentivized just by the fact that you’re doing this online learning over many steps.

Or another situation this can happen is: let’s say I’m just trying to train the model to satisfy humans’ preferences or whatever. It can try to modify the humans’ preferences to be easier to satisfy.

Furthermore, there’s a world of difference between deliberately optimising to modify preferences in order to make them easier to predict, vs preferences changing as a byproduct of the AI getting better at predicting them and thus converging on what to advertise. This distinction matters for predicting which strategies an AI is likely to pick out of strategy space when new options are introduced.
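To make the byproduct-vs-deliberate distinction concrete, here is a minimal toy sketch (all dynamics and parameters are illustrative assumptions of mine, not anything from the episode). A user preference drifts slightly toward whatever gets recommended. A myopic policy just recommends its current estimate of the preference, yet still moves the preference as a side effect; a deliberately manipulative policy pushes the preference toward a fixed, easy-to-predict target.

```python
import random

def simulate(policy, steps=500, alpha=0.05, seed=0):
    """Toy model: user preference p in [0, 1] drifts toward whatever
    is recommended (p += alpha * (r - p)). Returns the trajectory of p.
    All constants here are arbitrary illustrative choices."""
    rng = random.Random(seed)
    p = 0.8                 # true user preference (hidden from the policy)
    estimate = 0.5          # recommender's running estimate of p
    traj = []
    for _ in range(steps):
        r = policy(estimate)
        # noisy engagement signal lets the recommender track p
        signal = p + rng.gauss(0, 0.05)
        estimate += 0.1 * (signal - estimate)
        # byproduct effect: exposure nudges the preference itself
        p += alpha * (r - p)
        traj.append(p)
    return traj

# Myopic policy: recommend the current best estimate of the preference.
myopic = lambda est: est

# Deliberate policy: always push toward a fixed, predictable target (0.0).
deliberate = lambda est: 0.0

byproduct_traj = simulate(myopic)
deliberate_traj = simulate(deliberate)
```

In this toy setup both trajectories end up altered: the myopic policy's estimate and the preference converge toward each other (a self-fulfilling prediction), while the deliberate policy drags the preference all the way to its target. The converged *behaviour* can look similar, which is why the distinction only bites when you ask how each policy would exploit newly introduced options.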