But as they do, users will favour the products that give them the best experience
This is one point I find difficult to believe, or at least difficult to find likely. Most people, who are not unusually savvy, already give ads far more credence than they should, and underestimate the degree to which ads actually affect them. Why should that reverse as ads get even better at manipulating us? Why should I expect people to start demonstrating the level of long-term thinking, short-term impulse control, and willingness to look weird to their peers that such a shift would need? It’s not like we have a great track record of collectively managing this for other addictive but harmful stimuli, whether informational, social, or biochemical.
I’m also not sold on this specific part, and I’m really curious what evidence supports the idea. One reason I don’t think it’s good to rely on this as the default expectation, though, is that I’m skeptical of humans’ ability to even know what the “best experience” is in the first place. I wrote a short rambly post touching, in part, on my worries about online addiction: https://www.lesswrong.com/posts/rZLKcPzpJvoxxFewL/converging-toward-a-million-worlds
Basically, I buy into the idea that there are two distinct value systems in humans: a subconscious system whose learning comes mostly from evolutionary pressures, and a conscious/executive system that cares more about “higher-order values,” which I unfortunately can’t really explicate. Examples of the former: craving sweets, or addiction to online games with well-engineered artificial fulfillment. An example of the latter: wanting to work hard, even when it’s physically demanding or mentally stressful, to make some kind of positive impact on broader society.
And I think today’s ML systems are asymmetrically exploiting the subconscious value system at the expense of the conscious/executive one. Even knowing all this, I really struggle to overcome instances of akrasia: controlling my diet, not drowning myself in entertainment consumption, etc. I feel like there should be some attempt to level the playing field, so to speak, between which value system is allowed to thrive. At minimum, that means transparency and knowledge about this phenomenon for people interacting with powerful recommender (or just general) ML systems; in the optimal case, it means giving people complete agency and control over which value system they want to prioritize, and to what extent.
Possibly relevant: Siren worlds and the perils of over-optimised search