For me, a crux about the impact of AI on education broadly is how our appetite for entertainment behaves at the margins close to entertainment saturation.
Possibility 1: it will always be very tempting to direct our attention to the most entertaining alternative, even at very high levels of entertainment
Possibility 2: there is some absolute threshold of entertainment above which we become indifferent between unequally entertaining alternatives
If Possibility 1 holds, I have a hard time seeing how any kind of informational or educational content, which is constrained by having to provide information or education, will ever compete with slop, which is totally unconstrained and can purely optimize for grabbing your attention.
If Possibility 2 holds, and we get really good at making anything more entertaining (this seems like a very doable hill to climb, as it directly plays into the kinds of RL behaviors we are already economically rewarded for monitoring and encouraging), then I’d be very optimistic that a few years from now we can simply make super entertaining education or news, and lots of us might consume that if it gets us our entertainment “fill” plus life benefits to boot.
Which is it?
This kind of “we” seems to deny self-reflection and agency.
I suppose there are varying degrees of the strength of the statement.
Strong form: sufficiently compelling entertainment is irresistible for almost anyone (and of course it may disguise itself as different things to seduce different people, etc.)
Medium form: it’s not theoretically irresistible, and if you’re really willful about it you can resist it, but people by and large will (perhaps by choice, ultimately) not resist it, much as they (we?) have not resisted dedicating an increasing fraction of their time to digital entertainment so far.
Weak form: it’ll be totally easy to resist, and a significant fraction of people will.
I guess I implicitly subscribe to the medium form.
If the majority has some pattern of behavior, that isn’t necessarily even a risk factor for a given person getting sucked into that pattern of behavior. So I’m objecting to the framing (conveyed by the use of the word “we”) suggesting that a behavioral property of some group has significant ability to affect individuals who are aware of that property, bypassing their own judgement about whether to endorse it.
Indeed! I meant “we” as a reference to the collective group of which we are all members, without requiring that every individual in the group (i.e. you or I) share in every aspect of the general behavior of the group.
To be sure, I would characterize this as a risk factor even if you (or I) do not personally fall prey to it, in the same way that it’s a risk factor if the IQ of the median human drops by 10 points, which this might effectively be equivalent to (net-of-distractions).