My wild guess is that there is a significant local persuasion overhang, in the sense that if someone set up an RL context (i.e., any scaled-up persuasive context with clear feedback, e.g. bots on various social platforms), there would be fast gains, maybe to the point of being really disruptive in certain classes of contexts. (There is another theory which states that this has already happened.) But I think you'd then hit an asymptote below the level of being relevant to most important contexts.
(The reason for the asymptote: today's systems wouldn't be able to keep up with how humans change their stances in response to this kind of thing. For example, image generation can already fool people easily, but there would only be a brief window during which most people would send money mainly on the strength of a realistic image that, if real, would justify sending it. They'd just learn not to do that.)
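To make the "clear feedback" part concrete, here's a toy sketch of the kind of loop I mean (entirely hypothetical; the message variants and the simulated audience are made up for illustration): the bot reinforces whichever message variant gets compliance, gains come fast while a trick works, and then flatten once people adapt.

```python
# Hypothetical toy bandit, not a real system: reward is a binary
# "did the target comply" signal, and the policy shifts toward
# whatever persuades. The audience model bakes in the adaptation
# point above: the image trick works early, then people learn.
import random

variants = ["plain ask", "urgent ask", "realistic image attached"]
weights = [1.0, 1.0, 1.0]   # unnormalized preference per variant
lr = 0.1

def audience_response(variant, t):
    # Assumed compliance rates; the image variant collapses once
    # the audience catches on (after step 200 here).
    base = {"plain ask": 0.05, "urgent ask": 0.10,
            "realistic image attached": 0.60 if t < 200 else 0.05}
    return random.random() < base[variant]

for t in range(1000):
    total = sum(weights)
    probs = [w / total for w in weights]
    i = random.choices(range(len(variants)), probs)[0]
    reward = 1.0 if audience_response(variants[i], t) else 0.0
    # Multiplicative update: reinforce variants that got compliance.
    weights[i] *= 1.0 + lr * (reward - probs[i])

print({v: round(w / sum(weights), 2) for v, w in zip(variants, weights)})
```

Running this shows the shape I'm gesturing at: the policy piles onto the image variant quickly, then drifts away from it once the simulated audience stops falling for it.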
I guess a different framing, consistent with the claims in your first paragraph, is that the current overhang just isn't very high.
Yeah that’s fair.