agreed. right now some humans believe they can ride the “manipulate-others” beast without getting destroyed by manipulation themselves; as ai gets stronger, there’s significant reason to believe that the frontier of unfriendliness will come from advertising companies.
currently the youtube recommender is quite weak. it's some sort of reinforcement learning system that does not plan far ahead; I think it may be a transformer, so it has a lot of representational capacity, but as we've seen repeatedly, much of the crazy strength of deepmind's strongest agents comes from combining planning with a strong model that learns to guide the planning as the RL proceeds.
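to make the "planning + learned model" point concrete, here's a minimal toy sketch of the pattern (roughly the alphazero/muzero recipe: a learned network supplies action priors and value estimates, and a small tree search amplifies it into a sharper policy). everything here is hypothetical illustration code — `toy_model`, the two-action space, and the one-step-deep search are all made up for the sketch, not anything deepmind actually runs.

```python
import math
import random

def toy_model(state):
    """stand-in for a learned network: returns (prior over actions, value).
    in a real system this is the part that RL keeps improving."""
    priors = {a: 0.5 for a in (0, 1)}  # uniform prior over two toy actions
    value = random.uniform(-1, 1)      # fake value estimate
    return priors, value

class Node:
    def __init__(self, prior):
        self.prior = prior
        self.visits = 0
        self.value_sum = 0.0

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def ucb(parent, child, c=1.25):
    # PUCT-style score: exploit mean value, explore where the model's prior is high
    return child.value() + c * child.prior * math.sqrt(parent.visits) / (1 + child.visits)

def plan(state, num_simulations=50):
    """tiny one-step-deep search: the learned model decides which actions get
    explored, and the search returns a stronger policy than the raw model."""
    root = Node(prior=1.0)
    priors, _ = toy_model(state)
    children = {a: Node(p) for a, p in priors.items()}
    for _ in range(num_simulations):
        root.visits += 1
        action, child = max(children.items(), key=lambda kv: ucb(root, kv[1]))
        _, value = toy_model((state, action))  # model evaluates the imagined step
        child.visits += 1
        child.value_sum += value
    # act where the search spent its visits; in the real recipe these visit
    # counts become the training target that pulls the model's prior forward
    return max(children, key=lambda a: children[a].visits)

print(plan(state=0))
```

the point of the sketch is the feedback loop: search makes the model's outputs stronger, and training on the search results makes the next search stronger still — which is why bolting planning onto an already-capable model is the step that changes the capability regime.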
adding planning to a sufficiently general system can make it catastrophically strong without it being obvious that this has happened, at which point it can plan right through the agents who built the planner; though for a weak advertising planner, that would probably take a few weeks. and an ai already in use by a group that wants to manipulate people offers the illusion of an incentive to add planning, because it seems like planning ahead would let the system schedule ads to steer users into very specific emotional states. even if much of upper management is initially spared from the impact, it wouldn't take long for the added chaos in the global system to do severe damage to the company's viability, and plausibly to ruin lives fast.
I hope deepmind has stern words with anyone on ad teams who tries that shit. and in the meantime, we need better tools for countering attempted manipulation. what objective helps users come to understand a system, rather than be manipulated by it? maybe MIMI + ai-aided education stuff?