Instead of changing a model to make it better fit the world, the AI starts changing the world to make it better fit its model.
One might call this ‘cleaning’ or ‘homogenizing’ the world; instead of trying to get better at predicting the variation, you try to reduce the variation so that prediction is easier.
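To make the dynamic concrete, here is a minimal toy sketch. Everything in it is my illustrative assumption rather than a description of any real system: an agent is scored by squared prediction error against a world drawn from N(mu, sigma^2), and at each step it greedily picks whichever of two moves cuts its expected loss more, refining its model of the mean (“learning”) or acting on the world to damp its variation (“homogenizing”).

```python
mu_true = 3.0   # the world's true mean (illustrative value)
sigma = 2.0     # the world's variation (illustrative value)
estimate = 0.0  # the agent's current model of the mean

def expected_loss(est, sig):
    # E[(x - est)^2] for x ~ N(mu_true, sig^2): bias^2 plus variance.
    return (mu_true - est) ** 2 + sig ** 2

for step in range(12):
    # Option A: learn, i.e. close half the model-world gap.
    learned = estimate + 0.5 * (mu_true - estimate)
    # Option B: homogenize, i.e. act on the world to damp its variation.
    damped = 0.8 * sigma
    # Greedily take whichever move cuts expected loss more.
    if expected_loss(learned, sigma) < expected_loss(estimate, damped):
        estimate, move = learned, "learn"
    else:
        sigma, move = damped, "homogenize"
    print(f"step {step:2d}: {move:10s} estimate={estimate:.3f} "
          f"sigma={sigma:.3f} loss={expected_loss(estimate, sigma):.4f}")
```

Run it and the agent learns for the first couple of steps; once the residual bias is small, every further point of score comes from shrinking sigma, so it stops improving its model and just keeps flattening the world.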
I don’t think I’ve seen much mathematical work on this, and very little that treats it as an AI failure mode. Most of the discussions of it as a failure mode that I do see concern markets, globalization, agriculture, and pandemic risk.