Personally, a downside of the post is that “Predict-O-Matic problems” isn’t that great a category; I prefer the neater “inner and outer alignment problems for predictive systems.” On the other hand, if I mention the Parable of the Predict-O-Matic, people can quickly understand what I’m talking about.
But the post provides a useful starting point. In particular, to me it suggests looking to prediction systems as a toy model for the alignment problem, which is something I’ve personally had fun looking into, and which strikes me as promising.
The short story conveys some intuitions that would be harder to get from a more theoretical treatment. These intuitions then catalyzed further discussion, like The Dualist Predict-O-Matic ($100 prize), or my own Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems).
Lastly, I feel that the title is missing a “the.”