Hmm. So on one hand, I think it’s reasonable to argue that all the Simulacra stuff hasn’t made much legible case for it actually being a model with explanatory power.
But, to say “there’s nothing to explain” or “it’s not worth trying” seems pretty wrong. If we’re reliably running into particular kinds of nonsense (and we seem to be), knowing what’s generating the nonsense seems important both for predicting/navigating the world, and for helping us not fall prey to it. (Maybe your point there is that “steering towards goodness” is better than “steering away from badness”, which seems plausibly true, but a) I think we need at least some model of badness, b) there are places where, say, Simulacrum Level 3 might actually be an important coordination strategy)
I haven’t seen these analyses of definitions and causes done with rigor. It also seems very hard to achieve rigor here, given that the information we’d need — about individual psychology and the sociology of specific institutions — is hard to come by.
As such, the tack these authors often take is not to attempt such a rigorous analysis, but instead to go straight from their current model, composed of guesswork, to activist claims about how to improve the world and about the scale of destruction implied by that guesswork-based model.
The analysis, then, seems to rest on a guesswork-based, ill-defined model with limited predictive power or falsifiability. Applying it involves a lot of arguing with organizations and people you perceive as propagandists for empirically and morally wrong views. It also seems to involve a tendency to discourage exactly the behaviors that could disconfirm its assumptions.
But I don’t want to tear into it too deeply. I recognize that simulacra levels point at something real, and I also think that tearing into it too much would itself be hypocritical.
If I saw more attempts to falsify the model or use it to make predictions, I’d be happier with it.