While I agree with you that the approach lsusr is taking isn’t great, and I disagree with the focus on prediction right now, I do sympathize with lsusr’s approach. I think it points to a problem LW has with AI safety epistemics, which in turn reflects LW’s more general epistemic problems.
1a3orn nailed it perfectly: LW has theories that are not predictive enough, in that you could justify too many different AI safety outcomes with the theories we have. From my meta/outside viewpoint, a lot of LW is neither using empirical evidence the way science does nor trying to formalize things the way mathematics has done, but rather philosophizing, which is often terrible at getting anywhere close to the truth of the matter. The usual response is some variation of “LW is special at something,” but why is that assumed rather than deferring to the outside view?
Don’t get me wrong, I think LW is right that the science establishment we have is deeply inadequate and stuck in an inadequate equilibrium. But one thing we have learned at great cost is that we need to be empirical and touch reality, rather than appeal to cultural traditions or out-of-touch grand theories.
What lsusr is trying to do is get people to make falsifiable, testable predictions about AI safety, to prevent the problem of being out of touch with reality.
A good model here is Ajeya Cotra and the technical interpretability community. While Stephen Casper has criticized them, they’re probably the only group that is actually trying to touch reality. The fact that most LWers are hostile to empirical evidence is probably a product of assumed specialness, without any reason for that specialness.
I am very much on board with encouraging more specific predictions and more contact with reality.