My point was that you mentioned removing a source of uncertainty from the model as though that were by default a good thing. The exclamation point on your mention of the wide uncertainty seems to imply that attitude pretty strongly. I still don’t know whether you endorse it in general. I was not arguing that you shouldn’t have removed that factor; I was arguing that you implied removing it was good without giving an argument for why.
Perhaps this is a nitpick. But in my mind this point is about taking into account how your models are used and interpreted. It seems to me that precise models tend to cause overconfidence, so it would be wiser to err toward including uncertainty in the models themselves, rather than letting that uncertainty sit in complex grounding assumptions.
Naturally every model will have some of both.