In A Sketch of Good Communication—or really, in the Share Models, Not Beliefs sequence, which A Sketch of Good Communication is part of—the author proposes that, hm, I’m not sure exactly how to phrase it.
I think the author (Ben Pace) is proposing that in some contexts, it's good to spend a lot of effort building up and improving your models of things. And that in those contexts, if you just adopt the beliefs of others without improving your own model, well, that won't be good.
I think the big thing here is research. In the context of research, Ben proposes that it's important to build up and improve your model, and to share with the community the beliefs your model outputs.
This seems correct to me for research. But I'm pretty sure it isn't true in other contexts.
For example, I wanted to buy a new thermometer recently. Infrared ones are convenient, so I wanted to know whether they're comparably accurate to oral ones. I googled it, and the Cleveland Clinic says they are. Boom. Good enough for me. In that context, I don't think it was worth spending the effort to update my model of thermometer accuracy; I just needed the output.
I think it'd be interesting to hear people's thoughts on when, and in what contexts, it is and isn't important to improve your models.
I think it'd also be interesting to hear more about why exactly it is harmful, in the context of intellectual progress, to shy away from building and improving your models. There's probably a lot to say. I think I remember the book Superforecasting discussing this, but I forget the details.