Models vs beliefs

I think that there is an important difference between sharing your beliefs and sharing what your model predicts. Let me explain.

I’m a basketball fan. There’s this guy named Ben Taylor who has a podcast called Thinking Basketball. He’s currently doing a series of episodes on the top 25 peaks of the century. And he ranked a guy named Draymond Green as having the 22nd best peak.

I don’t agree with this. I would probably have Draymond as, I don’t know, maybe I’d have him somewhere in the 40s? 50s? Maybe even the 60s?

And yet, if you put a gun to my head, I’d probably just adopt Taylor’s belief and rank Draymond 22nd.

Suppose the all-knowing god of the courts, Omega, knows exactly where Draymond’s peak ranks. And suppose that Omega allows me one guess and will shoot me if I’m wrong. Or, if you want to be less grim, will give me $1,000,000 if I’m right. Either way, my guess would be 22.

But despite that being my guess, I still wouldn’t say that I agree with Taylor. There’s this voice inside me that wants to utter “I think you’re wrong, Ben”.

What’s going on here?

I think what’s going on here is that my belief differs from what my model predicts. Let me explain. Dammit, I said that already. But still, let me explain.

My model of how good a basketball player is depends on various things. Shot creation, spacing, finishing, perimeter defense, rim protection, etc., etc. It also incorporates numbers and statistics. Box score stats like points per game. On-off stats like how the team’s defensive efficiency changes with you on the court vs. off the court. It even incorporates things like award voting and general reputation.

Anyway, when I do my best to model Draymond and determine where his peak ranks amongst other players this century, this model has him at around 45.
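To make this concrete, here’s a minimal sketch of what a model like this might look like in code. Every factor name, weight, and score in it is something I made up for illustration; it’s a toy, not an actual ranking methodology.

```python
# A toy player-evaluation model: a weighted sum of factor scores.
# Every factor name, weight, and score below is made up for
# illustration; this is not a real ranking methodology.

FACTOR_WEIGHTS = {
    "shot_creation": 0.20,
    "spacing": 0.15,
    "finishing": 0.10,
    "perimeter_defense": 0.15,
    "rim_protection": 0.15,
    "box_score_stats": 0.10,   # e.g. points per game
    "on_off_impact": 0.10,     # team efficiency with player on vs. off court
    "reputation": 0.05,        # award voting, general standing
}

def player_score(factor_scores: dict[str, float]) -> float:
    """Combine 0-10 factor scores into a single rating."""
    return sum(weight * factor_scores.get(factor, 0.0)
               for factor, weight in FACTOR_WEIGHTS.items())

# Hypothetical scores for a Draymond-like player: elite defense and
# on-off impact, weaker scoring. Ratings like this would then be
# compared across players to produce a peak ranking.
draymond_like = {
    "shot_creation": 3.0,
    "spacing": 2.0,
    "finishing": 5.0,
    "perimeter_defense": 9.0,
    "rim_protection": 8.0,
    "box_score_stats": 4.0,
    "on_off_impact": 9.0,
    "reputation": 7.0,
}

print(player_score(draymond_like))  # ~5.6 on this toy 0-10 scale
```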

But I don’t trust my model. Well, that’s not true. I have some trust in my model. It’s just that, with a gun to my head, I’d have more trust in Taylor’s model than I would in my own.

Is there anything contradictory about this? Not at all! Or at least not from what I can tell. There are just two separate things at play here.
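To spell out how the two things come apart, here’s a toy sketch. The trust numbers are invented for illustration; the point is only that a forced single guess tracks whichever model you trust most, while “what my model predicts” stays put.

```python
# Two separate quantities: what each model predicts, and how much
# trust I place in each model. All numbers are made up.

predictions = {
    "my_model": 45,       # where my model puts Draymond's peak
    "taylors_model": 22,  # where Taylor's model puts it
}

trust = {
    "my_model": 0.35,       # some trust in my own model...
    "taylors_model": 0.65,  # ...but more in Taylor's track record
}

# With one guess and everything on the line, pick the prediction from
# the most-trusted source: that maximizes the chance of being right.
most_trusted = max(trust, key=trust.get)
gun_to_head_guess = predictions[most_trusted]

print(gun_to_head_guess)        # 22: my all-things-considered guess
print(predictions["my_model"])  # 45: still what *my* model predicts
```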

I feel like I often see people conflate these two separate things. Say someone has a hot take about effective altruism that differs from the mainstream. I find myself wanting to ask them whether that hot take is their genuine gun-to-your-head belief or whether it is just what their best attempt at a model predicts.

I don’t want to downplay the importance of forming models nor of discussing the predictions of your models. Imagine if, in discussing the top 25 peaks of the century, the only conversation was “Ben Taylor says X. I trust Taylor and adopt his belief.” That doesn’t sound like a recipe for intellectual progress. Similarly, if everyone simply deferred to Toby Ord on questions of effective giving—or to Eliezer on AI timelines, Zvi on Covid numbers, NNGroup on hamburger menus, whatever—I don’t think that would be very productive either.

But we’re in a “two things could be true” situation here. It is almost certainly true that sharing your models and their predictions is good for intellectual progress. It is also true that the experiences you actually anticipate are not necessarily the same experiences your model predicts. With a gun to your head, you very well might ditch your model and adopt the beliefs of others.