Models vs beliefs
I think that there is an important difference between sharing your beliefs and sharing what your model predicts. Let me explain.
I’m a basketball fan. There’s this guy named Ben Taylor who has a podcast called Thinking Basketball. He’s currently doing a series of episodes on the top 25 peaks of the century. And he ranked a guy named Draymond Green as having the 22nd best peak.
I don’t agree with this. I would probably have Draymond as, I don’t know, maybe I’d have him somewhere in the 40s? 50s? Maybe even the 60s?
Well, but if you had a gun to my head I’d probably just adopt Taylor’s belief and rank Draymond 22nd.
Suppose the all-knowing god of the courts, Omega, has the answer to where Draymond’s peak is. And suppose that Omega allows me one guess and will shoot me if I’m wrong. Or, if you want to be less grim, gives me $1,000,000 if I’m right. Either way, my guess would be 22.
But despite that being my guess, I still wouldn’t say that I agree with Taylor. There’s this voice inside me that wants to utter “I think you’re wrong, Ben”.
What’s going on here?
I think what’s going on here is that my belief differs from what my model predicts. Let me explain. Dammit, I said that already. But still, let me explain.
My model of how good a basketball player is depends on various things. Shot creation, spacing, finishing, perimeter defense, rim protection, etc etc. It also incorporates numbers and statistics. Box score stats like points per game. On-off stats like what the team's defensive efficiency is with you on the court vs off the court. It even incorporates things like award voting and general reputation.
Anyway, when I do my best to model Draymond and determine where his peak ranks amongst other players this century, this model has him at around 45.
But I don’t trust my model. Well, that’s not true. I have some trust in my model. It’s just that, with a gun to my head, I’d have more trust in Taylor’s model than I would in my own.
Is there anything contradictory about this? Not at all! Or at least not from what I can tell. There are just two separate things at play here.
I feel like I often see people conflate these two separate things. Like if someone has a certain hot take about effective altruism that differs from the mainstream. I find myself wanting to ask them whether this hot take is their genuine gun-to-your-head belief or whether it is just what their best attempt at a model would predict.
I don’t want to downplay the importance of forming models nor of discussing the predictions of your models. Imagine if, in discussing the top 25 peaks of the century, the only conversation was “Ben Taylor says X. I trust Taylor and adopt his belief.” That doesn’t sound like a recipe for intellectual progress. Similarly, if everyone simply deferred to Toby Ord on questions of effective giving—or to Eliezer on AI timelines, Zvi on Covid numbers, NNGroup on hamburger menus, whatever—I don’t think that would be very productive either.
But we’re in a “two things could be true” situation here. It is almost certainly true that sharing your models and their predictions is good for intellectual progress. It is also true that the experiences you actually anticipate are not necessarily the same experiences that your model anticipates. That with a gun to your head, you very well might ditch your model and adopt the beliefs of others.
The use of models/theories is in their legibility; you don't necessarily want to obey your models even when forming beliefs on your own. Developing and applying models is good exercise, and there is nothing wrong with working on multiple mutually contradictory models.
Framings take this further, towards an even more partial grasp on reality, and can occasionally insist on patently wrong claims for situations that are not central to how they view the world. Where models help with local validity and communication, framings help with prioritization of concepts/concerns, including prioritization of development of appropriate kinds of models.
Neither should replace the potentially illegible judgement that isn't necessarily possible to articulate or motivate well. Replacing it seems to be an important failure mode that leads either to rigid refusal to work with (and get better at) the situations that are noncentral for your favored theories, or to deference to such theories even where they have no business having a clue. If any situation is free to spin up new framings and models around itself, even when they are much worse than and contradictory to the nearby models and framings that don't quite fit, then there is potential to efficiently get better at understanding new things, without getting overly anchored to ways of thinking that are much more familiar or better understood.
Is the idea that
1. your “belief” you’re describing is a somehow unupdated ranked calibration with E[Draymond rank|Ben Taylor’s opinion] = E[Draymond rank] = ~50, whereas your model (which you consider separate from a true belief) would predict that the random variable “Draymond rank|Ben Taylor’s opinion” has mode 22, which is clearly different from your prior of ~50,
2. your alief is that Draymond’s rank is ~50 while your System 2 level belief is that Draymond’s rank is ~22,
or some combination of the two?
I’m not sure what “unupdated ranked calibration” or “E[...]” mean, so I’m having trouble understanding the first list item.
For the second list item, I wouldn’t say either of those things are true. Let me try to clarify.
Suppose that I have a really simple model of what makes a good basketball player with two parameters: shooting ability and passing ability. Maybe I’d rank Draymond at a 15/100 at shooting and a 90/100 at passing. And maybe I weigh shooting as twice as important as passing. This yields a value of 40/100 for Draymond. And maybe if I ran all of the other NBA players through this model, it’d lead to Draymond having the 500th best peak of the century.
So yeah, we can say that this simple model assigns him a value of 40⁄100 and predicts that his peak is the 500th best peak of the century.
Maybe I then listen to a podcast episode where Ben Taylor talks about the importance of defense. So I add defense in as a third parameter, updating my model. Taylor also talks about how passing is underrated. This causes me to further refine my model, saying now that shooting is only 1.5x as important as passing instead of 2x. Maybe this updated model gives Draymond a value of 75 and ranks his peak at 325.
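To make this concrete, here's a minimal sketch of the toy model in Python. The shooting and passing scores and their weights are the ones from the example above; the defense score and defense weight in the second version are arbitrary placeholders I haven't specified, so that version won't land exactly on 75. The point is the mechanism, not the particular numbers.

```python
# Toy "how good is this player?" model: a weighted average of skill scores.
# Scores are on a 0-100 scale; weights express relative importance.

def player_value(scores, weights):
    """Weighted average of the given skill scores (0-100 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[skill] * weights[skill] for skill in weights) / total_weight

# Version 1: shooting weighted 2x passing.
# (2 * 15 + 1 * 90) / 3 = 40, matching the 40/100 above.
draymond_v1 = player_value(
    {"shooting": 15, "passing": 90},
    {"shooting": 2.0, "passing": 1.0},
)

# Version 2: defense added as a third parameter, shooting dropped to 1.5x passing.
# The defense score (95) and defense weight (1.5) are made-up placeholders,
# so the output won't match the 75 above exactly.
draymond_v2 = player_value(
    {"shooting": 15, "passing": 90, "defense": 95},
    {"shooting": 1.5, "passing": 1.0, "defense": 1.5},
)

print(draymond_v1)  # 40.0
print(draymond_v2)  # 63.75
```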
Taking this further, I keep updating and refining my model, but the end result is that my model still yields a different prediction from Taylor’s. It’s not totally clear to me why this is the case.
But regardless of what this model of mine says, in attempting to predict what Omega’s ranking is, I would throw my model away and just use Taylor’s belief. And I wouldn’t classify this decision as solely a System 2 level decision. In making this decision, I’m utilizing System 1 and System 2.
I’m not too familiar with the concept of aliefs, but that doesn’t seem to me to be the right concept to describe the output of my model.
You confused the numbers 22 and 45. But the idea is mostly correct: if the author’s model and parameter values were true, it would place Draymond in 45th place. On the other hand, Taylor’s opinion places Draymond in 22nd place, and the author believes that Taylor knows the subject much better.
This implies that the author’s model either got some facts wrong or doesn’t take into account something unknown to the author, but known to Taylor. If the author described the model, then others would be able to point out potential mistakes. But you cannot tell what exactly you don’t know, only potential areas of search.[1]
EDIT: As an example, one could study Ajeya Cotra’s model, which is wrong on so many levels that I find it hard to believe that the model even appeared.
However, given access to a ground truth like the location of Uranus, one can understand what factor affects the model and where one could find the factor’s potential source.
This seems more like an underspecified question than a prediction difference. You and Ben (and Omega) have different criteria for your rankings. Or, I guess different factual data about what happened—maybe you misread a stat or something.
The reason you feel a dissonance is that you’re not noticing the difference between “rank of peak using my subjective and unspecified weighting”, which is not objectively testable against any future experience, vs “my prediction of what someone else would say to a different question using the same words”, which is resolvable.
I hear ya, but no, I don’t think it’s a criteria difference. Ben and I both are evaluating players based on the criteria of, roughly, how much that player helps you win a championship, or Championship Odds over Replacement Player (CORP). It’s a “real” disagreement.
Oftentimes that isn’t the case, though, with these sorts of top 25 lists. For example, some people incorporate “floor raising”—making a bad team average—and not just “ceiling raising”.
Hmm. I’m not sure how to resolve our disagreement on this. When you say “roughly”, you’re acknowledging the lack of precision in your criteria, which is exactly the place I think your and Ben’s criteria differ.
Does it feel like if you built the calculator / trained the ranking model such that all the weights were visible, and all the inputs about Draymond Green’s (and all other players’) performances were agreed, and if your counterparts did the same, you’d be able to actually WANT to change your mind to be more correct, or at least identify the places where you disagree on definition/methodology?
Hm, yeah. It does seem a little tough to resolve.
My position is that Ben and I are using very similar criteria and when my model outputs a different ranking of Draymond than Ben’s ranking of 22, very little if any of that is because Ben and I are using different criteria.
It sounds like your position is that you worry that the criteria Ben and I are using differ in a pretty meaningful way, and that a big reason why we are ranking Draymond differently is because we are using different criteria. Does that seem correct?
If so, I suppose the way to resolve this would be for me to speak more about the criteria I am using and, since Ben isn’t here, for me to speak more about what I think the criteria are that Ben is using. Then try to diff them. I think that’d mean diving relatively deeply into the domain of basketball, which I find fun to discuss, but I’m not sure how interested you would be in that. What do you think?
If I’m understanding this correctly, yeah, I would want to change my mind. I think two people with the same inputs and weights would only disagree on things like criteria and definitions, not on anticipated experiences.
I don’t care enough about basketball to follow that object-level analysis. I do appreciate that the thought experiment of doing so seems to indicate that you believe there IS some objective thing (the inputs or the weighting) that you or they are incorrect about.
I think that is what I was pointing at, and in my mind dissolves our disagreement. I was probably over-weighting your line
I took this to mean that you cognitively preferred your model, even though it differed from theirs. After our discussion, it sounds like you may have only meant that your instincts are probably wrong, and that Taylor's judgement is likely more correct than your feelings.
edit: which I guess means I should ask more directly—if you believe there is a more correct answer than yours, why don’t you want to hone your instincts and change your feelings?
Gotcha. Yeah with that line I indeed meant that I have more trust in Taylor’s judgement than my own instincts.
Related: Philosopher Peter van Inwagen on the way that one philosopher can disagree with another while accepting that the other guy is smarter and better informed. (Which he points out both because he finds it interesting in itself, and to argue that there’s something inconsistent in complaining about religious believers—of whom he is one—believing things when the balance of the evidence is against them.)
When things are complicated, we probably develop a partial model first (taking only some things into account) and maybe a full model later. The partial model may give worse results, compared to trusting the experts blindly. And yet, making your own model brings you closer to understanding how things work.
Like, maybe you are trying to estimate some X = A + B + C + D + E, but so far your model is only X = A + B + C.
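As a toy illustration (the letters and values here are arbitrary placeholders), the partial model is off by exactly the terms it leaves out:

```python
# Toy illustration: the "true" quantity X = A + B + C + D + E,
# versus a partial model that only knows about A, B, and C.
# All values are arbitrary placeholders.
A, B, C, D, E = 10, 7, 3, 5, 2

x_true = A + B + C + D + E   # 27
x_partial = A + B + C        # 20 -- off by exactly the omitted terms D + E
print(x_true, x_partial)
```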
I don’t buy this usage; my brain is nothing but models. But I could buy a related concept this post made me think of: my brain, which is a big honkin’ messy model of my behavior, includes as part of it a model of the environment as it is now. And separately (well, sort of. Not really. Or at least not exactly) it has temporal models of “given that this is how things are right now, here’s how they might change”.

And actually, sometimes I have blurry understandings, where the thing I’m thinking about is abstracted over a chunk of spacetime. Maybe I don’t know exactly when you’ll eat, so I don’t know if you’re eating right now, but if “now” is as long as a day, I know that you eat now. If “now” is only as long as a speed-of-light round trip between us, then I don’t know where you are exactly, but I still know that you are, that you are a body, that you’re between 95°F and 110°F (probably right around 98.5°F!), some other things like that.

But in order to talk about what happens in the future, I need to think about way more than just you. I have to think about the state right now of, well, most humans on earth, the weather, various machines, and so on, and consider how they might interact. I can be much more sure of facts that are close to me in spacetime than facts that are far, which I need to think about to figure out. So in that sense I would say I have a model of the world, but I only have beliefs about the future.
… but that’s the opposite of how you used the words. I find your usage confusing.