I’m not sure what “unupdated ranked calibration” or “E[...]” mean, so I’m having trouble understanding the first list item.
For the second list item, I wouldn’t say either of those things is true. Let me try to clarify.
Suppose I have a really simple model of what makes a good basketball player, with two parameters: shooting ability and passing ability. Maybe I’d rate Draymond a 15⁄100 at shooting and a 90⁄100 at passing, and maybe I weigh shooting as twice as important as passing. This yields a value of 40⁄100 for Draymond. And maybe if I ran all of the other NBA players through this model, it’d lead to Draymond having the 500th best peak of the century.
So yeah, we can say that this simple model assigns him a value of 40⁄100 and predicts that his peak is the 500th best peak of the century.
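To make the arithmetic explicit, here’s a minimal sketch of that two-parameter model; the function name and structure are my own illustration, not anything from the discussion above:

```python
# Toy model: a weighted average where shooting counts twice as much as passing.
def simple_value(shooting, passing, shooting_weight=2.0, passing_weight=1.0):
    total_weight = shooting_weight + passing_weight
    return (shooting_weight * shooting + passing_weight * passing) / total_weight

# Draymond: 15/100 shooting, 90/100 passing -> (2*15 + 1*90) / 3 = 40
print(simple_value(15, 90))  # 40.0
```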
Maybe I then listen to a podcast episode where Ben Taylor talks about the importance of defense, so I add defense as a third parameter, updating my model. Taylor also talks about how passing is underrated, which causes me to further refine my model, saying that shooting is now only 1.5x as important as passing instead of 2x. Maybe this updated model gives Draymond a value of 75 and ranks his peak at 325.
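Continuing the sketch, the updated model just adds a defense term and softens the shooting weight. Draymond’s defense rating and the weight on defense are placeholders I made up to show the shape of the calculation, so the number below won’t literally come out to 75:

```python
# Updated toy model: defense added as a third parameter,
# with shooting now weighted only 1.5x as heavily as passing.
def updated_value(shooting, passing, defense,
                  shooting_weight=1.5, passing_weight=1.0, defense_weight=1.0):
    total_weight = shooting_weight + passing_weight + defense_weight
    return (shooting_weight * shooting
            + passing_weight * passing
            + defense_weight * defense) / total_weight

# Hypothetical 95/100 defense rating for Draymond (not from the comment above).
print(updated_value(15, 90, 95))  # (1.5*15 + 90 + 95) / 3.5 ≈ 59.3
```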
Taking this further, I keep updating and refining my model, but the end result is that my model still yields a different prediction from Taylor’s. It’s not totally clear to me why this is the case.
But regardless of what this model of mine says, in attempting to predict what Omega’s ranking is, I would throw my model away and just use Taylor’s belief. And I wouldn’t classify that as solely a System 2-level decision; in making it, I’m using both System 1 and System 2.
I’m not too familiar with the concept of aliefs, but that doesn’t seem to me to be the right concept to describe the output of my model.