Is the idea that
1. your “belief” you’re describing is a somehow unupdated ranked calibration with E[Draymond rank|Ben Taylor’s opinion] = E[Draymond rank] = ~50, whereas your model (which you consider separate from a true belief) would predict that the random variable “Draymond rank|Ben Taylor’s opinion” has mode 22, which is clearly different from your prior of ~50
2. your alief is that Draymond’s rank is ~50 while your System 2 level belief is that Draymond’s rank is ~22
or some combination of the two?
I’m not sure what “unupdated ranked calibration” or “E[...]” mean, so I’m having trouble understanding the first list item.
For the second list item, I wouldn’t say either of those things is true. Let me try to clarify.
Suppose that I have a really simple model of what makes a good basketball player, with two parameters: shooting ability and passing ability. Maybe I’d rate Draymond at a 15⁄100 at shooting and a 90⁄100 at passing. And maybe I weigh shooting as twice as important as passing. This yields a value of 40⁄100 for Draymond. And maybe if I ran all of the other NBA players through this model, it’d result in Draymond having the 500th best peak of the century.
So yeah, we can say that this simple model assigns him a value of 40⁄100 and predicts that his peak is the 500th best peak of the century.
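To make the arithmetic concrete, here’s a minimal sketch of that toy two-parameter model in Python, using only the hypothetical ratings and weights from above:

```python
# Toy two-parameter model from the example above; all ratings and weights are hypothetical.
def simple_value(shooting, passing, shooting_weight=2.0, passing_weight=1.0):
    """Weighted average of shooting and passing ratings, each out of 100."""
    weighted_sum = shooting_weight * shooting + passing_weight * passing
    return weighted_sum / (shooting_weight + passing_weight)

print(simple_value(shooting=15, passing=90))  # (2*15 + 1*90) / 3 = 40.0
```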
Maybe I then listen to a podcast episode where Ben Taylor talks about the importance of defense. So I add defense as a third parameter, updating my model. Taylor also talks about how passing is underrated. This causes me to further refine my model, now saying that shooting is only 1.5x as important as passing instead of 2x. Maybe this updated model gives Draymond a value of 75 and ranks his peak at 325.
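Continuing the sketch, the refined model might look something like this. The defense rating of 100 and its weight of 3.0 are my own assumptions, chosen only so the output lands on the 75 mentioned above:

```python
# Refined three-parameter model; the defense rating and weight are assumptions
# picked so the result matches the hypothetical value of 75 mentioned above.
def refined_value(shooting, passing, defense,
                  shooting_weight=1.5, passing_weight=1.0, defense_weight=3.0):
    """Weighted average of shooting, passing, and defense ratings, each out of 100."""
    weighted_sum = (shooting_weight * shooting
                    + passing_weight * passing
                    + defense_weight * defense)
    return weighted_sum / (shooting_weight + passing_weight + defense_weight)

print(refined_value(shooting=15, passing=90, defense=100))  # 412.5 / 5.5 = 75.0
```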
Taking this further, I keep updating and refining my model, but the end result is that my model still yields a different prediction from Taylor’s. It’s not totally clear to me why this is the case.
But regardless of what this model of mine says, in attempting to predict what Omega’s ranking is, I would throw my model away and just use Taylor’s belief. And I wouldn’t classify this decision as solely a System 2 level decision: in making it, I’m utilizing both System 1 and System 2.
I’m not too familiar with the concept of aliefs, but that doesn’t seem to me to be the right concept to describe the output of my model.
You confused the numbers 22 and 45, but the idea is mostly correct: if the author’s model and parameter values were true, it would place Draymond in 45th place. Taylor’s opinion, on the other hand, places Draymond in 22nd place, and the author believes that Taylor knows better.
This implies that the author’s model either got some facts wrong or doesn’t take into account something that is unknown to the author but known to Taylor. If the author described the model, then others would be able to point out potential mistakes. But you cannot tell exactly what it is you don’t know, only the potential areas to search.[1]
EDIT: As an example, one could study Ajeya Cotra’s model, which is wrong on so many levels that I find it hard to believe the model was even put forward.
However, given access to a ground truth like the location of Uranus, one can work out which factor affects the model and where that factor’s potential source could be found.