Let me say a little more about the “is this Knightian uncertainty” question.
Here are some statements about Knightian uncertainty from the Wikipedia page:
In economics, Knightian uncertainty is a lack of any quantifiable knowledge about some possible occurrence, as opposed to the presence of quantifiable risk (e.g., that in statistical noise or a parameter’s confidence interval). The concept acknowledges some fundamental degree of ignorance, a limit to knowledge, and an essential unpredictability of future events...
However, the concept is largely informal and there is no single best formal system of probability and belief to represent Knightian uncertainty...
Taleb asserts that Knightian risk does not exist in the real world, and instead finds gradations of computable risk.
Qualitatively, we can say that there is no widely accepted formal definition of Knightian uncertainty, and it’s disputed whether it is actually a meaningful concept at all.
The Ellsberg paradox is taken to illustrate Knightian uncertainty—a barrel either holds 2⁄3 yellow and 1⁄3 black balls, or 2⁄3 black and 1⁄3 yellow balls, but you don’t know which.
Personally, I just don’t see a paradox here. You start with probability uniformly distributed: with no other evidence to update on, you assign an equal 50% chance to each possibility, majority-black and majority-yellow. If I had some psychological insight into what the barrel-filler would do, then I could update on that information.
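To make the non-paradox concrete, here’s a minimal sketch of the ordinary Bayesian treatment (the function name is mine): with a 50/50 prior over the two barrel compositions, a single observed draw updates the probability in the usual way.

```python
# A minimal sketch of the Bayesian treatment of the Ellsberg barrel.
# Hypotheses: the barrel is 2/3 yellow or 2/3 black, with a uniform
# 50/50 prior. Drawing one ball and seeing its color lets us update
# in the ordinary way; nothing about the setup blocks this.

def posterior_majority_yellow(prior_yellow: float, drew_yellow: bool) -> float:
    """Posterior P(majority-yellow barrel) after observing one draw."""
    p_draw_given_yellow = 2 / 3 if drew_yellow else 1 / 3
    p_draw_given_black = 1 / 3 if drew_yellow else 2 / 3
    numerator = p_draw_given_yellow * prior_yellow
    evidence = numerator + p_draw_given_black * (1 - prior_yellow)
    return numerator / evidence

# With no draws, the prior stays at 0.5; one yellow draw moves it to 2/3.
print(posterior_majority_yellow(0.5, drew_yellow=True))
```

Nothing here is incalculable: the “ambiguous” barrel is just a prior waiting for evidence.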
In another MIT description of Knightian uncertainty, they offer another example:
An airline might forecast that the risk of an accident involving one of its planes is exactly one per 20 million takeoffs. But the economic outlook for airlines 30 years from now involves so many unknown factors as to be incalculable.
First of all, this doesn’t seem entirely incalculable (assuming we can come up with a definition of ‘economic outlook’). If we want to know, say, airline miles per year, we can pick a range from 0 to an arbitrarily high number X and say “I’m at least 99% sure it’s between 0 and X.” And if we are, say, 70% confident that the economy will grow between now and then, a union bound lets us be at least 69% confident that the figure lands in the narrower interval [current airline miles per year, X]. And so once again, while our error bars are wide, there’s nothing literally incalculable here.
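As a toy illustration of the bounding move described above (all numbers are invented for the example), a union bound turns the two confidence statements into a single calculable probability for the narrower interval:

```python
# A toy version of the bounding move. The numbers are made up for
# illustration; the point is only that wide, honest bounds are still
# calculable probability statements.

current_miles = 8.5e12   # hypothetical current airline miles per year
x_max = 1e14             # deliberately generous upper bound

p_below_cap = 0.99       # "at least 99% sure it's between 0 and X"
p_growth = 0.70          # "70% confident the economy grows by then"

# If growth implies miles stay above today's level, then
# P(current_miles <= future_miles <= x_max) >= 1 - 0.01 - 0.30
# by a union bound over the two failure modes.
p_in_narrow_interval = p_below_cap - (1 - p_growth)
print(round(p_in_narrow_interval, 2))  # 0.69
```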
The same article also acknowledges the controversy with a reverse spin in which almost everything is Knightian, and non-Knightian risk is only when risks are precisely calculable:
Some economists have argued that this distinction is overblown. In the real business world, this objection goes, all events are so complex that forecasting is always a matter of grappling with “true uncertainty,” not risk; past data used to forecast risk may not reflect current conditions, anyway. In this view, “risk” would be best applied to a highly controlled environment, like a pure game of chance in a casino, and “uncertainty” would apply to nearly everything else.
And if we go back to Knight himself:
Knight distinguished between three different types of probability, which he termed: “a priori probability;” “statistical probability” and “estimates”. The first type “is on the same logical plane as the propositions of mathematics;” the canonical example is the odds of rolling any number on a die. “Statistical probability” depends upon the “empirical evaluation of the frequency of association between predicates” and on “the empirical classification of instances”. When “there is no valid basis of any kind for classifying instances”, only “estimates” can be made.
So in fact, even under Knightian uncertainty, we can still make estimates! We don’t have to throw up our hands and say “I just don’t know, we’re in a separate magisterium because this uncertainty is Knightian!” We are just saying “I can’t deduce the probabilities from mathematical argument, and I don’t have a precise definition of the probability distribution, so I must estimate what the outcomes might be and how likely they are.”
And that is exactly what people who put hard-number estimates on the likelihood of AI doom are doing. When Scott Alexander says “33% risk of AI doom” or Eliezer puts it at 90%, they are making estimates, which is precisely the response to Knightian uncertainty that Knight himself described.
When others say “no, you can’t put any sort of hard probability on it, don’t even make an estimate!” they are not displaying Knightian uncertainty, they’re just rejecting the debate topic entirely.
Overall, as I delve into this, the examples of uncertainty purported to be Knightian just seem to be the sort of thing superforecasters have to estimate. Everything on Metaculus is an exercise in dealing with Knightian uncertainty. Every score on Metaculus results from forecasters establishing base rates, updating based on inside view considerations and the passage of time, and then turning that into a hard number estimate which gets aggregated. Nothing incalculable or mysterious there.
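To sketch that last step (Metaculus’s actual aggregation method is more sophisticated than this; the geometric mean of odds is just one common, simple pooling rule, and the forecasts below are made up), here is how several forecasters’ hard numbers can be combined into one:

```python
# One common, simple way to pool several forecasters' probabilities:
# the geometric mean of their odds. This is NOT Metaculus's actual
# aggregation algorithm, just an illustrative stand-in.
import math

def pool_geometric_odds(probs: list[float]) -> float:
    """Pool probabilities by taking the geometric mean of their odds."""
    odds = [p / (1 - p) for p in probs]
    geo = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return geo / (1 + geo)

forecasts = [0.33, 0.90, 0.15]  # hypothetical individual estimates
print(round(pool_geometric_odds(forecasts), 3))
```

The output is just another hard-number estimate; at no point does the procedure hit anything incalculable.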
It’s possible to get meaningful results by System 2 processes like explicit calculation (Knight’s a priori probability), and also by System 1 processes. But System 1 needs feedback to be accurate; that is what makes the difference between educated guesswork and mere guesswork, and feedback isn’t always available.
So in fact, even under Knightian uncertainty, we can still make estimates!
Nothing can stop you making subjective estimates: plenty of things can stop them being objectively meaningful.
And that is exactly what people who put hard-number estimates on the likelihood of AI doom are doing
What’s hard about their numbers? They are giving an exact figure, without an error bar, but that is a superficial appearance: they haven’t actually performed a calculation, and they don’t actually know anything to within ±1%.
https://www.johndcook.com/blog/2018/10/26/excessive-precision/
That’s a reasonable complaint to me! “You can’t use numbers to make estimates because this uncertainty is Knightian” is not.
Is it unreasonable to require estimates to be meaningful?
Define “meaningful” in a way that’s unambiguous and clear to a stranger like me, and I’ll be happy to give you my opinion/argument!
The numbers that go into the final estimate are themselves objective, and not pulled out of the air, or anything else beginning with “a”.
I think there are ideas about “objectivity” and “meaningfulness” that I don’t agree with implicit in your definition.
For example, let’s say I’m a regional manager for Starbucks. I go and inspect all the stores, and then, based on my subjective assessment of how well-organized they seem to be, I give them all a number scoring them on “organization.” Those estimates seem to me to be “meaningful,” in the sense of being a shorthand way of representing qualitative observational information, and yet I would also not say they are “objective,” in the sense that “anybody in their right mind would have come to the same conclusion.”
These point estimates seem useful on their own, and if the scorer wanted to go further, they could add error bars. We could even add another scorer, normalize their scores, and then compare them and do all sorts of statistics.
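As a sketch of the normalize-and-compare step (store names and scores are invented), converting each scorer’s ratings to z-scores removes their personal scale so the rankings can be compared directly:

```python
# A sketch of the "add another scorer and normalize" idea: convert each
# scorer's raw ratings to z-scores so their personal scales wash out,
# then compare stores across scorers. Store names and scores are invented.
from statistics import mean, stdev

def z_scores(scores: dict[str, float]) -> dict[str, float]:
    """Standardize one scorer's ratings to mean 0, stdev 1."""
    mu, sigma = mean(scores.values()), stdev(scores.values())
    return {store: (s - mu) / sigma for store, s in scores.items()}

alice = {"5th Ave": 8, "Main St": 6, "Harbor": 9, "Airport": 5}
bob   = {"5th Ave": 4, "Main St": 3, "Harbor": 5, "Airport": 2}

za, zb = z_scores(alice), z_scores(bob)
# After normalization the two scorers agree on the ordering, even though
# Bob's raw numbers run much lower than Alice's.
print(sorted(za, key=za.get), sorted(zb, key=zb.get))
```

Subjective inputs, and yet the resulting numbers support entirely ordinary statistics.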
On the other hand, I could have several scorers all rank the same Starbucks stores, then gather them in a room and have them tell me their subjective impressions. It’s the same raw data, but now I’m getting the information in the form of a narrative instead of a number.
In all these cases, I claim that we are getting meaningful estimates out of the process, whether represented in the form of a number or in the form of a narrative, and that these estimates of “how organized the regional Starbucks stores are” are not “Knightianly uncertain” but are just normal estimates.
Semantically, you can have “meaningful” information that only means your own subjective impression, and “estimates” that estimate exactly the same thing, and so on.
That’s not addressing the actual point. The point is not to exploit the vagueness of the English language. You wouldn’t accept monopoly money as payment even though it says “money” in the name.
You are kind of implying that it’s unfair of Knightians to reject subjective estimates because those estimates have greater than zero value... but why shouldn’t they be entitled to set the threshold somewhere above eta?
Here’s a quick argument: there’s eight billion people, they’ve all got opinions, and I have not got the time to listen to them all.
I’m not sure what you mean.