Existential risk from AI is not under 5%. If anyone claims it is, that claim functions, in emotional practice, as an instant-win knockdown argument unless it is countered; it should be countered directly and forcefully, not weakly deflected.
If you talk about the probability of a coin coming up heads, that is a question that well-informed people can be expected to agree on—since it can be experimentally determined.
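The coin case can be made concrete: the probability of heads is experimentally measurable, so well-informed observers converge on the same number. A minimal sketch (the fair coin and trial count are illustrative choices, not from the original):

```python
import random

random.seed(0)

# Estimate P(heads) for a fair coin by repeated trials: a probability
# that can be measured experimentally, so informed observers converge
# on roughly the same value as the number of trials grows.
flips = 100_000
heads = sum(random.random() < 0.5 for _ in range(flips))
estimate = heads / flips
print(f"estimated P(heads) = {estimate:.3f}")
```

Two people running this experiment independently would get estimates that agree to within sampling error, which is exactly what the obliteration case lacks.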
However, the probability of civilisation being terminally obliterated isn’t a probability that can easily be measured by us. Either all earth-sentients will be obliterated, or they won’t be. But we can’t assign probabilities and check them afterwards using frequency analysis. We can’t have a betting market on the probability—since one side never pays out. From the perspective of a human, the probability is just not meaningful—there’s no way for a human to measure it.
Possibly our distant descendants will figure out a reasonable estimate of what the chances of oblivion are (to a sufficiently well-informed agent), e.g. by recreating the Earth many times and repeatedly running the experiment. I think claims to know the results of that experiment would represent overconfidence. The fraction of Earths obliterated by disasters at the hands of machines could be very low, very high, or somewhere in between—we just don’t know with very much confidence.
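The "rerun Earth" thought experiment amounts to treating oblivion as an observable frequency. A toy sketch, where the true risk rate is an arbitrary placeholder and not an estimate of anything:

```python
import random

random.seed(1)

# Thought-experiment sketch: if one could "rerun Earth" many times,
# the chance of oblivion would become an observable frequency.
# TRUE_RISK is an invented placeholder, chosen only to make the
# simulation run; nothing in the argument tells us its real value.
TRUE_RISK = 0.3
runs = 10_000
obliterated = sum(random.random() < TRUE_RISK for _ in range(runs))
fraction = obliterated / runs
print(f"fraction of runs obliterated: {fraction:.3f}")
```

The point of the passage is that only an agent with access to many such runs could read the frequency off; we get exactly one run.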
Well, of course, “we don’t know with very much confidence” is a statement about the standard deviation, not about the mean. The standard deviation may affect a legal decision or a human argument, but not the probability estimate itself.
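The mean/standard-deviation distinction can be made concrete with a toy posterior over the unknown probability. The Beta parameters below are illustrative pseudo-counts, not real risk data:

```python
import math

# Hypothetical Beta(a, b) distribution over an unknown probability p.
# The mean is the point estimate of p; the standard deviation measures
# how confident we are in that estimate. They are different quantities:
# "we don't know with much confidence" speaks to std, not to the mean.
a, b = 2.0, 8.0  # illustrative pseudo-counts, invented for this sketch
mean = a / (a + b)
var = (a * b) / ((a + b) ** 2 * (a + b + 1))
std = math.sqrt(var)
print(f"mean = {mean:.3f}, std = {std:.3f}")
```

Two distributions can share the same mean while one is far wider than the other, which is the gap the objection is pointing at.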
The issue is not really about standard deviations; it is that probability is subjective. Humans are in a very bad position to determine this probability—we have little relevant experience, we can’t usefully bet on it, and if people disagree, it is very difficult to tell who is right. The “human probability” seems practically worthless—a reflection of our ignorance, not anything that has much to do with the event itself. We need that probability to guide our actions—but we can hardly expect two people to agree on it.
The nearest thing I can think of which is well defined is the probability that our descendants put on the event retrospectively: a probability estimate, by wiser and better-informed creatures, of the chances of a world like our own making it. That estimate could, quite plausibly, be very low or very high.
Given a certain chunk of information, the evidence in it isn’t subjective. Priors may be subjective, although there is a class of cases where they’re objective too. “It is difficult to tell who is right” is an informative statement about the human decision process, but not really informative about probability.
Given a certain chunk of information, the evidence in it isn’t subjective. Priors may be subjective, although there is a class of cases where they’re objective too.
Well, two agents with the same priors can easily come to different conclusions as a result of observing the same evidence. Different cognitive limitations can result in that happening.
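The claim that shared priors plus shared evidence can still yield different conclusions under cognitive limits can be illustrated with a toy Beta-Bernoulli update. The agents, their memory limit, and the data are all invented for this sketch:

```python
# Two agents share the same prior and see the same evidence, but one
# has a cognitive limitation: it can only retain the first three
# observations. Same prior + same data, yet different posteriors.
# Everything here is a toy illustration, not a model from the source.

def posterior_heads_bias(prior_heads, prior_tails, data):
    # Standard Beta-Bernoulli update: returns the posterior mean of
    # P(heads) given pseudo-count priors and a list of 0/1 outcomes.
    heads = sum(data)
    tails = len(data) - heads
    a = prior_heads + heads
    b = prior_tails + tails
    return a / (a + b)

evidence = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 heads in 10 flips

full_agent = posterior_heads_bias(1, 1, evidence)         # processes all data
limited_agent = posterior_heads_bias(1, 1, evidence[:3])  # memory limit of 3

print(full_agent, limited_agent)
```

Both agents are doing correct Bayesian updating from the same prior; the divergence comes purely from how much of the evidence each can process.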