I stopped listening fairly quickly, after deciding it was rubbish from a Bayesian perspective. Specifically, I stopped when he said that the future of humanity is different from Russian roulette because the future can’t be modeled by probability. This is the belief that there is a basic “probability-ness” that dice and gun chambers have but people don’t, and that things with “probability-ness” can be described by probability while things without it can’t be. But of course, we’re all fermions and bosons in the end: there is no such thing as “probability-ness.” Probability is simply what happens when you reason from incomplete information.
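As an aside, the “incomplete information” point is easy to sketch in a few lines of Python (a toy of my own, not anything from the talk): the revolver’s cylinder lands in a definite chamber every time, so there is no “probability-ness” in the gun itself, yet an observer who doesn’t know the chamber can only assign a uniform 1/6 credence, and that credence matches the long-run frequency.

```python
import random

def spin_and_fire(rng):
    # The cylinder lands in a definite chamber -- fully determined,
    # no "probability-ness" in the gun. Chamber 0 holds the round.
    chamber = rng.randrange(6)
    return chamber == 0

# An observer ignorant of the chamber position assigns 1/6 to "fires";
# over many trials that credence tracks the observed frequency.
rng = random.Random(42)
trials = 60_000
fires = sum(spin_and_fire(rng) for _ in range(trials))
print(f"observed frequency: {fires / trials:.3f}")  # close to 1/6
```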
Deutsch is arguing (and I think correctly) that there’s a difference between knowing the full range of possibilities in a system and not knowing it.
That seems pretty reasonable. “What will the future be like?” is a pretty open-ended question.
However, he was applying this same logic to “will civilization be destroyed?”, where “destroyed” and “not destroyed” pretty much exhaust the range of possibilities.
Unless maybe he meant that you have to know every possible way civilization could be destroyed in order to estimate a probability, which sounds like casting around for a reason why civilization doesn’t have “probability-ness.”