I’m not sure if this is what Eliezer was taking a swing at, but this clicked while reading and I think it’s a similar underlying logic error. Apologies to those not already familiar with the argument I’m referencing:
There’s a stats argument that’s been discussed here before, the short version of which is roughly: “Half of all humans that will ever be born have probably already been born; we think that’s about N people. At current birth rates we will make that many people again in X years, and humanity will thus on average go extinct Soon™” (insert proper writeup here). This fails because it privileges a metric we have no principled reason to think is special versus any of the equally sensible metrics we could have chosen and used to make the same argument: years anatomically modern humans have existed, years Earth life has existed, and so on. We could make even more pessimistic estimates by focusing on, e.g., the total energy consumed by human civilization. And lest we think there’s a finite number of such properties to choose between, we can also combine any set of seemingly relevant metrics with arbitrary weights.
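To make the shape of the trick concrete, here’s a minimal sketch of the Doomsday-style arithmetic; every number below is an illustrative assumption, not a figure from the argument itself:

```python
# Doomsday-style back-of-envelope, with made-up illustrative numbers.
births_so_far = 100e9        # assume roughly 100 billion humans born to date
birth_rate_per_year = 130e6  # assume roughly 130 million births per year

# Under the "I'm a random sample of all humans ever born" assumption,
# the median guess is that we're halfway through: as many births remain
# as have already happened.
births_remaining_median = births_so_far

years_remaining = births_remaining_median / birth_rate_per_year
print(f"Median doomsday horizon: ~{years_remaining:,.0f} years")
# Swap the metric (years of human existence, energy consumed, an arbitrary
# weighted combination, ...) and the same procedure yields a very different horizon.
```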
The statistical trick that argument rests on works in the motivating example because [serial numbers on industrially produced parts] is something we have strong cause to think is tied to the current number of such items in existence, and it is plausibly the single best property we could choose for making that prediction.
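For contrast, here’s a minimal sketch of the serial-number case where the trick does work, assuming serials are sampled without replacement from 1..N; the estimator shown is the standard one for the German tank problem, and the sample serials are made up:

```python
# German tank estimator: given k serials sampled from 1..N, estimate N as
#   N_hat = m + m/k - 1, where m is the largest serial observed.
def estimate_total(serials):
    k = len(serials)
    m = max(serials)
    return m + m / k - 1

print(estimate_total([19, 40, 42, 60]))  # -> 74.0 with these toy serials
```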
These timeline estimates are failing for what feels like very similar reasons. Why specifically *that* graph/formula for timelines, with those and only those metrics, and with factors close to the ones chosen? If we’re discussing one concrete model rather than a broad family of underconstrained models, we’re very likely privileging the hypothesis and making a wild guess with extra steps.
Adding some references:
- The Wikipedia article for this family of statistical techniques is German tank problem
- The Wikipedia article for its application to x-risk inference is Doomsday argument
- There’s even an entire article about the Self-Indication Assumption rebuttal to the Doomsday Argument
- A much shorter explanation of why the classic Doomsday argument fails was posted here on LW by Stuart Armstrong