I think there is some value in the original bio anchors exercise, if it's done privately with full awareness of the limitations of the approach, but I think it's a mistake to try to publish an estimate with that many caveats.
I've been in a much smaller, much less contentious, vastly lower-stakes version of that myself, and in more than one case I made the decision not to publish a number. My main example was very mundane: In 2015, I was writing a report estimating the market size for carbon fiber composites in automotive applications in 2025. I was able (thanks to skills I learned here!) to explain to my bosses that there was no good way to estimate this that would actually be useful to anyone making decisions, because different reasonable assumptions gave scenarios with answers varying by >3 OOMs. My solution was to state that fact plainly in the intro of my report, and otherwise focus on a flowchart of possible pathways and how to respond to different hypothetical events.
This is also a big part of why I’ve been so impressed with the AI 2027 crew. They’ve been about as open as it is possible to be about the implications and limits of their approach, what they’re actually saying and doing, and why they’re expressing it the way they are. They have also been incredibly gracious with the large subset of people who constantly misinterpret, oversimplify, or otherwise ignore the things they actually say, and are working hard to communicate clearly despite that.