I really liked reading this post, and that you documented looking into key claims and doing quick epistemic spot checks.
But it’s worth considering that these are just four cases, out of the entire history of predictions made about technological developments. That’s a very small sample.
I was expecting you might say something like this. I do want to point out how small sample sizes are incredibly useful. In How To Measure Anything, Hubbard gives the example of estimating the weight in grams of the average jelly baby. Now, if you’re like me, by that point you’ve managed to get through life not really knowing how much a gram is. What’s the right order of magnitude? 10 grams? 1000 grams? 0.1 grams? What Hubbard points out is that if I tell you that a random one out of the packet weighs 190g, suddenly you have a massive amount of information about even what order of magnitude is sensible. The first data point is really valuable for orienting in a very wide open space.
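That single-sample update can be sketched with a quick rejection-sampling simulation. To be clear, the specific numbers here are my own illustrative assumptions, not anything from Hubbard: a log-uniform prior from 0.1 g to 1000 g, item-to-item variation of about a factor of 1.5, and a ±10 g acceptance window around the observed 190 g.

```python
import math
import random

random.seed(0)

# Prior: the true average weight could be anywhere from 0.1 g to 1000 g,
# uniform in log10 -- i.e. we genuinely don't know the order of magnitude.
# Assumption: an individual item varies around the true average by a
# lognormal factor of ~1.5 (illustrative, not from Hubbard).
accepted = []
for _ in range(200_000):
    true_mean = 10 ** random.uniform(-1, 3)
    sample = true_mean * math.exp(random.gauss(0, math.log(1.5)))
    if abs(sample - 190) < 10:  # observed: one random item weighs ~190 g
        accepted.append(true_mean)

accepted.sort()
lo = accepted[int(0.05 * len(accepted))]
hi = accepted[int(0.95 * len(accepted))]
print(f"90% posterior interval for the average: {lo:.0f} g to {hi:.0f} g")
# The prior spanned four orders of magnitude; after a single observation,
# the 90% interval spans well under one order of magnitude.
```

The point survives any reasonable choice of those assumed parameters: one data point can't pin down the average precisely, but it rules out almost all of a prior that spans several orders of magnitude.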
I don’t have time to respond in detail, so I’ll just say this: in this situation, looking into confident predictions that a technology cannot be made or is very far out, and finding that some of them were made when it was in fact only days/months away, is very surprising, and has zoomed me in quite substantially on what sorts of theories make sense here, even given many of the qualifiers above.
I do want to point out how small sample sizes are incredibly useful.
Yeah, I think that point is true, valuable, and relevant. (I also found How To Measure Anything very interesting and would recommend it, or at least this summary by Muehlhauser, to any readers of this comment who haven’t read those yet.)
In this case, I think the issue of representativeness is more important/relevant than sample size. On reflection, I probably should’ve been clearer about that. I’ve now edited that section to make that clearer, and linked to this comment and Muehlhauser’s summary post. So thanks for pointing that out!