I think I agree with all of that under the definitions you’re using (and I too prefer the bounded rationality version). I think in practice I was using words somewhat differently than you.
(The rest of this comment is at the object level and is mostly for other readers, not for you)
> Saying it’s “crazy” means it’s low probability of being (part of) the right world-description.
The “right” world-description is a very high bar (all models are wrong but some are useful), but going with the spirit of what you’re saying, I think I might not endorse calling bio anchors “crazy” by this definition; I’d say it has more like a “medium” probability of being a generally good framework for thinking about the domain, plus an expectation that lots of the specific details would change with more investigation.
Honestly, I didn’t have any precise meaning in mind for “crazy” in my original comment; I was mainly using it as shorthand to gesture at the fact that the claim is in tension with reductionist intuitions, and that the legibly written support for the claim is weak in an absolute sense.
> Saying it’s “the best we have” means it’s the clearest model we have—the most fleshed-out hypothesis.
I meant a higher bar than this; more like “the single most relevant and informative input for shaping your views on the topic” (beyond extremely basic stuff like observing that humanity can do science at all, or things like reference-class priors). For instance, I also claim it is better than “query your intuitions about how close we are to AGI, and how fast we are going, to come up with a time until we get to AGI”. So it’s not just the clearest or most fleshed-out model; it’s also the one that should move you the most, even after weighing various illegible or intuition-driven arguments. (Obviously this is scoped only to the arguments I know about; for all I know other people have better arguments that I haven’t seen.)
If it were merely the clearest model or most fleshed-out hypothesis, I agree it would usually be a mistake to make a large belief update or take big consequential actions on that basis.