arguing against having probabilistic beliefs about events which are unprecedented
Sorry, I’m definitely not saying this. First, in the linked post (see here), I argue that our beliefs should still be probabilistic, just imprecisely so. Second, I’m not drawing a sharp line between “precedented” and “unprecedented.” My point is: Intuitions are only as reliable as the mechanisms that generate them. And given the sparsity of feedback loops[1] and unusual complexity here, I don’t see why the mechanisms generating AGI/ASI forecasting intuitions would be truth-tracking to a high degree of precision. (Cf. Violet Hour’s discussion in Sec. 3 here.)
the level of precedentedness is continuous
Right, and that’s consistent with my view. I’m saying, roughly, the degree of imprecision (/width of the interval-valued credence) should increase continuously with the depth of unprecedentedness, among other things.
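For concreteness, here's a minimal toy sketch of what "width increasing continuously with unprecedentedness" could look like. This is my own illustration, not anything from the linked post; the linear widening rule, the `max_width` parameter, and the numbers are all made up:

```python
# Toy model: an interval-valued credence whose width grows continuously
# with a stipulated "unprecedentedness" parameter u in [0, 1].
# The linear widening rule and max_width are arbitrary illustrative choices.

def interval_credence(center: float, u: float, max_width: float = 0.8) -> tuple[float, float]:
    """Return a (lower, upper) credence interval around `center`.

    center:    best-guess probability, ignoring imprecision
    u:         depth of unprecedentedness (0 = routine, 1 = deeply novel)
    max_width: interval width at u = 1
    """
    half_width = 0.5 * max_width * u   # width scales continuously with u
    return max(0.0, center - half_width), min(1.0, center + half_width)

print(interval_credence(0.7, u=0.05))  # routine event: ~(0.68, 0.72)
print(interval_credence(0.7, u=0.90))  # deeply novel event: ~(0.34, 1.0)
```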
forecasters have done OK at predicting increasingly unprecedented events
As I note here, our direct evidence only tells us (at best) that people can successfully forecast up to some degree of precision, in some domains. How we ought to extrapolate from this to the case of AGI/ASI forecasting is very underdetermined.
(Yes, I’m aware you meant imprecise probabilities. These aren’t probabilities, though (in the same sense that a range of numbers isn’t a number), e.g., you’re unwilling to state a median.)
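(To make the "unwilling to state a median" point concrete, here's a toy example of my own; the choice of exponential distributions and the rates are arbitrary. If a credal state is a set of distributions, each member has its own median, so the state as a whole only pins down a median interval, not a single number:)

```python
# Toy example: a credal set over "years until some event", represented by
# a family of exponential distributions. Each member has its own median,
# so the credal state only determines a median *interval*, not one number.
import math

def exponential_median(rate: float) -> float:
    """Median of an exponential distribution: ln(2) / rate."""
    return math.log(2) / rate

rates = [1 / 10, 1 / 30, 1 / 100]  # mean waiting times of 10, 30, 100 years
medians = [exponential_median(r) for r in rates]

print(f"median interval: [{min(medians):.1f}, {max(medians):.1f}] years")
# -> median interval: [6.9, 69.3] years
```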
(Replying now because of the “missed the point” reaction:) To be clear, my concern is that someone without more context might pattern-match the claim “Anthony thinks we shouldn’t have probabilistic beliefs” to “Anthony thinks we have full Knightian uncertainty about everything / doesn’t think we can say any A is more or less likely than any B”. From my experience having discussions about imprecision, conceptual rounding errors are super common, so I think this is a reasonable concern even if you personally find it obvious that “probabilistic” should be read as “using a precise probability distribution”.
[1] On the actual information of interest (i.e., information about AGI/ASI), that is, not just proxies like forecasting progress in weaker or narrower AI.