I doubt that in the particular case of AGI, the nonstandard position’s complexity exceeds the typical AI expert’s complexity limit. I know a few AI experts, and they can handle extreme complexity. For that matter, I think AGI is well within the complexity limits of general computer science/mathematics/physics/science experts and at least some social science experts (e.g. Robin Hanson).
In fact, the number of such experts who have looked seriously at AGI and come to different conclusions strongly suggests to me that the jury is still out on this one. The answers, whatever they are, are not obvious or self-evident.
Repeat the same argument, but s/AGI/Bayesianism/. Bayesianism is routinely and quickly adopted within the community of mathematicians/scientists/software developers when it is useful and produces better answers. The conflict between Bayesianism and frequentism that is sometimes alluded to here is simply not an issue in everyday practical work.
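To make the "not an issue in practice" point concrete: in everyday work the two approaches often give nearly identical answers, and practitioners just pick whichever is convenient. A minimal sketch, estimating a coin's bias (the function names, prior, and numbers are my own illustration, not from the comment above):

```python
def posterior_mean(heads, tails, alpha=1.0, beta=1.0):
    """Bayesian estimate: mean of the Beta(alpha+heads, beta+tails)
    posterior under a Beta(alpha, beta) prior -- a one-line conjugate
    update, no philosophical machinery required."""
    return (alpha + heads) / (alpha + beta + heads + tails)

def mle(heads, tails):
    """Frequentist maximum-likelihood estimate: the raw success rate."""
    return heads / (heads + tails)

# With a reasonable amount of data the two estimates agree closely...
print(mle(70, 30))             # 0.7
print(posterior_mean(70, 30))  # ~0.696

# ...and with a tiny sample the Bayesian estimate is merely shrunk
# toward the uniform prior instead of committing to an extreme value.
print(mle(3, 0))               # 1.0
print(posterior_mean(3, 0))    # 0.8
```

In practice the choice between these is driven by convenience and sample size, not by allegiance to a camp, which is the point being made.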