Unless you think transformative AI won’t be trained with some variant of SGD, I don’t see why this objection matters.
Also, I think the a priori methodological problems with counting arguments in general are decisive: you always need some kind of mechanistic story for why a "uniform prior" makes sense in a particular context; you can't just assume it.
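To illustrate why the choice of space matters (a toy sketch of my own, not anything from the training-dynamics literature): a prior that is uniform over one parameterization is generally far from uniform over the objects you actually care about. Here, sampling the weights of a single threshold unit uniformly induces a very lopsided distribution over the boolean functions it computes — so "count the functions and assume uniform" gives different answers than "count the parameter volume":

```python
import itertools
import random
from collections import Counter

random.seed(0)
inputs = list(itertools.product([0, 1], repeat=2))

def func_of(w1, w2, b):
    # The boolean function on {0,1}^2 computed by one linear threshold unit.
    return tuple(int(w1 * x1 + w2 * x2 + b > 0) for x1, x2 in inputs)

N = 100_000
counts = Counter()
for _ in range(N):
    # A "uniform prior" over the parameters: each drawn from Uniform(-1, 1).
    w1, w2, b = (random.uniform(-1, 1) for _ in range(3))
    counts[func_of(w1, w2, b)] += 1

# The induced distribution over *functions* is nowhere near uniform:
# constant functions occupy large parameter-space volumes, XOR has none.
for f, c in counts.most_common():
    print(f, c / N)
```

The point of the toy: without a mechanistic story about which space the measure lives on (parameters? functions? circuits?), "uniform" is doing all the work in a counting argument, and different natural choices give wildly different counts.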