I partially agree, but the distinction between “internal” and “external” results is fuzzier and more complicated than you imply. Ultimately, it depends on the original problem you started with. For example, if you only care about prime numbers, then most results of complex analysis are “internal”, with the exception of results that imply something about the distribution of prime numbers. However, if complex functions are a natural way to formalize the original problem, then the same results become “external”.

In our case, the original problem is “creating a mathematical theory of intelligent agents”. (Or rather, the problem is “solving AI alignment”, or “preventing existential risk from AI”, or “creating a flourishing future for human civilization”, but let’s suppose that the path from there to “creating a mathematical theory of intelligent agents” is already clear; in any case that’s not related specifically to IB.) Infra-Bayesianism is supposed to be an actual ingredient in this theory of agents, not just some tool brought from the outside. In this sense, it already starts out as somewhat “external”.

To give a concrete example, you said that results about IB multi-armed bandits are “internal”. While I agree that these results are only useful as very simplistic toy models, they are potentially necessary steps towards stronger regret bounds in the future. At what point does it become “external”? Taking it to the extreme, I can imagine regret bounds so powerful that they would serve as substantial evidence that an algorithm satisfying them is AGI or close to AGI. Would such a result still be “internal”?! Arguably not, because AGI algorithms are very pertinent to what we’re interested in!

You can also take the position that any result without direct applications to existing, practical, economically competitive AI systems is “internal”. In that case, I am comfortable with a research programme that only has “internal” results for a long time (although not everyone would agree). But this also doesn’t seem to be your position, since you view results about Newcombian problems as “external”.
