though I think you don’t need to invoke Knightian uncertainty. I think it’s enough to model a very large attack surface combined with a more intelligent adversary.
One of the problems I’m pointing to is that you don’t know what the attack surface is. This puts you in a pretty different situation than if you have a known large attack surface to defend, even against a smarter adversary (e.g. the whole length of a border; or every possible sequence of Go moves).
Separately, I may be being a bit sloppy by using “Knightian uncertainty” as a broad handle for cases where you have important “unknown unknowns”, aka you don’t even know what ontology to use. But it feels close enough that I’m by default planning to continue describing the research project outlined above as trying to develop a theory of Knightian uncertainty in which Bayesian uncertainty is a special case.