I claim you can have asymptotic alignment without a formally certified proof of asymptotic alignment, but that it would be surprising to have empirical asymptotic alignment without the model confidently telling you that it expects that someday it, or a successor, will be able to give a formal proof of alignment.
tl;dr it seems that you can get basic tiling to work by proving that there will be safety proofs in the future, rather than trying to prove safety directly.
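One way to make this precise (my reading of the general schema, not necessarily the exact construction in the linked post) is Löb's theorem: instead of proving Safe directly, it suffices for the agent's theory to prove that a future proof of Safe would be sound, i.e. that provability of Safe implies Safe.

```latex
% Löb's theorem: if a theory T proves that provability of Safe
% implies Safe, then T already proves Safe outright.
\[
  T \vdash \bigl(\Box_T\,\mathrm{Safe} \rightarrow \mathrm{Safe}\bigr)
  \quad\Longrightarrow\quad
  T \vdash \mathrm{Safe}
\]
```

So "proving there will be safety proofs in the future" can do the work of a direct safety proof, provided the agent trusts those future proofs.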
Reminded me of James’s https://www.lesswrong.com/posts/akuMwu8SkmQSdospi/working-through-a-small-tiling-result