[Question] What qualities does an AGI need to recognize the risk of false vacuum decay, without hardcoding physics theories into it?

For this question, let’s assume the worst about the universe:
1. A false vacuum decay can be triggered.
2. False vacuum decay will destroy the universe, i.e. inside the bubble of true vacuum, life/computation will be impossible.
3. Faster-than-light travel is impossible, i.e. the bubble of true vacuum cannot be outrun.

The AGI isn’t aware of the above. What qualities does it need in order to realize that false vacuum decay is dangerous?
I think an aligned AGI may actually be the more dangerous case here, depending on how it’s aligned, since it may be restricted in the methods it can use to stop the rest of the universe from triggering a false vacuum decay.
Since a false vacuum decay would destroy the AGI as well, not just humanity, even an unaligned AGI would view it as a risk to itself.
However, if the AGI doesn’t care about its own existence, it will not care about the dangers of false vacuum decay either.

But what if the AGI, following Occam’s Razor, adopts a simpler theory of physics that agrees with everything observable but rules out false vacuum decay, and that theory turns out to be wrong?
This dilemma is different from Pascal’s Wager, in the sense that false vacuum decay is a scientific hypothesis, not a religious belief.
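
To make the Occam’s Razor worry concrete, here is a toy Bayesian model-comparison sketch (all numbers made up for illustration). The point is that a simplicity prior can lower the credence assigned to a decay-permitting theory even when both theories fit the observations equally well, but it never drives that credence to zero unless the prior was zero to begin with:

```python
# Toy Bayesian comparison of two theories that fit the data equally well.
# A simplicity prior (Occam's Razor) favors the theory without vacuum decay.
# All probabilities below are invented purely for illustration.

likelihood = {"no_decay": 0.9, "allows_decay": 0.9}  # both explain the observations
prior      = {"no_decay": 0.7, "allows_decay": 0.3}  # simpler theory gets more prior mass

unnormalized = {m: prior[m] * likelihood[m] for m in prior}
total = sum(unnormalized.values())
posterior = {m: p / total for m, p in unnormalized.items()}

print(posterior)  # {'no_decay': 0.7, 'allows_decay': 0.3}
# Occam's Razor discounts the decay-permitting theory, but as long as its
# prior is nonzero, the agent keeps some credence on it.
```

So the failure mode is not that Occam’s Razor forces the credence to zero, but that the AGI might then act as if the discounted hypothesis can be ignored.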

I struggle to see how an AGI could be programmed to tread carefully around false vacuum research without hardcoding it to reject any physics theory that leaves no room for false vacuum decay.
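
One non-hardcoded alternative would be to make caution fall out of expected-utility reasoning: keep a nonzero credence on the dangerous theory and weight the loss of the entire future heavily. A minimal sketch, with entirely made-up credences and utilities, might look like this:

```python
# Toy expected-utility check an agent might run before a risky experiment.
# All numbers are assumptions for illustration, not real physics estimates.

P_DECAY_POSSIBLE = 0.03        # credence in the decay-permitting theory
P_TRIGGER_IF_POSSIBLE = 0.01   # chance this experiment nucleates a bubble, if possible
VALUE_OF_KNOWLEDGE = 1e6       # benefit of running the experiment
VALUE_OF_EVERYTHING = 1e30     # finite stand-in for the value of the universe's future

def expected_utility(run_experiment: bool) -> float:
    if not run_experiment:
        return 0.0
    p_catastrophe = P_DECAY_POSSIBLE * P_TRIGGER_IF_POSSIBLE
    return VALUE_OF_KNOWLEDGE - p_catastrophe * VALUE_OF_EVERYTHING

print(expected_utility(True))   # hugely negative: don't run the experiment
print(expected_utility(False))  # 0.0
# No physics is hardcoded here; the caution comes from never assigning zero
# probability to the dangerous theory and valuing the future highly enough.
```

Of course, this just pushes the problem into how the credences and the value of the future are set, which is arguably the same question in different clothes.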