Regardless of how complicated the design is, how much the SI wants to know about the design or the reasons for the SI’s interest, the SI will almost certainly not bother actually running the program or simulating the design because there will almost certainly be much better ways to accomplish the same ends.
Err no. Actually the SI would be smart enough to understand that the optimal algorithm for perfect simulation of a physical system requires:
a full quantum computer with at least as many qubits as the original system
at least as much energy and time as the original system
In other words, there is no free lunch and no shortcut: if you really want to build something in this world, you can't be 100% certain that it will work until you actually build it.
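A back-of-the-envelope sketch of the resource claim above: classically storing the full state of an n-qubit quantum system takes 2^n complex amplitudes, so the memory cost explodes long before you reach interesting system sizes. (The figures below are my own illustration, assuming 16 bytes per complex amplitude; they are not from the thread.)

```python
# Memory needed to hold the full state vector of an n-qubit quantum
# system on a classical machine: 2**n complex amplitudes.
# Assumes 16 bytes per amplitude (a double-precision complex number).

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes required for the full 2**n amplitude vector."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    # 10 qubits: ~16 KiB; 30 qubits: ~16 GiB; 50 qubits: ~16 PiB.
    print(n, state_vector_bytes(n))
```

This is why simulating the system without quantum hardware of comparable size is not just hard but physically out of reach past a few dozen qubits.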
That being said, the next best thing is a very close approximate simulation.
From Wikipedia on "formal verification": the links mention that in the few cases where large software was formally verified, the cost was astronomical. The article also says formal methods are used in hardware design, though I'm not sure how that relates to simulation; I know extensive physical simulation is also used there. It sounds from the wiki like formal verification can remove the need for simulating all possible states. (Note that in my analysis above I was considering simulating only one timeslice, not all possible configurations; the latter is obviously far, far worse.) So it sounds like formal verification is a tool built on top of physical simulation to reduce the exponential explosion.
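To make the "all possible states" point concrete, here is a toy sketch (my own example, not from the thread): a 1-bit full adder can be checked by brute force over all 8 input cases, but the same exhaustive approach for an n-bit adder needs 2^(2n+1) cases, which is exactly the blowup formal verification sidesteps by proving the property once, symbolically.

```python
from itertools import product

def full_adder(a: int, b: int, cin: int) -> tuple:
    """Gate-level 1-bit full adder: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Exhaustive "simulate every state" check: only 2**3 = 8 cases here.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert s + 2 * cout == a + b + cin

# The same brute-force check for an n-bit adder needs 2**(2n + 1)
# cases; for n = 64 that is already astronomically large, which is
# why hardware teams reach for symbolic proof instead.
print(2 ** (2 * 64 + 1))
```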
You can imagine that:
there are probably techniques available to a SI that require neither correctness proofs nor running or simulating anything—although I would not want to have to imagine what they would be.
But imagining things alone does not make them exist, and we know from current theory that absolute physical knowledge requires perfect simulation. There is a reason why we investigate time/space complexity bounds. No SI, no matter how smart, can do the impossible.
In other words, there is no free lunch and no shortcut: if you really want to build something in this world, you can't be 100% certain that it will work until you actually build it.
You can’t be 100% certain even then. Testing doesn’t produce certainty—you usually can’t test every possible set of input configurations.
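A quick illustration of why exhaustive testing is infeasible (my own arithmetic, with an assumed test rate, not a figure from the thread): even a function taking just two 32-bit arguments has 2^64 input configurations.

```python
# How long would it take to enumerate every input of an interface
# with the given number of input bits? Assumes a (generous)
# hypothetical rate of one billion tests per second.

def years_to_test(input_bits: int, tests_per_second: float = 1e9) -> float:
    """Years needed to run every possible input at the given rate."""
    seconds = (2 ** input_bits) / tests_per_second
    return seconds / (60 * 60 * 24 * 365)

# Two 32-bit arguments: 64 input bits, roughly 585 years at 1e9/s.
print(years_to_test(64))
```

And that is for a trivially small interface; anything with real state or larger inputs is hopeless by brute force, which is the point above.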