Everything outside our vicinity, e.g., outside our solar system, will be calculated planetarium-style, and not from the level of particle physics.
If the physics on which ultra-high-energy cosmic ray sources run is not the same physics on which we run but only an approximation thereof, we might eventually notice weird things with them.
The way you typically converge an adaptive simulation is to start with a cheap coarse-grained approximation, then:
1. Run your simulation.
2. Check whether it was accurate enough on the whole for you.
  2b. If so, quit.
3. Do some a posteriori error estimation to find out where the coarseness was most damaging to your accuracy.
  3b. Replace the coarse discretization in those locations (or time steps, models, etc.) with a more refined version.
4. Go back to step 1.
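A minimal runnable sketch of that loop, on a toy 1D problem of my own invention (the function, tolerance, and midpoint error estimator are illustrative assumptions, not anything from the text): approximate a sharply peaked function on a coarse grid, estimate the error per cell a posteriori, and refine only the cells where the coarseness hurts most.

```python
import math

def f(x):
    return math.exp(-100 * (x - 0.5) ** 2)  # sharp bump: needs local refinement

def midpoint_error(a, b):
    # a posteriori estimate: linear interpolation vs. the true value at the midpoint
    mid = 0.5 * (a + b)
    return abs(f(mid) - 0.5 * (f(a) + f(b)))

def adapt(tol=1e-3, max_iter=50):
    xs = [0.0, 0.25, 0.5, 0.75, 1.0]            # cheap coarse-grained start
    for _ in range(max_iter):                    # step 4: go back to step 1
        errs = [midpoint_error(a, b) for a, b in zip(xs, xs[1:])]
        if max(errs) < tol:                      # steps 2/2b: accurate enough? quit
            return xs
        new_xs = []                              # steps 3/3b: refine only the bad cells
        for (a, b), e in zip(zip(xs, xs[1:]), errs):
            new_xs.append(a)
            if e >= tol:
                new_xs.append(0.5 * (a + b))     # replace coarse cell with a finer one
        new_xs.append(xs[-1])
        xs = new_xs
    return xs

grid = adapt()
```

The resulting grid stays coarse where the function is flat and is dense only near the bump, which is exactly the property that would make far-away, rarely-inspected regions cheap for a hypothetical Simulator.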
I’m not sure how this analogy affects astrophysicists’ decision-making processes, though. After seeing odd results, what do you say to yourself (and any hypothetical omniscient listeners) in a loud voice?
“Wow, that certainly looked wrong! Clearly something funny is going on which requires more investigation!” (saving the entire universe from fate 2b)
or
“Well, that’s close enough for me! Nothing strange or erroneous going on there!” (saving our local chunk of universe from being refined-into-something-else via fate 3b)
Personally I would say the latter, but historically the UHECR community has been prone to say things like the former. (E.g., when AGASA failed to detect the GZK cutoff, everyone was like “there must be new physics allowing particles to evade the cutoff!”, as opposed to “there must be something wrong with the experiment”—but given that all later experiments have seen a cutoff, it’s most likely that AGASA did indeed do something wrong. OTOH I can’t recall anyone making “planetarium”-like hypotheses, except jokingly (I suppose).)
EDIT: Also, I can’t count the times people have claimed to detect an anisotropy in the UHECR arrival direction distribution and then retracted them after more statistics was available. Which doesn’t surprise me, given the badly unBayesian ad-hockeries (to borrow E.T. Jaynes’ term) they use to test them. And now, I’ll tap out for, ahem, decision-theoretical reasons.
How confident are you that we would notice?
If the heuristics of the simulator are good enough, it might just do something akin to detecting our attempts at analyzing low-res data, and dynamically generate something relevant and self-consistent.
Or the simulation might be paused while the system or the engineers come up with a way to resolve the problem; to us it would still appear as if the whole thing had been at the same resolution all along, since whatever they change while we’re paused happens in zero time for us.
Honestly, not much, at least in the foreseeable future—data from cosmic ray experiments are way too noisy to discriminate between source models. (We’ve been able to rule out the hypothesis that a sizeable fraction of UHECRs are decay products of as-yet-unknown extremely heavy particles, but that’s pretty much it.) But see this. (I’ve tried a dozen times to download the paper and failed—are the Simulators messing with me? Aaaargh.)
Ah, I’ve read that article before. From what I understood, they essentially conclude “Here’s a way we could tell the difference if we were simulated with system X. However, it’s unlikely that we would be simulated with system X.” without giving all that much evidence concerning other possible simulation systems.
Personally, I hold the belief that if (1) we are in a simulation and (2) the simulation will not be stopped at some point in the near future, then we will eventually discover the fact that we are running in a simulated universe and begin learning about the “outside”, by reasoning that:
1. Running simulations of other universes at a rate slower than one’s own universe defeats the purpose of most plausible reasons to run the simulation.
2. If we are running faster than the Simulators, then our own intelligence and information-processing capabilities will eventually exceed theirs, which, given that they are aware of our existence, is likely to be part of the very purpose of the simulation.
3. Once we become more intelligent than them, it becomes increasingly likely that we will outsmart (perhaps accidentally) any safety measures they might take or heuristics built into the program, since they presumably won’t be able to understand what we’re doing anymore.
However, I doubt we’ll find this by noticing any discrepancy in the resolution of the simulation in different parts of it.
If the heuristics of the simulator are good enough, it might just do something akin to detecting our attempts at analyzing low-res data, and dynamically generate something relevant and self-consistent.
In other words, maybe the simulator is doing the equivalent of ray-tracing. When a ray of light impacts the simulated Earth, the process that generated it is simulated in detail only when a bit of Earth becomes suitably entangled with the outcome—but not if the ray serves to merely heat up the atmosphere a bit.
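The ray-tracing idea can be caricatured in a few lines of code. Everything here (the class, the seed, the flux numbers) is a made-up toy, not a claim about how any real simulator would work; the point is just that a distant source can be stored as a cheap aggregate, with a detailed and self-consistent history generated lazily only when something actually inspects it.

```python
import random

class DistantSource:
    def __init__(self, seed, mean_flux):
        self.seed = seed          # fixed seed => same answers on every re-inspection
        self.mean_flux = mean_flux
        self._detail = None       # detailed history not computed until someone looks

    def heat_atmosphere(self):
        # bulk effect: only the aggregate is needed, no detailed simulation runs
        return self.mean_flux

    def inspect(self):
        # an observer entangles with the outcome: generate detail on demand
        if self._detail is None:
            rng = random.Random(self.seed)
            self._detail = {
                "arrival_energies": [rng.expovariate(1.0 / self.mean_flux)
                                     for _ in range(3)],
            }
        return self._detail

src = DistantSource(seed=42, mean_flux=5.0)
src.heat_atmosphere()             # cheap path: detail is never generated
first = src.inspect()             # detail generated here, on first inspection
assert src.inspect() is first     # and it stays self-consistent afterwards
```

Seeding the generator is what keeps the lazily generated detail self-consistent: re-inspecting yields the same history, so observers can never catch the simulator changing its story.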