the machine can run our physics without insane implementation size.
I’m well out of my depth here, and this is probably a stupid question, but given the standard views of the “known” part of our physics, does that mean that the machine can do operations on arbitrary, fully precise complex numbers in constant time?
The continuous state-space is coarse-grained into discrete cells where the dynamics are approximately Markovian (the theory is currently classical), and the “laws of physics” probably refer to the stochastic matrix that specifies the transition probabilities between the discrete cells (otherwise we could probably deal with infinite precision through limit computability).
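To make the coarse-graining picture concrete, here’s a toy sketch of my own (not from the original discussion): take a simple continuous 1-d map as a stand-in for the continuous dynamics, partition its state space into a handful of cells, and empirically estimate the stochastic matrix of cell-to-cell transition probabilities. The logistic map and the number of cells are arbitrary illustrative choices.

```python
import numpy as np

# Toy continuous dynamics: the logistic map on [0, 1].
# (A stand-in for continuous state-space dynamics; the actual theory uses
# Hamiltonian flows, this is just the simplest thing to coarse-grain.)
def step(x):
    return 4.0 * x * (1.0 - x)

n_cells = 8          # coarse-grain [0, 1] into 8 equal cells
n_samples = 200_000  # samples used to estimate transition probabilities

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=n_samples)
y = step(x)

# Cell index of each point before and after one time step.
i = np.minimum((x * n_cells).astype(int), n_cells - 1)
j = np.minimum((y * n_cells).astype(int), n_cells - 1)

# Empirical stochastic matrix: T[a, b] ~ P(next cell = b | current cell = a).
counts = np.zeros((n_cells, n_cells))
np.add.at(counts, (i, j), 1.0)
T = counts / counts.sum(axis=1, keepdims=True)

# Each row sums to 1. This matrix plays the role of the coarse-grained
# "laws of physics" in the sense above; it is only approximately Markovian,
# since the cell label forgets where inside the cell the point actually was.
print(np.round(T, 3))
print("row sums:", T.sum(axis=1))
```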
Doesn’t such a discretization run into the fermion doubling problem?
The current theory is based on classical Hamiltonian mechanics, but I think the theorems apply whenever you have a Markovian coarse-graining. Fermion doubling is a problem for spacetime discretization in the quantum case, so the coarse-graining might need to be different there. (E.g. coarse-grain the entire Hilbert space, which might have locality issues, but that’s probably not load-bearing for algorithmic thermodynamics.)
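For anyone who hasn’t seen fermion doubling: the standard textbook symptom is that naively discretizing the derivative in the Dirac operator on a lattice gives a dispersion proportional to sin(ka)/a, which vanishes at the edge of the Brillouin zone as well as at k = 0, so the lattice theory has spurious extra low-energy fermion modes. A quick numerical sketch of that dispersion (just the textbook calculation, nothing specific to the post above):

```python
import numpy as np

a = 1.0                                        # lattice spacing
k = np.linspace(-np.pi / a, np.pi / a, 2001)   # momenta in the Brillouin zone

E_continuum = np.abs(k)                  # massless continuum dispersion |k|
E_naive = np.abs(np.sin(k * a)) / a      # naive lattice discretization

# The continuum dispersion vanishes only at k = 0, but the naive lattice
# dispersion also vanishes at k = +/- pi/a: the "doubler" modes. In d
# dimensions the naive discretization produces 2^d such modes.
zeros = k[np.isclose(E_naive, 0.0, atol=1e-3)]
print("near-zeros of the lattice dispersion at k ~", np.round(zeros, 3))
```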
On an outside view, quantum mechanics reduces to classical mechanics (which admits a Markovian coarse-graining) in the correspondence limit, so there must be some coarse-graining that works.
In practice, we only ever measure things to finite precision. To predict these observations, all we need is to be able to do these operations to any arbitrary specified precision. Runtime is not a consideration here; while time-constrained notions of entropy can also be useful, their theory becomes messier (e.g., the 2nd law won’t hold in its current form).
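“To any arbitrary specified precision” can be cashed out computationally: the machine never holds an infinitely precise complex number, it takes a finite precision parameter and returns an answer good to that many digits. A minimal sketch using mpmath (my choice of library and example operation, not anything from the original discussion):

```python
from mpmath import mp, mpc, exp

def exp_to_precision(z_re, z_im, digits):
    """Return exp(z) for a complex z, accurate to `digits` significant digits.

    No infinite-precision object is ever stored: the caller names a finite
    precision, and the computation uses just enough working precision to meet it.
    """
    mp.dps = digits + 5   # a few guard digits beyond what was asked for
    z = mpc(z_re, z_im)
    return exp(z)

# Any finite precision a measurement could call for is available on demand;
# asking for more digits just costs more runtime, and runtime is explicitly
# not the resource being tracked here.
print(exp_to_precision(1.5, 2.25, 20))
print(exp_to_precision(1.5, 2.25, 60))
```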
Good question; it’s the right sort of question to ask here, and I don’t know the answer. It gets straight into some interesting follow-up questions, e.g. about the ability to physically isolate the machine from noise, which might be conceptually load-bearing for things like working with arbitrary-precision quantities.