The original problem, as stated, is “valid”: a mind with a “grue”-like prior would make the grue prediction, while normal human minds (with a “green”-like prior, mostly as a result of our evolution around colors) would make the “green” prediction. If we want a more neutral prior, we go with “minimum message length” (MML), and ask “what are colors?”. Grue and green are words in a dictionary, so they do not count for the math—only Turing machines do. It’s simpler to write a Turing machine that puts out “light at XXXhz, light at XXXhz” than one that has to take time T into account. Therefore, the green prior is more in line with an MML-prior mind. We take MML priors as most compatible with human-like reasoning.
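To make the program-length comparison concrete, here is a minimal sketch, where string length stands in for message length (a crude MML proxy, not a real Kolmogorov measure) and the frequencies and switch time are illustrative placeholders:

```python
# Two toy hypotheses about the light an emerald emits at time t, written
# as expressions in a hypothetical language whose primitive is frequency.
GREEN = "560"                            # THz: always green
GRUE  = "560 if t < 1_000_000 else 640"  # must also encode the switch time T

def run(program: str, t: int) -> int:
    """Evaluate a hypothesis-program at observation time t."""
    return eval(program, {"t": t})

# Both hypotheses agree on every observation made before T...
assert run(GREEN, 0) == run(GRUE, 0) == 560
# ...but the grue hypothesis pays for the extra constant T in its length.
assert len(GREEN) < len(GRUE)
```

The point of the toy is only that any grue-style program must carry the switch time T somewhere in its description, so it can never be shorter than the corresponding constant-color program.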
This seems problematic because it implies that humans would be perfectly fine with accepting grue over blue if they didn’t know about the nature of light.
Fortunately, the reason this helps is deeper than counting the number of hertz. When you want to determine the complexity of a term, you have to specify what language to use to write the term. The reason grue seems complicated to us evolved animals is because it has higher complexity in the language of our observations—the language of what neurons we feel light up when we look at the rock.
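The language-dependence can be shown the same way. A sketch, under the assumption that perceived complexity is expression length in each observer's primitive vocabulary (all names here are illustrative):

```python
# In a language whose primitives are green/blue, "grue" needs a definition;
# in a language whose primitives are grue/bleen, it is "green" that does.
# Expression length in each vocabulary stands in for perceived complexity.
human_lang = {
    "green": "green",
    "grue":  "green if t < T else blue",   # compound in our vocabulary
}
grue_lang = {
    "grue":  "grue",
    "green": "grue if t < T else bleen",   # compound in theirs
}

# Each observer finds the *other* predicate the complicated one.
assert len(human_lang["green"]) < len(human_lang["grue"])
assert len(grue_lang["grue"]) < len(grue_lang["green"])
```

This is the symmetry the neuronal-structure point rests on: relative to observations, complexity flips with the vocabulary, and only the Turing-machine measure breaks the tie.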
So does that mean that if an entity had a neuronal structure that intuited grue and bleen, it would be justified in treating the hypotheses that way? I’d be willing to bite that bullet, I think.
It means that the entity’s evolved instincts would be out of whack with the MML, so if that entity also got to the point where it invented Turing machines, it would see the flaw in its reasoning. This is no different from realizing that Maxwell’s equations, though they look more complicated than “anger” to a human, are actually simpler. Sometimes the intuition is wrong. In the blue/grue case, human intuition happens not to be wrong, but the hypothetical entity’s is—and both humans and the entity, after understanding math and computer science, would agree that humans are wrong about anger and hypothetical entities are wrong about grue. Why is that a problem?
This seems problematic because it implies that humans would be perfectly fine with accepting grue over blue if they didn’t know about the nature of light.
Right, they would, if for weird historical reasons they also thought of “grue” and “bleen” as reasonable linguistic primitives. So the human scientists would be surprised when the next emerald turned out to be bleen rather than grue, and they’d be able to observe that the shift happened at time T, and thus observe that green is a natural property. So this isn’t really much of a problem.
That’s not completely satisfying, in that one wants an induction scheme that assigns priors independent of linguistic accident. If one tries to make a hypothesis’s simplicity depend on language, then one quickly gets very complicated hypotheses being labeled as simple (e.g. “God”).
If grue-people expect the green emeralds to spontaneously change into blue emeralds, why shouldn’t they also expect a simple green-detecting Turing machine to spontaneously change into a blue-detecting Turing machine, and vice versa? Yes, a Turing machine is a mathematical construction; it does not spontaneously change. But they, using “grue” as a basic concept, would expect everything that even remotely depends on colours to change at a certain time, including physical approximations of Turing machines.
Well, it is if you use hz. However, I prefer hz’. hz’ are just like hz until time T, but then different in the appropriate way after time T.