I’m still not entirely sure how you classify variables as latent vs observed. Could you classify these as “latent”, “observed”, or “ambiguous”?
The light patterns that hit the photoreceptors in your eyes
The inferences output by your visual cortex
A person, in the broad sense including e.g. ems
A human, in the narrow sense of a biological being
An apple
A chair
An atom
A lie
A friendship
Wait I guess I’m dumb, this is explained in the OP.
I’ve edited the post after the fact to clarify what I meant, so I don’t think you’re dumb (in the sense that I don’t think you missed something that was there). I was just not clear enough the first time around.
Ah ok, didn’t realize it was edited.
I posted this question less than 20 minutes after the thought occurred to me, so I didn’t yet understand what was going on well enough to express my thoughts properly. Your answer helped clarify my thinking, so thanks for that!
Whether a particular variable is latent or not is a property relative to what the “correct model” ends up being. Given our current understanding of physics, I’d classify your examples like this:
The light patterns that hit the photoreceptors in your eyes: Observed
The inferences output by your visual cortex: Ambiguous
A person, in the broad sense including e.g. ems: Latent
A human, in the narrow sense of a biological being: Latent
An apple: Latent
A chair: Latent
An atom: Ambiguous
A lie: Latent
A friendship: Latent
With visual cortex inferences and atoms, I think the distinction is fuzzy enough that you have to specify exactly what you mean.
It’s important to notice that atoms are “latent” in the usual sense in both chemistry and quantum field theory, but they are causally relevant in chemistry while they probably aren’t in quantum field theory. So in the context of my question, I’d say atoms are observed in chemistry and latent in QFT.
The realization I had while responding to your answer was that I really care about the model an AGI would learn, not the models humans use right now. Whether a particular variable is downstream or upstream of the observed variables (and so whether it is latent in the sense I’ve been using the word here) depends on what the world model you’re using actually is.
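To make the “latent relative to the model” point concrete, here is a minimal toy sketch (my own illustration, not anything proposed in the thread): a causal model is a dict mapping each variable to its parents, plus a set of variables designated as observed. A variable then counts as “latent” in the sense used above iff it is not itself observed but sits upstream of some observed variable. The variable names (`atom`, `spectral_line`, `photoreceptor`, `light`) and both example graphs are hypothetical.

```python
def ancestors(graph, var):
    """All variables upstream of `var`: its parents, their parents, and so on."""
    seen = set()
    stack = list(graph.get(var, ()))
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph.get(v, ()))
    return seen

def classify(graph, observed, var):
    """Classify `var` relative to a model (`graph`) and a choice of observed variables."""
    if var in observed:
        return "observed"
    if any(var in ancestors(graph, o) for o in observed):
        return "latent"
    return "neither"

# "Chemistry-level" model: atoms are read off directly, so they count as observed.
chem = {"spectral_line": ["atom"], "atom": []}
print(classify(chem, {"atom", "spectral_line"}, "atom"))  # observed

# Finer-grained model: only photoreceptor activations are observed;
# the atom is now an unobserved upstream cause of them.
fine = {"photoreceptor": ["light"], "light": ["atom"], "atom": []}
print(classify(fine, {"photoreceptor"}, "atom"))  # latent
```

The same variable flips from “observed” to “latent” purely because the model and its observation set changed, which is the sense in which the classification depends on the world model rather than on the variable itself.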