I think it’s important to remind yourself what the latent vs observable variables represent.
The observable variables are, well, the observables: sights, sounds, smells, touches, etc. Meanwhile, the latents are all the concepts we have that aren’t directly observable, which yes includes intangibles like friendship, but also includes high-level objects like chairs and apples, or low-level objects like atoms.
One reason to mention this is that it has implications for your graphs. The causal graph would look more like this:
(As the graphs for HMMs tend to look. Arguably the sideways arrows for the xs are not needed, but I put them in anyway.)
Of course, that’s not to say that you can’t factor the probability distribution as you did, it just seems more accurate to call it something other than a causal graph. Maybe an inferential graph? (I suppose you could call your original graph a causal graph of people’s psychology. But then the hs would only represent people’s estimates of their latent variables, which they would claim could differ from their “actual” latent variables, if e.g. they were mistaken.)
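To make the two readings concrete, here is a minimal sketch of the factorizations I have in mind (my notation, not necessarily the OP’s; I’m assuming a single block of latents h, observations x_1, …, x_T, the optional sideways arrows between the x’s, and the convention p(x_1 | x_0, h) = p(x_1 | h)):

```latex
% Causal/generative reading: the latents generate the observations.
p(h, x_{1:T}) \;=\; p(h)\,\prod_{t=1}^{T} p(x_t \mid x_{t-1}, h)

% Inferential reading: the belief about h is whatever Bayes gives you
% after conditioning on the observations, so the "arrows" run x -> h.
p(h \mid x_{1:T}) \;\propto\; p(h)\,\prod_{t=1}^{T} p(x_t \mid x_{t-1}, h)
```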
Anyway, much more importantly, I think this distinction also answers your question about path-dependence. There’d be lots of path-dependence, and it would not be undesirable. For example:
If you observe that you get put into a simulation, but the simulation otherwise appears realistic, then you have path-dependence because you still know that there is an “outside the simulation”.
If you observe an apple in your house, then you update your estimate of h to contain that apple. If you then leave your house, then x no longer shows the apple, but you keep believing in it, even though you wouldn’t believe it if your original observation of your house had not found an apple. (There’s a toy numerical sketch of this after these examples.)
If you get told that someone is celebrating their birthday tomorrow, then tomorrow you will believe that they are celebrating their birthday, even if you aren’t present there.
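For the apple example specifically, here is a toy Bayesian-update sketch (the probabilities are made up purely for illustration): two observation histories that end with the same current observation, standing outside the house seeing no apple, leave you with different beliefs about the latent “there is an apple in my house”.

```python
# Toy illustration of path-dependence in the apple example.
# All numbers are made up; the point is only that histories matter.

def update(prior, lik_if_apple, lik_if_no_apple):
    """One step of Bayes' rule for the binary latent 'apple in house'."""
    p_apple = prior * lik_if_apple
    p_no_apple = (1 - prior) * lik_if_no_apple
    return p_apple / (p_apple + p_no_apple)

prior = 0.5  # initial credence that there is an apple in the house

# History A: looked inside and saw an apple, then went outside.
belief_a = update(prior, lik_if_apple=0.95, lik_if_no_apple=0.01)   # "I see an apple"
belief_a = update(belief_a, lik_if_apple=1.0, lik_if_no_apple=1.0)  # outside: uninformative

# History B: never looked inside, then went outside.
belief_b = update(prior, lik_if_apple=1.0, lik_if_no_apple=1.0)     # uninformative
belief_b = update(belief_b, lik_if_apple=1.0, lik_if_no_apple=1.0)  # outside: uninformative

print(belief_a)  # ~0.99: still believes in the apple despite not seeing it now
print(belief_b)  # 0.5: same current observation, different belief about h
```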
I think you misunderstood my graph—the way I drew it was intentional, not a mistake. Probably I wasn’t explicit enough about how I was splitting the variables and what I do is somewhat different from what johnswentworth does, so let me explain.
Some latent variables could have causal explanatory power, but I’m focusing on ones that don’t seem to have any such power because they are the ones human values depend on most strongly. For example, anything to do with qualia is not going to have any causal arrows going from it to what we can observe, but nevertheless we make inferences about people’s internal state of mind from what we externally observe of their behavior.
As for my questions about path-dependence, I think your responses don’t address the question I meant to ask. For example, you wrote:

If you observe an apple in your house, then you update your estimate of h to contain that apple. If you then leave your house, then x no longer shows the apple, but you keep believing in it, even though you wouldn’t believe it if your original observation of your house had not found an apple.

This is not path-dependence in the sense I’m talking about, because for me anything that has causal explanatory power goes into the state x_t. This would include whether there actually is an apple in your house or not, even if your current sensory inputs show no evidence of an apple.
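In case it helps, here is a rough sketch of the split I have in mind (the names are mine and purely illustrative): everything with causal explanatory power, including whether the apple is actually in the house, lives in the state x_t, while the latents h are reserved for causally inert quantities that we only ever infer.

```python
# Illustrative sketch of my variable split; the names are hypothetical.
from dataclasses import dataclass

@dataclass
class WorldState:            # x_t: everything that drives the dynamics
    apple_in_house: bool     # causally relevant even while you're outside
    agent_location: str      # e.g. "house" or "outside"

@dataclass
class SensoryInput:          # what the senses actually report at time t
    apple_visible: bool

@dataclass
class InertLatent:           # h: no causal arrows from here into x_t
    state_of_mind: str       # qualia-like facts we infer but never observe

def sense(x: WorldState) -> SensoryInput:
    # The apple only shows up in the sensory input while you're in the house.
    return SensoryInput(apple_visible=x.apple_in_house and x.agent_location == "house")
```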
EDIT: I notice now that there’s a central question here about the extent to which the latent variables that human values are defined over are causally relevant vs causally irrelevant. I assumed that states of mind wouldn’t be causally relevant, but actually they could be causally relevant in the human’s world model even if they wouldn’t be in the “true model”, whatever that means.
I think in this case I still want to say that human values are path-dependent. This is because I care more about whether the values end up being path-dependent in the “true model” than in the human’s world model (which is imperfect), because a sufficiently powerful AGI would pick up the true model and then try to map its states to the latent variables that the human seems to care about. In other words, for it the latent variables could end up being causally irrelevant, even if for the human they aren’t. I’ve edited the post to reflect this.
I’m still not entirely sure how you classify variables as latent vs observed. Could you classify each of these as “latent”, “observed”, or “ambiguous”?
The light patterns that hit the photoreceptors in your eyes
The inferences output by your visual cortex
A person, in the broad sense including e.g. ems
A human, in the narrow sense of a biological being
An apple
A chair
An atom
A lie
A friendship
Wait I guess I’m dumb, this is explained in the OP.
I’ve edited the post after the fact to clarify what I meant, so I don’t think you’re dumb (in the sense that I don’t think you missed something that was there). I was just not clear enough the first time around.
Ah ok, didn’t realize it was edited.
I posted this question less than 20 minutes after the thought occurred to me, so I didn’t understand what was going on well enough & couldn’t express my thoughts properly as a consequence. Your answer helped clarify my thoughts, so thanks for that!
Whether particular variables are latent or not is a property relative to what the “correct model” ends up being. Given our current understanding of physics, I’d classify your examples like this:
The light patterns that hit the photoreceptors in your eyes: Observed
The inferences output by your visual cortex: Ambiguous
A person, in the broad sense including e.g. ems: Latent
A human, in the narrow sense of a biological being: Latent
An apple: Latent
A chair: Latent
An atom: Ambiguous
A lie: Latent
A friendship: Latent
With visual cortex inferences and atoms, I think the distinction is fuzzy enough that you have to specify exactly what you mean.
It’s important to notice that atoms are “latent” in the usual sense in both chemistry and quantum field theory, but they are causally relevant in chemistry while they probably aren’t in quantum field theory, so in the context of my question I’d say atoms are observed in chemistry and latent in QFT.
The realization I had while responding to your answer was that I really care about the model that an AGI would learn, not the models that humans use right now, and whether a particular variable is downstream or upstream of the observed variables (so, whether it is latent or not in the sense I’ve been using the word here) depends on what world model you’re actually using.