I sometimes use the notion of natural latents in my own thinking—it’s useful in the same way that the notion of Bayes networks is useful.
A frame I have is that many real-world questions involve hierarchical latents: for example, the vitality of a city is determined by employment, number of companies, migration, free-time activities, and so on, and “free-time activities” is itself a latent (or multiple latents?).
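To make the “levels” picture concrete, here is a minimal sketch of that example as a toy node → parents graph in Python; the node names and the particular two-level split are my own illustrative assumptions, not anything from the post.

```python
# A toy sketch (my own illustration, not the post's formalism): the
# city-vitality example as a node -> parents DAG, just to make
# "nodes at different levels" concrete. All node names are hypothetical.
dag = {
    "city_vitality":        [],                        # high-level latent
    "employment":           ["city_vitality"],         # mid-level latents
    "number_of_companies":  ["city_vitality"],
    "migration":            ["city_vitality"],
    "free_time_activities": ["city_vitality"],
    "concert_attendance":   ["free_time_activities"],  # low-level observations
    "park_visits":          ["free_time_activities"],
}

def level(node: str) -> int:
    """How many steps a node sits below the top-level latent."""
    depth = 0
    while dag[node]:          # walk up the (single) parent chain
        node = dag[node][0]
        depth += 1
    return depth

for name in dag:
    print(f"{name}: level {level(name)}")
```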
I sometimes get value out of assessing whether the topic at hand is a high-level or low-level latent and orienting accordingly. For example: if the topic at hand is “what will the societal response to AI be like?”, it’s by default not a great conversational move to talk about one’s interactions with ChatGPT the other day—those observations are likely too low-level[1] to be informative about the high-level latent(s) under discussion. Conversely, if the topic at hand is low-level, then analyzing low-level observations is very sensible.
(One could probably have derived the same everyday lessons simply from Bayes nets, without the need for natural latent math, but the latter helped me clarify “hold on, what are the nodes of the Bayes net?”)
But admittedly, while this is a fun perspective to think about, I haven’t got that much value out of it so far. This is why I give this post +4 instead of +9 for the review.
[1] And, separately, they come with too low a sample size.