First: Yes, this post seems to essentially be about thermodynamics and either way it is salient to immediately bring up symmetry. So agree on that point.
Symmetry, thermodynamics, information theory and ontology happen to be topics I take interest in (as stated in my LW bio).
Now, James, for your approach, I would like to understand better what you are saying here, and what you are actually claiming. Could you dumb this down or make it clearer? What scope/context do you intend for this approach? How far do you take it? And how much have you thought about it?
The tricky part when parsing John’s post is understanding what he means by “insensitive functions.” He doesn’t define it anywhere, and I think that’s because he was pointing at an idea but didn’t yet have a good definition for it. However, the example he gives—conservation of energy—occurs because the laws of physics are insensitive to some kind of symmetry, in this particular case time-translation. I’ve been thinking a lot about the relationship between symmetries, physics, and information theory this past year or two, and you can see some of my progress here and here. To me, it felt kind of natural to jump to “insensitive functions” being a sort of stochastic symmetry in the data.
I haven’t fleshed out exactly what that means. For exact symmetries, we can break up the data into a symmetry-invariant piece and the symmetry factor
x = (x̂, g).
However, it feels like in real data, there is not such a clean separation. It’s closer to something like this: we could write x in “big-endian” form, so that we get finer and finer details about x as we read off more bits. My guess is there is an “elbow” in the importance of the bits, similar to how Chebyshev series have an elbow in coefficient magnitude that chebfun identifies and chops off for quicker calculations:
(Source: Chopping a Chebyshev Series)
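To make the elbow concrete, here is a toy numpy sketch (my own simplified chop, not chebfun’s actual standardChop algorithm, which is more careful): fit a smooth function with an over-long Chebyshev series, then drop every coefficient below a relative tolerance.

```python
import numpy as np

# Fit a smooth function with an (over-long) Chebyshev series; the
# coefficients decay until they hit the rounding floor -- the "elbow".
def f(x):
    return np.exp(x) * np.sin(5 * x)

cheb = np.polynomial.chebyshev.Chebyshev.interpolate(f, deg=60)
coeffs = np.abs(cheb.coef)

# Crude chop: drop everything below a relative tolerance.
tol = 1e-13 * coeffs.max()
keep = np.nonzero(coeffs > tol)[0]
chop = keep[-1] + 1  # index just past the last "important" coefficient
chopped = cheb.truncate(chop)

xs = np.linspace(-1, 1, 1001)
err = np.max(np.abs(f(xs) - chopped(xs)))
print(f"kept {chop} of {len(coeffs)} coefficients, max error {err:.1e}")
```

The truncated series still reproduces f to very high accuracy; everything past the elbow was rounding noise rather than signal.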
In fact, as I’m writing this, I just realized that your autoencoder model could just be a discrete cosine (Chebyshev series) transform. It won’t be the best autoencoder to exist, but it is what JPEG uses. Anyway, I think the “arm”—or the bits to the left of the elbow—seems to form a natural ontology. The bits to the left of it seem to be doing something to help describe x, but not the bits to the right.
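To make the JPEG connection concrete, a DCT “autoencoder” really is just an orthonormal change of basis plus a chop. A minimal numpy sketch (my own toy illustration, on an 8×8 block, keeping only the k largest-magnitude coefficients):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis (the transform JPEG builds on).
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    M = np.sqrt(2 / n) * np.cos(np.pi * k * (j + 0.5) / n)
    M[0] /= np.sqrt(2)
    return M

def encode(block, k):
    # "Bottleneck": keep only the k largest-magnitude DCT coefficients.
    c = D @ block @ D.T
    thresh = np.sort(np.abs(c), axis=None)[-k]
    return np.where(np.abs(c) >= thresh, c, 0.0)

def decode(c):
    # Invert the orthonormal transform.
    return D.T @ c @ D

n = 8
D = dct_matrix(n)
rng = np.random.default_rng(0)
x = rng.standard_normal((n, n)).cumsum(0).cumsum(1)  # smooth-ish "image"

for k in (4, 16, 64):
    err = np.linalg.norm(x - decode(encode(x, k))) / np.linalg.norm(x)
    print(f"keep {k:2d}/64 coefficients -> relative error {err:.3f}")
```

Because the transform is orthonormal, keeping more of the largest coefficients can only shrink the reconstruction error (Parseval), and a smooth block concentrates its energy in the low-frequency “arm.”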
How does this relate to symmetries? Well, an exact symmetry is cleanly separable, which means its bits could be appended after every other bit—it’s far to the right of the elbow. Chopping at the elbow does satisfy our idea of “ontology” in the exact-symmetry case. Then all we need to do is create a model that chops off those uninteresting bits. The parameter β in the information bottleneck pretty much specifies a chopping point. The first term, I(Y;g(X)), says to keep important bits, while the second term, I(g(X);X), says to cut out unimportant bits, and β specifies at what point bits become too unimportant to leave in. You can slowly increase β until things start catastrophically failing (e.g. validation performance drops), at which point you’ve probably identified the elbow.
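For a fully worked example of β as a chopping knob, here is a sketch of the classic discrete information bottleneck iteration (Tishby et al.’s self-consistent equations) on a toy joint distribution of my own construction. One caveat: the sign convention below is Tishby’s—minimize I(X;T) − β·I(T;Y)—so here small β chops everything and large β keeps the distinctions that predict Y, the reverse direction of the β sweep described above.

```python
import numpy as np

# Toy joint: 8 x-values in 4 pairs, each pair sharing the same p(y|x).
p_y_given_x = np.repeat([[0.9, 0.1], [0.6, 0.4],
                         [0.4, 0.6], [0.1, 0.9]], 2, axis=0)
p_x = np.full(8, 1 / 8)

def mutual_info(q):
    # I(X;T) in nats for encoder q(t|x) with uniform p(x).
    q_t = p_x @ q
    qc = np.clip(q, 1e-300, None)  # avoid log(0); 0*log(0) -> 0
    return float(np.sum(p_x[:, None] * q * np.log(qc / q_t)))

def ib(beta, n_t=4, iters=300):
    # Iterate the IB self-consistent equations (Tishby et al. 1999),
    # minimizing I(X;T) - beta * I(T;Y).
    q = np.full((8, n_t), 0.1) + 0.6 * np.repeat(np.eye(n_t), 2, axis=0)
    for _ in range(iters):
        q_t = np.clip(p_x @ q, 1e-12, None)
        q_y_t = (p_x[:, None] * q).T @ p_y_given_x + 1e-12
        q_y_t /= q_y_t.sum(1, keepdims=True)
        # D_KL[p(y|x) || q(y|t)] for every (x, t) pair
        kl = np.sum(p_y_given_x[:, None, :]
                    * np.log(p_y_given_x[:, None, :] / q_y_t[None]), axis=2)
        logits = np.log(q_t)[None, :] - beta * kl
        q = np.exp(logits - logits.max(1, keepdims=True))
        q /= q.sum(1, keepdims=True)
    return q

for beta in (0.1, 1000.0):
    print(f"beta={beta}: I(T;X) = {mutual_info(ib(beta)):.3f} nats")
```

At β = 0.1 the encoder collapses to a single cluster (all bits chopped); at β = 1000 it keeps all four distinguishable conditionals, so I(T;X) approaches log 4 nats. Somewhere in between sits the elbow.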
No, the reason why we should have insensitivity is not quite that. As an example, suppose I have a long random bitstring of fair coinflips X_1, X_2, etc. Now say that the “laws of physics” randomly flip some number of coins—but for some reason can only flip an even number. This gives us variables Y_1, Y_2, … at the second timestep. That is,
P(Xs, Ys) ∝ 2^(−sum(xor(Xs, Ys))) if xorsum(xor(Xs, Ys)) == 0, else 0,
where xor is bitwise xor, sum is the sum of a string’s bits (as an integer, e.g. sum(110) = 2), xorsum is the sum of a string’s bits mod 2, and it’s only proportional because we threw away the outcomes that changed an odd number of bits.
There’s a symmetry for the distribution of Xs: we can NOT (flip) any bit without affecting the distribution. Therefore it must be the uniform distribution—which we already knew, of course.
Likewise, the marginal distribution P(Ys) is symmetric under flipping any bit, since P(Xs, singleflip(Ys)) = P(singleflip(Xs), Ys) and so after summing over X we’ll just get a reindexing of our sum. So the Ys are also given by a uniform distribution. We could repeat our ‘laws of physics’, getting a sequence of bitstrings. The marginal distribution over the bitstrings at any time is still uniform.
The xorsum of the bitstring at any time is conserved over time. Yet it’s certainly not ‘insensitive’! If you look at some time, you cannot just use it as a latent for the distribution at that time, because you fail both mediation and redundancy.
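Both claims—uniform marginals and exact conservation of the xorsum—check out in a quick simulation (a numpy sketch of my own; flips are drawn with probability 1/3 per bit, which matches the 2^(−#flips) weighting, and odd-parity flip patterns are rejected):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 8, 200_000

# X: uniform random bitstrings.
X = rng.integers(0, 2, size=(trials, n))

# Flip patterns: each bit flips with probability 1/3 (so the pattern's
# weight is proportional to 2^-#flips), rejecting odd-parity patterns.
raw = (rng.random((3 * trials, n)) < 1 / 3).astype(int)
masks = raw[raw.sum(1) % 2 == 0][:trials]
Y = X ^ masks

# The xorsum (parity) is conserved exactly...
assert np.array_equal(X.sum(1) % 2, Y.sum(1) % 2)
# ...while every marginal bit of Y remains a fair coin.
print(Y.mean(0).round(3))
```

Parity is conserved deterministically because parity(Y) = parity(X) xor parity(mask), and the mask’s parity is forced to be even; uniformity of Y follows because xor-ing a uniform string with an independent mask stays uniform.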
I still think it’s likely that you can connect symmetries to redundantly encoded information, but it can’t be done this way.
I don’t understand your objection. What do you mean here?
“If you look at some time, you cannot just use [the xorsum] as a latent for the distribution at that time, because you fail both mediation and redundancy.”
Which distribution? The distribution of Ys is uniform. You do not need any information to perfectly recover this distribution at a particular timestep. Do you mean the joint distribution P(Xs, Ys)? If the bitstring is very large, then
P(Ys | Xs) ≈ Uniform({ys : xorsum(ys) = xorsum(Xs)}),
where the distribution is uniform over the set of bitstrings that share parity (xorsum) with Xs, and zero for the other half. So parity here seems to be doing exactly the same thing that made energy a natural latent.
Parity is not a latent, because the information is not redundantly expressed/not insensitive.
Furthermore, after conditioning on the parity there will now be some mutual information between any one of the bits and all the others—in fact the maximum possible value of 1 bit (since knowing all the others plus the parity lets you figure out the last). Thus we have the worst-case KL-error for mediation.
In both of these, I’m talking about the distribution of the bitstring at a certain time.
I’m not sure what your limit is supposed to mean—Y here is not always taking the bits at the same timestep, right? But then, why divide by n?
Why divide by n? This is the relative entropy rate. And actually, I was wrong, it does not go to zero. I’m still not sure I understand your objection though. It feels like it applies equally well to why energy should not be a natural latent?
Consider a set of gas particles trapped in a box with given positions and velocities. There is a little bit of randomness because the box vibrates due to its temperature. You can mostly predict where the gas particles will be from one time step to another. The energy, though, does not help with this prediction. It completely fails at mediating between time steps or reducing the complexity of the particle positions/velocities. Even if you know a couple of the particle positions/velocities, the energy isn’t going to help you find the rest.
The mediation condition is that when you condition on the latent, the mutual information between any one variable and the joint distribution of all other variables is low. In the case of the energy and temperature, once you know the energy and temperature, all the variables are now independent, and so you get no mutual information. However, with the parity, the rest of the variables let you figure out the last, so we fail mediation.
For redundancy, the energy and temperature are for the most part determined by any (n-1)-variable subset, because they are averages. This isn’t true of the parity—the last bit being 50/50 means you still have total uncertainty over the parity.
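The parity half of this is easy to check exhaustively with a few lines of numpy—conditioning on any n−1 of n uniform bits leaves the parity at exactly 50/50:

```python
import numpy as np

n = 10
states = np.arange(2 ** n)
bits = (states[:, None] >> np.arange(n)) & 1  # all n-bit strings, one per row
parity = bits.sum(1) % 2

# Condition on the top n-1 bits (any other subset works the same by
# symmetry): within each group, the parity is exactly 50/50.
for prefix in range(2 ** (n - 1)):
    group = parity[(states >> 1) == prefix]
    assert group.mean() == 0.5
print("n-1 bits give zero information about the parity")
```

Each prefix group contains exactly two strings differing in the remaining bit, and those two strings always have opposite parity—which is the redundancy failure in miniature.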
You clearly have some idea of what “mediation” and “redundancy” mean for these particular scenarios and why they matter. I still have no clue what you mean by those words, why I should care about these properties, or how they relate to the notion of insensitivity.
Ah, I was talking about the conditions for natural latents, the main research program of the post author. See this post for a good math intro containing those definitions.
I now have the definitions, but I still don’t see the relation to insensitivity. Yes, natural latents are natural ontologies, but natural ontologies are not necessarily natural latents.
At the very least, the stochastic redund condition feels like a pretty minimal version of what ‘insensitivity’ could mean. And the parity is pretty much maximally sensitive—if you’re trying to reduce your uncertainty about what the parity is, learning (n-1) of the bits doesn’t even help you until you learn the last one! I doubt a good definition of “insensitivity” would call the parity insensitive.
What do you mean by “the stochastic redund condition”? Here’s what I feel like you’re doing: you have some unformalized intuitions. It seems to be the case that ‘insensitive’ stuff matches your intuition about redundancy for uncontrived examples. You then went and contrived an example where it didn’t match your intuition.
If I were in your situation, I would conclude, “my intuition is missing something, let me try to formalize this and see where I went wrong.”
I’m still really confused by your opening salvo:
“No, the reason why we should have insensitivity is not quite that.”
What do you mean??? What is “that”, what is “the reason why we should have insensitivity”? I think the reason we should have insensitivity is so the oracle can make predictions.
Also, I’m not going to continue responding. I do not think you have anything here. I think you are just confused, and you have not done the work to figure out what you yourself mean.
So, I feel like you just got deeper into the weeds here, thinking aloud. This seems interesting. I am trying to parse, but there is not enough formal context to make it make sense to me.
My main question was anyway, what w/could you use it for? What is the scope/context?
(Making some light banter) Maybe you are American, so I need to “debate” you to make it more obvious. “James, this is all a nice theoretical concept, but it seems useless practically. In its current form, I don’t see how it could be used for anything important.”
Haha, I did initially start with trying to be more explanatory, but that ended after a few sentences. Where I think this could immediately improve a lot of models is by replacing the VAEs everyone is using in diffusion models with information bottleneck autoencoders. In short: VAEs are viruses. In long: VAEs got popular because they work decently well, but they are not theoretically correct. Their paper gestures at a theoretical justification, but it settles for less than is optimal. They do work better than vanilla autoencoders, because they “splat out” encodings, which lets you interpolate between datapoints smoothly, and this is why everyone uses them today. If you ask most people using them, they will tell you it’s “industry standard” and “the right way to do things, because it is industry standard.” An information bottleneck autoencoder also ends up “splatting out” encodings, but has the correct theoretical backing. My expectation is that you will automatically get things like finer details and better instruction following (“the table is on the apple”), because bottleneck encoders have more pressure to conserve encoding bits for such details.
There are probably a few other places this would be useful—for example, in LLM autoregression, you should try to minimize the mutual information between the embeddings and the previous tokens—but I have yet to do any experiments in other places. This is because estimating the mutual information is hard and makes training more fragile.
In terms of philosophy, well, I don’t particularly care for the subject by itself. Philosophers too often assign muddy meanings to words and wonder why they’re confused ten propositions in. My goal when interacting with such sophistry is usually to define the words and figure out what that entails. I think philosophers just do not have the mathematical training to put into words what they mean, and even with that training it’s hard to do and will often be wrong. For example, I do not think the information bottleneck is a proper definition of “ontology”; it is closer to “describing an ontology.” It does not say why something is the way it is, but it helps you figure out what it is. It’s a way to find natural ontologies, but it does not say anything about how they came to be.
Thank you—just knowing you are coming strictly from an ML perspective already helps a lot. This was not obvious to me, as I have approached these topics more from a physics lens.
// So, addressing your implementation ideas: this approach is, practically speaking, pretty neat! I lack the formal ML background to properly evaluate it, but it looks promising.
Now, I will try to succinctly decipher the theory behind your core idea, and you let me know how I do.
You propose compressing data into a form that preserves the core identity. It gives us something practical we can work with.
The elbow has variables that break symmetry to the left and variables that hold symmetry to the right. This is an important distinction between noise and signal that I think many miss.
This is all context dependent? Context defines the curve, the β parameter.
// How did I do?
Note: I should say at this point, understanding fundamental reality is my lifelong quest (constantly ignored in order to live out my little side quests) and I care about this topic. This quest is what ontology means in the classical, philosophical sense. When I speak about ontology in an AI context, I usually mean formal representations of reality, not induced ones. You seem to use the AI context but mean induced ontologies.
The ‘ontology as insensitivity’ concept described by johnswentworth is interesting, and basically follows from statistical mechanics. But it is perhaps missing the inherent symmetry aspect, or something replacing it, as a fundamental factor. You can’t remove all symmetry. Everything with identity exists within a symmetry. This is non-obvious and partly my own assertion, but looking at modern group theory, this is indeed how mathematics defines objects, so I am supported within that framework.
If we take wentworth’s idea and your elbow analogy, and try to define an object within a formal ontology—within my framework, where all objects exist within symmetries—then we get:
Concept = Total Reality / Symmetries (The Tail)
The “Elbow” doesn’t mark where reality ends and noise begins. It marks the resolution limit of your current context.
To the left of the elbow: Information that matters (Differences).
To the right of the elbow: Information that doesn’t matter (Equivalences/Symmetries).
Your example was a hand-written digit “7”. The Tail is the symmetries. You can slant the digit, thicken the line, or shift it left. These are the symmetries. As long as the variation stays in the “tail” of the curve, the identity “7” is preserved. (Note that the identity is relative and context dependent).
The Elbow: This is the breaking point. If you bend the top horizontal line too much, it becomes a “1”. You have left the chosen symmetry group of “7” and entered the chosen symmetry group of “1”.
If so, I would be genuinely curious to hear your ideas here. This might be an actually powerful concept if it holds up and you can formalize it properly. I assume you are an engineer, not a scientist? I think this idea deserves some deep thinking.
This is mostly correct, though I think there are phase changes making some β more natural than others.
I don’t have any more thoughts on this at present, and I probably won’t think too much on it in the future, as it isn’t super interesting to me.