Thanks for getting into the details here. I’m brand new to this field of mathematics and this conversation is helping me get a much better handle on what’s going on.
[Disclaimer: I am relying very heavily on ChatGPT to work my way through this stuff. I’m mostly using it to learn the math, sort through research papers and check my writing for errors. (Ironically, the reason my writings here contain mistakes is because I’m mostly writing it myself rather than letting the AI take over.) I just want to be upfront about this; I get the impression that you’re using LLM-assisted research much less—if at all.]
I don’t disagree with your blockquote rewrite in any substantive way, at least as it applies to the special case of biological neural networks.
You didn’t use thermodynamic entropy anywhere. Personally, I come from a physics background, so my understanding of signal processing—especially in the context of physical systems—uses a lot of thermodynamic metaphors. Consequently, I end up thinking in mixed metaphors, which is bad. To fix this problem, I’m going to stop using the term “entropy” in this thread. (Perhaps I should stop using the word “chaotic” too.)
(Is there actually a proper term for the thing that increases as you move from subcritical to supercritical? I keep finding that I need ugly circumlocutions for want of one.)
Universally? No. But if I were to rewrite this post I would use “gain”, since it works fine.
but isn’t Lyapunov exponent much the same thing as you’re calling “gain”?…
Yes.
While “gain” can indeed be handwaved into the Lyapunov exponent, jhana isn’t just about gain. It’s also about noise, which is an orthogonal axis.
What I think is going on is that there are two important factors: noise and gain. Jhana increases gain but decreases noise. In this way a jhanic state is more “ordered” in the lower-noise sense. Jhana is closer to critical because it has higher gain; it is more sensitive in the dynamical-systems sense that small perturbations can get amplified into large-scale patterns.
Consider a leftover warhead from WWII. There are two things that could make it explode. One is if the bomb is sensitive (higher gain). The other is if the whole room is shaking (higher noise).
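To make the two knobs concrete, here is a minimal toy sketch (entirely my own illustration; the map and all the numbers are invented). One measurement isolates gain (how much a tiny kick near rest gets amplified) and the other isolates noise (how big the ambient shaking is):

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbation_growth(gain, steps=10, delta=1e-8):
    """Gain axis: how much a tiny kick near rest grows over `steps`
    iterations of the noiseless toy map x' = gain * tanh(x)."""
    x = delta
    for _ in range(steps):
        x = gain * np.tanh(x)
    return x / delta

def jitter(gain, noise, steps=5_000):
    """Noise axis: typical fluctuation size under ambient shaking,
    x' = gain * tanh(x) + noise * xi."""
    x, xs = 0.0, []
    for _ in range(steps):
        x = gain * np.tanh(x) + noise * rng.normal()
        xs.append(x)
    return np.std(xs)

print("insensitive bomb:", perturbation_growth(0.5))  # kick dies out (~0.001x)
print("sensitive bomb:  ", perturbation_growth(1.5))  # kick amplified (~57x)
print("quiet room:      ", jitter(0.5, 0.01))         # small fluctuations
print("shaking room:    ", jitter(0.5, 0.50))         # large fluctuations
```

The point of the toy is that the two knobs move independently: you can quiet the room without dulling the bomb, and vice versa.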
(2) things about Complex Systems…never seem to give actual explicit definitions of the things they are talking about. Probably I have just not found the right things to read.
The original paper that led me down this rabbit hole in the first place used “DFA and the fE/I ratio”.
DFA (Detrended Fluctuation Analysis) quantifies long-range temporal correlations.
fE/I is a proxy used to measure the ratio between excitation and inhibition.
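For anyone who wants to poke at DFA directly, here is a bare-bones sketch of the standard first-order algorithm (my own simplified implementation, not the paper’s pipeline): integrate the mean-removed signal, linearly detrend it in windows of several sizes, and read the exponent off a log-log fit of fluctuation size against window size.

```python
import numpy as np

def dfa_exponent(signal, window_sizes=None):
    """Bare-bones first-order Detrended Fluctuation Analysis.

    Returns the scaling exponent alpha:
      alpha ~ 0.5 -> uncorrelated (white) noise
      alpha ~ 1.0 -> long-range temporal correlations (1/f-like)
    """
    x = np.asarray(signal, dtype=float)
    y = np.cumsum(x - x.mean())                  # integrated profile
    n = len(y)
    if window_sizes is None:
        window_sizes = np.unique(
            np.logspace(np.log10(4), np.log10(n // 4), 20).astype(int))

    fluctuations = []
    for w in window_sizes:
        n_win = n // w
        segs = y[:n_win * w].reshape(n_win, w)
        t = np.arange(w)
        # Linearly detrend each window; F(w) is the RMS of the residuals.
        resid_sq = []
        for seg in segs:
            coeffs = np.polyfit(t, seg, 1)
            resid_sq.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(resid_sq)))

    # The DFA exponent is the slope of log F(w) against log w.
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

# Sanity checks: white noise -> ~0.5; its cumulative sum -> ~1.5.
white = np.random.default_rng(0).normal(size=8192)
print(dfa_exponent(white))
print(dfa_exponent(np.cumsum(white)))
```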
PS: This is the first time you’ve commented on my posts where I don’t want to crawl into a cave and die. My writing is improving! 🎉 I still need to do a re-write of this article that credits you at the end, but at least I won’t have to throw the entire thing away.
Me too, mostly. I took an undergraduate course on dynamical systems many years ago but I’ve forgotten most of what was in it and in any case it seems like this complex-systems stuff uses the language of dynamical systems but not always in ways I can see how to connect with the mathematics I kinda-sorta know.
I get the impression that you’re using LLM-assisted research much less—if at all
I make almost no use of LLMs. (I am not at all claiming that this is a good thing, just validating your impression :-).)
jhana isn’t just about gain. It’s also about noise
If we’re thinking about the brain as a dynamical system, how is this noise being represented? Maybe as arising from inputs coming in from outside. If jhana reduces sensitivity to those (which might fit with “pronounced self-reported sensory fading”, as described in the article) then that could reduce the overall amount of noise in the system.
But I still can’t quite make sense of this. (1) I haven’t read the article closely but it doesn’t look like it attributes their observations about jhana to reduced effects of noise. (2) The article specifically claims that jhana is associated with a lower max Lyapunov exponent—that’s the basis for its claim of “reduced chaoticity”. Doesn’t that mean, in your terms, that the article is claiming that jhana puts the brain in a state where the “gain” is lower, not higher?
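(For concreteness, this is the sort of quantity I understand a max Lyapunov exponent to be. A toy illustration of my own on the logistic map, nothing to do with EEG pipelines: the exponent is the long-run average log stretching factor along the trajectory.)

```python
import numpy as np

def logistic_lyapunov(r, steps=100_000, burn_in=1_000, x0=0.4):
    """Largest Lyapunov exponent of the logistic map x' = r*x*(1-x),
    estimated as the long-run average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(steps):
        total += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / steps

# lambda < 0: small perturbations decay (periodic orbit)
# lambda > 0: small perturbations grow exponentially (chaos)
for r in (3.2, 3.5, 4.0):
    print(f"r = {r}: lambda ~ {logistic_lyapunov(r):+.3f}")
```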
The original paper that led me down this rabbit hole
Thanks—I’ll take a look. At first glance it seems to be very specifically about brains; what I’d really like to find is something that explains the general principles in terms that in principle I could apply to domains other than brains, and with enough precision and explicitness that I can see how to do mathematics to it.
The DFA exponent and so-called “fE/I” are both properties, if I am understanding correctly, of arbitrary time series (and the hope is that when the time series is derived from a dynamical system it tells you something interesting about the structure of that system). That’s good, in that they are nice and general and well defined and I can understand what they are.

But if we’re talking about properties of a dynamical system rather than of some set of signals captured from it, I’d like to understand what properties are in question. Handwavily I understand that we’re looking at something along the lines of “coefficient in an exponential dependence” where <0 means things decay and >0 means things explode and interesting stuff might happen at 0. (And presumably that exponential dependence arises from something like a differential equation where again we’re looking at something like the eigenvalues in the matrix you get by linearizing the d.e.)

But I don’t get the impression that people talking about subcriticality and supercriticality are actually working with concrete precisely-specified mathematical systems for which they could define those terms precisely; it seems (perhaps unfairly) more as if they are defining “supercritical” to mean something like “if we go looking for instabilities or exponential divergences, we can find things that look like that” and “subcritical” to mean the reverse, and it’s all kinda phenomenological, looking at the outputs of the system rather than at the system itself.
Which may very well be the best one can do with a brain, but it’s all a bit frustrating when trying to understand exactly what’s going on.
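To be explicit about what I would count as a precisely-specified system, here is a toy of my own construction in which “subcritical”, “critical” and “supercritical” have exact meanings (spectral radius of the linear part below, at, or above 1), and in which you can watch the qualitative behaviour change:

```python
import numpy as np

rng = np.random.default_rng(1)

def spectral_radius(W):
    return max(abs(np.linalg.eigvals(W)))

def simulate(W, steps=500, noise=1e-3):
    """Noise-driven linear network x' = W x + noise * xi.
    Here sub/critical/supercritical literally means
    spectral_radius(W) < 1, = 1, > 1."""
    x = np.zeros(W.shape[0])
    for _ in range(steps):
        x = W @ x + noise * rng.normal(size=len(x))
    return np.linalg.norm(x)

n = 50
W0 = rng.normal(size=(n, n)) / np.sqrt(n)   # random network, radius ~ 1
for rho in (0.8, 1.0, 1.2):                 # sub-, near-, super-critical
    W = W0 * (rho / spectral_radius(W0))
    print(f"rho = {rho}: |x| after 500 noisy steps ~ {simulate(W):.3g}")
```

For this linear toy the max Lyapunov exponent of the noiseless system is exactly log(spectral radius), i.e. the “<0 decays, >0 explodes, interesting stuff at 0” coefficient from above.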
This is the first time you’ve commented on my posts where I don’t want to crawl into a cave and die.
Ouch!
I was going to say “I hope that indicates only that you feel very bad when someone points out issues with what you’ve written, rather than that I am incredibly tactless” … but maybe it’s actually better overall for one person to be very tactless than for one person to be painfully sensitive to criticism. Anyway, to whatever extent your past pain is the result of my tactlessness, I’m sorry.
[Meta: This comment is messy because I think that spewing out a large number of words in an attempt to gesture at what I’m thinking right at this moment is probably easier for you to understand than if I write in my usual concise style.]
I don’t have much to say on your mathematical analysis, but I have some contextual data from meditation practice that I predict could help situate your analysis.
jhana isn’t just about gain. It’s also about noise
I originally claimed that:
Deep jhana reduces chaoticity and moves dynamics toward criticality.
That was clumsy of me to write. It’s ambiguous at best and wrong at worst, depending on how I define terms. (And I defined terms—including “jhana”—right at the start of this article, so that oversight is on me.) By the logic of my post, deep samatha jhana ought to move the dynamics away from criticality, toward deeper subcriticality, whereas deep insight (jhanas?) is what moves the dynamics toward criticality.
I will try to set the record straight here. If I’m understanding you correctly, you seem to be taking seriously the idea that jhana and open awareness are opposites, where jhana decreases the Lyapunov exponent and open awareness increases it. Maybe I said or implied this, but to consider them entirely separate is, from a meditation perspective (not considering the math at all), too lossy a simplification. To switch into Buddhist lingo for a moment, meditation always has both a samatha component and an insight component. Deep samatha jhana usually contains an insight element, and getting to insight usually requires a samatha element. If you want to do Zen nondual open-awareness meditation, you have to bootstrap yourself there through a phase of stabilized attention. This seems to imply that there’s a common factor moving the mind toward both ends of this meditative spectrum simultaneously, which means that what’s going on can’t be a single variable like the Lyapunov exponent. There have to be at least two important dimensions that we care about. One dial is the depth of your meditation. The other dial is a spectrum from samatha to insight.
It is possible to do deep jhana without moving your brain toward criticality. This is considered a mistake, from an insight perspective, if that’s all you do, but it can and does sometimes happen.
Here’s my current theory as of writing this comment. There are two important dimensions: noise and gain (Lyapunov exponent). Your brain can only handle a certain amount of combined noise + gain without running into problems. All meditation lowers noise. Some meditation (samatha) just leaves it at that, and may not bring you closer to criticality. Open awareness meditation uses this low noise to increase gain. (A very common, effective meditation technique is to start with samatha and then transition into open awareness.)
| state | noise | gain |
| --- | --- | --- |
| normative human | high | nominal |
| samatha jhana | low | IDK |
| open awareness (jhana?) | low | high |
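To put hypothetical numbers on the “combined noise + gain budget” idea (a linear toy of my own; every number below is invented, not measured): for x' = gain*x + noise*xi the stationary fluctuation size is noise / sqrt(1 - gain^2), so it depends on both knobs at once, and the same budget can be spent on high noise with modest gain or on low noise with near-critical gain.

```python
import numpy as np

def stationary_std(gain, noise):
    """Stationary fluctuation size of the linear toy x' = gain*x + noise*xi."""
    return noise / np.sqrt(1 - gain**2)

BUDGET = 0.1  # hypothetical limit on what the brain can tolerate

for gain, noise, label in [
    (0.60, 0.070, "normative human: high noise, nominal gain"),
    (0.60, 0.008, "samatha: noise lowered, gain unchanged"),
    (0.995, 0.008, "open awareness: low noise spent on near-critical gain"),
    (0.995, 0.070, "high noise AND near-critical gain"),
]:
    s = stationary_std(gain, noise)
    verdict = "within budget" if s <= BUDGET else "OVER budget"
    print(f"{label}: std = {s:.3f} ({verdict})")
```

On this picture, the last row is why you can’t just crank up gain from a normal noisy baseline: lowering noise first is what makes near-critical gain affordable.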
[I hope this doesn’t come across as wishy-washy. Even without the math, explaining how to do insight meditation is notoriously prone to miscommunications.]
If we’re thinking about the brain as a dynamical system, how is this noise being represented? Maybe as arising from inputs coming in from outside
Samatha jhana mostly ignores inputs from the outside. Open awareness states do allow sensory inputs to reach consciousness, but they don’t result in destabilization of attention.
| state | effect of sensory inputs on consciousness | effect of sensory inputs on motor action |
| --- | --- | --- |
| normative human | nominal | yes |
| samatha jhana | low | no |
| open awareness | high | no |
Much noise is internally generated. If you’re talking to yourself in your head, then that’s noise, even if you do it while physically motionless.
…it’s all kinda phenomenological, looking at the outputs of the system rather than at the system itself.
Which may very well be the best one can do with a brain, but it’s all a bit frustrating when trying to understand exactly what’s going on.
I believe you are correctly describing the current state of the science.
About tactfulness: When I see your name in the comments it means I messed something up. You’re perfectly tactful. :)
I hope I will return to this when I have time to read it properly and think about it properly, but for now I’ll just drop in two things at the meta-level: (1) I don’t know how comprehensible I’d have found something more in your usual concise style, but the above certainly seems nice and clear so it seems like you probably made a good choice. (2) I’m glad to hear that I’m perfectly tactful but now I’m worried about a different issue, namely that maybe I never say anything unless I have something mean^H^H^H^Hcritical to say, which I’m aware is the exact opposite of what generations of parents have been teaching their children to do :-). (I definitely do lean in that direction, and I’m somewhat prepared to defend it in that offering hopefully-informative criticism is arguably more useful than offering compliments, but it’s still probably suboptimal.)