This is a question for the people working on more foundational research. My underlying objective is loose and long-term: something like “figure out a good basis for describing collective intelligence and agency, and then improve that basis so that we can incorporate AI into our collective systems”. I therefore believe that the question of how a collective agent is formed is very important. I also find it very important to figure out the properties of good systems, such as institutions, in information-theoretic terms.
There’s a lot of foundational ground to cover here and I’m worried that I’m stepping in the wrong direction, so to keep myself grounded I try to talk to academics and researchers in the fields I’m trying to study and unify (compositionally). I’m getting local reward signals about whether things make sense, and I also post these things on LW and Substack, yet I find the signal to be quite sparse in various ways, or at least uncorrelated with what I would consider progress for myself.
The classic good ol’ advice is to backchain from the end states you want, to run experiments, and to think about the real world and whether things are true there. I’ve done this in the past, and now it feels like I’m at a point where I kind of need to take a leap of faith on what I’ve done so far. I wanted to know if there are tips from people who have taken this sort of leap of faith in the past. How long did you find your exploration to be useful? If you did it again, what would you do differently?
The actual question is something like: do I go down the route of discretizing collective intelligence through something like Koopman operators, renormalization groups, or something similar? How do they relate to things like active inference and game theory? What about spectral graph theory? Could all collective intelligence actually be expressed as graphs? There are at least six relatively deep mathematical frameworks I’ve found that you can express these systems through, and I’ve got no clue which one to dive deeper into.
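To make the spectral-graph option a bit more concrete, here is a minimal sketch (the four-agent interaction matrix and all of its weights are invented purely for illustration) of the kind of object that framing gives you: an interaction graph whose Laplacian spectrum acts as a crude proxy for how tightly coupled the collective is.

```python
import numpy as np

# Toy example only: a 4-agent collective, with symmetric edge weights
# standing in for interaction strength. All numbers are made up.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Graph Laplacian L = D - A of the undirected interaction graph.
D = np.diag(A.sum(axis=1))
L = D - A

# eigh returns eigenvalues in ascending order for a symmetric matrix.
# The second-smallest eigenvalue (algebraic connectivity / Fiedler value)
# is a standard measure of how well connected the group is, and its
# eigenvector hints at the most natural split into sub-collectives.
eigvals, eigvecs = np.linalg.eigh(L)
print("Laplacian spectrum:", np.round(eigvals, 3))
print("Algebraic connectivity:", round(eigvals[1], 3))
print("Fiedler vector:", np.round(eigvecs[:, 1], 3))
```

This is only one of the candidate framings, of course; the Koopman and renormalization routes would start from dynamics rather than structure, which is exactly the kind of choice I can’t resolve from the armchair.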
I know part of the direction I want to go in (expressed as a fancy schmancy thing here: https://eq-network.org/roadmap/ ), but the foundation stage is a predecessor of the rest, and it seems quite important to get right. It’s just really difficult, so if anyone has any thoughts I would be very happy to hear them.
I think I spent over a decade regularly thinking about theoretical anatomy. I feel like I had a fundamental breakthrough when I was applying an existing concept to a clear existing problem and something surprising happened. I probably was not the first person to have that surprising experience, but my interest in theoretical anatomy did allow me to generalize and push the concept further.
I can’t see how backchaining is going to work if you are doing research that needs critical new insights. Do you believe that critical new insights are necessary or do you think just throwing one of those concepts you listed at the problem has a good chance of bringing you where you want to go? I remember Thomas Kuhn making the point that fields where people think about practical outcomes and then try to backchain from there tend to be less scientifically productive than fields where people focus on the research challenges that come up through engagement with experiments.
I think Elizabeth’s “Truthseeking is the ground in which other principles grow” is good. Currently, you try to “ground” yourself by social proof instead of by empirical reality. David Chapman’s idea that “to do good work, one must get up-close and personal with the phenomenon” is a good orientation to adopt.
I appreciate the answer, though I’m not sure I find it that useful, at least at first glance. I think I probably explained myself relatively badly in the original question if that is how you interpret some of what I wrote, so let’s see if I make more sense by explaining myself further.
I can’t see how backchaining is going to work if you are doing research that needs critical new insights.
I would totally agree with you that backchaining doesn’t work and that was what I was trying to express.
Do you believe that critical new insights are necessary or do you think just throwing one of those concepts you listed at the problem has a good chance of bringing you where you want to go?
I feel like I wouldn’t know whether those concepts generate solutions to the underlying problems I want to solve unless I spent maybe six months or so just going in that specific direction.
I think Elizabeth’s “Truthseeking is the ground in which other principles grow” is good. Currently, you try to “ground” yourself by social proof instead of by empirical reality. David Chapman’s idea that “to do good work, one must get up-close and personal with the phenomenon” is a good orientation to adopt.
I’m mainly applying an experimental bent to my work, and I’m generating questions, concerns, and new ways of thinking. It is rather that:
They will not be implemented or impactful unless I find good ways to communicate them and the computational benefits of applying them. Hence it seems useful to ground them in the work of existing fields?
It seems to me that the lack of feedback from trying something without guardrails is bad? Yes, truth is primary, but it is also true that you can learn a lot from other people, and if you cannot explain something to different people then maybe you don’t understand it?
How do I know I’m not crazy? You could say “ground yourself in empirical fact”, but what if part of what I’m doing is foundational research that will take a year or more to show its usefulness?
There are times when all you need to do is synthesize established knowledge that’s distributed among people who don’t talk with each other. I think my latest post on hyaluronan falls into that category. There’s value in pulling research together to build a gears model and adding new labels, but there’s no new fundamental insight in it. That’s different for other work I’m doing that’s actually about pursuing insights.
It might very well be that the key problem you want to solve is amenable to just synthesizing the existing knowledge of other people. It might also be that it actually requires new fundamental insights. I don’t know which category it falls into. You probably don’t know either, but you understand the problems you are dealing with, so you can make that judgement better than I can.
You can learn a lot from other people when you talk with them, but you also pick up their blindspots and conventions by doing so. A lot of startup founders are quite young, with relatively little knowledge of how the industries they want to disrupt work. They have naive ideas, and while most startups fail, some of those naive ideas that the established industries wouldn’t try turn out to be correct.
When it comes to finding new fundamental insights, you are looking for ideas that are true and that people haven’t already found, in a similar way to a startup that succeeds because it has a thesis that the established players didn’t already pursue.
One of Elizabeth’s sections is titled “Stick to projects small enough for you to comprehend them”. I think the backchaining approach gives you problems that are too big to comprehend. If you pick problems small enough to comprehend, ones that are exposed to feedback from reality, you can learn new things about reality. Some of those things are minor, but if you are lucky there’s a major fundamental insight among them.
If I were in your situation, one possible project I would pursue might be: “What happens when I apply my ideas about agents to internal-family-systems-style agents inside myself, and when I use them to speak with or coach other people about their internal family systems?”