Lol! I don’t care what “certain kinds of minds” think of “who I am socially put with” if being put that way by those minds doesn’t conduce to better chances of SURVIVING a plausibly imminent global chaos and gigadeath and GETTING to a Win Condition somehow.
If Luddism is correct, I want to believe in Luddism.
If Luddism is not correct, then I don’t want to believe in Luddism.
(I’m not currently doing a lot of Luddism personally? My vibe lately is roughly heading for Agentic Coding and 3D printing and bottom-up affinity groups using BFT coordination protocols to flock efficiently. More “solar punk” than “Luddism”? But I’d be happy to switch if there are actually good reasons for that!)
Say more about your better way! ❤
Your solar punk initiatives sound very cool! To be clear, I also try not to act according to what ‘certain minds’ would think, but I guess I was just trying to address what I saw as a perspective mismatch.
Regarding a better way: some time ago I came to the conclusion that (to use the language of the Moloch blog post) Moloch and Elua are the same. They are, if you like, what happens when the different bits of evolution (variation, selection, replication) get weighted differently. So Elua encourages variation and replication with weak selection, and Moloch encourages selection and variation with weak replication. Note that replication is not just copying. Both in biology and in culture, replication is also about reducing the complexity of software after a major feature push, repairing harm after vicious competition leads to loads of bloodshed and damage, refining the message of a book between editing passes and print runs, and nature’s error correction between generations by regressing to the mean. This makes the final branch what is usually called conservatism: strong replication and selection, but weak variation.
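If it helps to see the “same process, different weights” idea concretely, here is a minimal toy sketch (entirely my own illustration, with made-up knobs and a trivial bit-counting fitness, not anything from the blog post): one variation/selection/replication loop, with the three weights cranked differently for each regime.

```python
import random

def evolve(var_rate, sel_pressure, rep_fidelity, n=60, length=40, steps=200, seed=0):
    """Toy evolution with three tunable weights:
    var_rate     -- per-bit mutation probability (variation)
    sel_pressure -- fraction of the population culled each step (selection)
    rep_fidelity -- probability a copied bit is error-corrected toward the
                    survivors' consensus (replication as more than copying)
    Fitness is just the number of 1-bits. Returns (mean fitness, distinct genomes)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]
    for _ in range(steps):
        pop.sort(key=sum, reverse=True)
        survivors = pop[:max(2, int(n * (1 - sel_pressure)))]        # selection
        consensus = [round(sum(col) / len(survivors)) for col in zip(*survivors)]
        pop = []
        while len(pop) < n:                                          # replication
            child = list(rng.choice(survivors))
            for i in range(length):
                if rng.random() < var_rate:                          # variation
                    child[i] ^= 1
                if rng.random() < rep_fidelity:                      # regression to the mean
                    child[i] = consensus[i]
            pop.append(child)
    return sum(map(sum, pop)) / n, len({tuple(g) for g in pop})

# Elua: strong variation and replication, weak selection.
print("Elua        ", evolve(var_rate=0.05, sel_pressure=0.1, rep_fidelity=0.3))
# Moloch: strong selection and variation, weak replication.
print("Moloch      ", evolve(var_rate=0.05, sel_pressure=0.8, rep_fidelity=0.0))
# Conservatism: strong replication and selection, weak variation.
print("Conservatism", evolve(var_rate=0.001, sel_pressure=0.8, rep_fidelity=0.3))
```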
Furthermore, which parameters are dominant depends on the context the system is operating in. So Moloch is locally dominant when resources are scarce, and vice versa for Elua. For minds which operate using world models, this means that we can play the Moloch-game or the Elua-game based on which state of mind we are in! (This is my best steelman of whatever the hell an “abundance mindset” is supposed to be)
Of course, whatever strategy we choose will need to actually be effective in reality to work. But in reality we all know people who acted like they were in a zero-sum game when they were not, ruining everything for everyone; we also know people who gave even when there was little to give, and so enabled the collective as a whole to get itself out of a local minimum: more pie for everyone. (Even if you don’t know such people, you are a beneficiary of their actions.) This suggests that there are lenses which are effective at interfacing with reality but do not promote Moloch-thinking or quick-optimisation-thinking.
I have since gone in search of such lenses. The Moloch-lens is easy to find: it’s called the prisoner’s dilemma. It conforms to the ideas we have about short-term gain and hard-nosed geopolitical and interpersonal realism, and does it so well that if you reverse the payouts people will call you unrealistic and biased. There is, however, also an Elua-lens or Elua-game that we can find. So far my incomplete understanding of its logic is something like:
The world is vast and complicated. Really complicated. Like, OOMs more complicated than any agent in the world (not least because the world also contains that agent’s complexity).
Executing plans in the world often requires taking many actions in sequence before a payoff can be identified (if any).
Thus, the world is exponentially complicated (the size of the action space is a^n, where a is the number of actions you can take each second and n is the number of seconds until payoff, both of which can be very, very large for ambitions the size of the Apollo program).
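As a quick sense check with deliberately tiny made-up numbers (nothing Apollo-specific):

```python
from math import log10

a, n = 2, 3600   # a mere 2 possible actions per second, for one hour
print(f"|action space| = a^n ~= 10^{n * log10(a):.0f}")
# ~10^1084, versus roughly 10^80 atoms in the observable universe.
```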
This means that exploration is super dominant over exploitation in terms of total future payoff. For any measly local optimum you can find, there’s almost certainly a bigger cheese to find somewhere else.
The problem, of course, is that exploration is hard and time-intensive. This is why cooperation is dominant in the Elua-game: cooperation parallelises exploration, leading to a much, much faster time-to-payoff. If you can cooperate so much that you basically become a superorganism, you get immensely fast ways to improve your odds. This is the intuition behind why teaming up is good (two heads are better than one, etc.).
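A minimal sketch of the parallelisation claim (my own toy framing: a needle-in-a-haystack search where cooperators split the space instead of duplicating each other’s work):

```python
import random

def time_to_payoff(n_explorers, k=10_000, trials=200, seed=0):
    """Average steps until some explorer finds the single payoff cell,
    assuming cooperators partition the search space between them."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        target = rng.randrange(k)
        order = list(range(k))
        rng.shuffle(order)
        # n explorers check the shuffled cells round-robin, in parallel.
        total += order.index(target) // n_explorers + 1
    return total / trials

for team in (1, 2, 10, 100):
    print(f"{team:>3} explorer(s): ~{time_to_payoff(team):,.0f} steps to payoff")
```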
The last piece of the puzzle is “why not kill the others and use their materials to build more computronium?” If the power law behind compute investments holds, making a computer system more powerful uses exponentially more resources than the gains it provides. This means that the odds of any system fully understanding the world using a particular frame or world-model are ~0, even if it turns galaxy after galaxy into compute nodes.
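Toy numbers for intuition (the log relationship below is my own stand-in for whatever the true power law is):

```python
# Stand-in assumption: capability ~ log2(compute), i.e. each extra
# point of capability doubles the resource bill.
for capability in (10, 40, 80, 120):
    print(f"capability {capability:>3} -> compute ~ 2^{capability} = {2**capability:.1e} units")
# Going from 80 to 120 costs ~10^12 times more matter than everything so far,
# while buying only 50% more capability.
```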
OTOH, cooperation and preserving other ways of seeing the world allow you to cover more of the search space (it’s the difference between sampling an image by checking random pixels versus starting from the bottom-left corner and uncovering the image pixel by pixel). This means that cooperation gives you way more compute “per gram” than the alternative, while also being way less taxing for other reasons. First, you don’t have to spend executive capacity directing your subordinate compute units, avoiding the curse of dimensionality that happens when you have too many nodes to control top-down simultaneously. (Cf. this quote: “Intelligence, Asman explained, is bounded by power laws: each volume of computing requires an exponentially vaster volume of connections.”) Having someone else who can take care of themselves and just give you the relevant facts is actually really good. Second, you avoid turning your own weaknesses into single points of failure.
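Here is the pixel analogy as a runnable toy (my own construction; “coverage” here is just how many coarse regions of the image contain at least one sample):

```python
import random

def regions_touched(samples, block=10):
    """Count the 10x10 coarse regions of a 100x100 image containing a sample."""
    return len({(x // block, y // block) for x, y in samples})

rng = random.Random(0)
budget, n = 400, 100
random_px = [(rng.randrange(n), rng.randrange(n)) for _ in range(budget)]
raster_px = [(i % n, i // n) for i in range(budget)]   # bottom-left corner first

print("random sampling touches", regions_touched(random_px), "of 100 regions")
print("raster scanning touches", regions_touched(raster_px), "of 100 regions")
```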
I’ll stop here, except for a final note that cooperation is not just coordination (dictatorships are coordinated but have very few of the benefits I mention above). I also wrote more about the Moloch-lens and what it does to people in this comment. Hope this helps!
The way you frame Elua and Moloch is, roughly, to see them as “Darwin’s Babble & Prune”, I think. Fun!
If you haven’t already, you should read about Mitchell & Hofstadter’s CopyCat (link to Python reimplementation) and specifically attend to the concept of “the parallel terraced scan”! It is a microscopic version of this!
This means that the odds of any system fully understanding the world using a particular frame or world-model are ~0, even if it turns galaxy after galaxy into compute nodes.
...HOWEVER, I believe there are ways of structuring one’s own mind to simply “do this” with some non-trivial degree of efficacy? It involves making your identity VERY small (as a sort of timesharing kernel/VM/datacenter/memeplex manager?) and creating habits around running ~”personas as roles in contexts as choices”, and applying “consider the opposite” a lot, and thinking about covering algorithms.
OTOH, two skillfully “Elua-minded people” aren’t actually that different from each other, most likely?
They are in the same basic world, and basically trying to react to it in full generality… They should therefore… converge? Probably? Right?!?
And so “predictably” you get: (1) they cooperate weirdly well in close proximity but also (2) they recognize they are relatively “scarce human resources” and should often have buffer between each other so they helpfully optimize different parts of the world rather than stepping on each other’s toes.
Past posts I wrote aimed at this mental state include the one on Internal Information Cascades (which gestures in a very abstract and theoretical way towards the broad desirability of the meta-skill to those without it) and the one on Panology (which imagines a world with a non-trivial number of people engaged in pedagogy and curriculum design to produce more such people, cooperating using the cultural forms of an academic field of study).
The Moloch-lens is easy to find: it’s called the prisoner’s dilemma.
Do you know about Carse’s Finite And Infinite Games or Suber’s Nomic? If not, consider checking them out!
Also, on the subject of finding new lenses and choosing between them, Chapman has a lot.
Thank you for the very long and detailed response.
Copycat seems like a great thing to read up on and I will.
Indeed you can kind of “fracture” your own mind to make space for lots of worldviews (actors do this), but there is a minimal amount of cohesion you need; otherwise you are not really a singular person/agent anymore.
Agree with your framing of the two Elua-people, and also the babble and prune framing.
I’ll read your posts. I have also looked into infinite games and Nomic. (I also used to design TTRPGs, so the game framing is very familiar to me.)
So Elua encourages variation and replication with weak selection, and Moloch encourages selection and variation with weak replication.
Nice. New thought for me. Thank you.
I sort of rotate the basis vectors a bit. I sometimes think of evolution as a dance between creativity (variation) and death (natural selection). In that spirit, I’m hearing you say that Elua encourages thriving via creativity, whereas Moloch encourages survival via death.
…Moloch is locally dominant when resources are scarce, and vice versa for Elua. […] (This is my best steelman of whatever the hell an “abundance mindset” is supposed to be)
I came to a similar steelman, again with slightly shifted basis vectors (basically the same ones I mentioned above, I think). It comes out in terms of explore/exploit in my view. If you’re in a resource-poor situation, it’s high-risk to explore, and you want to use whatever strategies you have on hand (with some exceptions, like if death is basically certain, at which point your strategies have already failed and you just want to increase variation a whole lot in a final survival bid). But if resources are abundant, then your long-term survival is best served by basically preparing for forms of death that haven’t yet arrived, i.e. expanding capacities via exploration. These seem to be two different modes (tight management of scarce resources vs. creative play to increase capacity in a high-resource domain).
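If I were to cash that out as a toy policy (entirely my own sketch, with made-up thresholds):

```python
def explore_rate(resources, runway_needed=10.0):
    """Toy policy: explore when the buffer is fat, exploit known strategies
    when it is thin -- except near-certain death, where wild variation is
    the only remaining bet."""
    runway = resources / runway_needed     # rough number of 'turns' survivable
    if runway < 0.1:
        return 0.95                        # final survival bid: maximise variation
    if runway < 1.0:
        return 0.05                        # scarce: use what you know works
    return min(0.6, 0.1 * runway)          # abundant: creative play, capped

for r in (0.5, 5, 20, 100):
    print(f"resources={r:>5} -> explore with p={explore_rate(r):.2f}")
```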
If you’re actually in a high resource context but you can’t perceive it because your perceptions are contracted around a kind of emergency survival strategy, it’s helpful to “adopt an abundance mindset” so you can notice your context and correctly switch strategies.
If there’s some degree of self-fulfilling prophecy to the resources available (e.g. being confident you’ll get funds causes people to believe in your cause more and give you more funds), it’s also maybe helpful to assume you’re in an abundant context.
But if you are in fact in a scarce context and it’s not self-fulfilling, you very much want to budget and use what you know works.
Basically agreed, with the extra point that sometimes you can play your way “out” of a high-resource context too by exploiting too hard (e.g. killing the goose that lays the golden eggs to get an extra meal). So attuning to what part of reality you are actually in is important.