Native Americans lost to other humans, not to local predators. European empires lost to other European empires, not to the peoples they colonized. And transhumanists lost to other progressivists — that is, to AI accelerationists — not to traditionalists or conservatives.
...The most dangerous enemies are found among the most powerful agents, not the most ideologically distant ones. Each successive battle is fought among the previous round’s winners, and it never replays the prior distribution of sides.
I noticed this part:
And my first thought was: Hasn’t this been obvious since ~2022 and isn’t “d/acc” the obvious thing to work on, given the moral nihilism and not-practically-stoppable risk-externalizing cowboy bullshit happening among the “e/acc” types?
This is why I’m focused on Satisficing. This is why I’m focused on Global Governance. This is why I’m focused on building up local healthy practical affinity groups. Without something like Kant, an officer in a survival team will not discharge team duties very well. Hence beating the drum of being dutifully decent to the strongest and fastest growing possible teammates around.
Good survival teams MIGHT survive. We probably won’t. But there aren’t many options other than finding the people who want to go as fast as fucking possible towards “come with me if you want to live” projects.
Elon Musk invested in Tesla not because it was an obviously good idea at the time, but simply because any timeline where an electric car company didn’t spring into existence was going to collapse into ruinous Global Warming. IF we die in the Global Warming timeline… THEN just act like the Global Warming timeline will somehow be avoided, and position yourself to be happy in that future. (The other futures are doomed anyway, so don’t bother optimizing for them. Your energies will be wasted no matter what you do, if nanites kill you 18 months from now, so act as if nanites will definitely not kill you in the next 18 months.)
“D/acc” is playing for something that can absorb people’s actual energies, in the event that nothing else (that they can’t have done something about anyway) kills them even faster.
To be clear, I agree with you, but I suspect that to a certain kind of mind, pursuing d/acc and satisficing and governance puts you in the realm of the Luddites and the social conservatives. “There is no good or evil, there is only power/optimisation and those too weak to take it.”
I happen to believe there is another way, but Moloch provides for his own.
Lol! I don’t care what “certain kinds of minds” think of “who I am socially put with” if being put that way by those minds doesn’t conduce to better chances of SURVIVING a plausibly imminent global chaos and gigadeath and GETTING to a Win Condition somehow.
If Luddism is correct, I want to believe in Luddism.
If Luddism is not correct, then I don’t want to believe in Luddism.
(I’m not currently doing a lot of Luddism personally? My vibe lately is roughly heading for Agentic Coding and 3D printing and bottom-up affinity groups using BFT coordination protocols to flock efficiently. More “solar punk” than “Luddism”? But I’d be happy to switch if there are actually good reasons for that!)
Say more about your better way! ❤
Your solar punk initiatives sound very cool! To be clear I also try not to act according to what ‘certain minds’ would think, but I guess I was just trying to address what I saw as a perspective mismatch.
Regarding a better way: some time ago I came to the conclusion that (to use the language of the Moloch blog post) Moloch and Elua are the same. They are, if you like, what happens when the different bits of evolution (variation, selection, replication) get weighted differently. So Elua encourages variation and replication with weak selection, and Moloch encourages selection and variation with weak replication. Note that replication is not just copying. Both in biology and in culture, replication is also about reducing the complexity of software after a major feature push, repairing harm after vicious competition leads to loads of bloodshed and damage, refining the message of a book between editing passes and print runs, and nature’s error correction between generations by regression to the mean. This makes the final branch what is usually called conservatism: strong replication and selection, but weak variation.
Furthermore, which parameters dominate depends on the context the system is operating in. So Moloch is locally dominant when resources are scarce, and vice versa for Elua. For minds which operate using world models, this means that we can play the Moloch-game or the Elua-game based on which state of mind we are in! (This is my best steelman of whatever the hell an “abundance mindset” is supposed to be.)
Of course, whatever strategy we choose will need to actually be effective in reality to work. But in reality we all know people who acted like they were in a zero-sum game when they were not, ruining everything for everyone; we also know people who gave even when there was little to give, and so enabled the collective as a whole to get itself out of the local minimum: more pie for everyone. (Even if you don’t know such people, you are a beneficiary of their actions.) This suggests that there are lenses which are effective at interfacing with reality but do not promote Moloch-thinking or quick-optimisation-thinking.
I have since gone in search of such lenses. The Moloch-lens is easy to find: it’s called the prisoner’s dilemma. It conforms to the ideas we have about short-term gain and hard-nosed geopolitical and interpersonal realism, and does it so well that if you reverse the payoffs people will call you unrealistic and biased. There is, however, also an Elua-lens or Elua-game that we can find. So far my incomplete understanding of its logic is something like:
The world is vast and complicated. Really complicated. Like, OOMs more complicated than any agent in the world (not least because the world also contains that agent’s complexity).
Executing plans in the world often requires taking many actions in sequence before a payoff can be identified (if any).
Thus, the world is exponentially complicated (the size of the action space is a^n, where a is the number of actions you can take each second and n is the number of seconds until payoff, both of which can be very, very large for ambitions the size of the Apollo program).
This means that exploration is super dominant over exploitation in terms of total future payoff. For any measly local optimum you can find, there’s almost certainly a bigger cheese somewhere else.
The problem, of course, is that exploration is hard and time intensive. This is why cooperation is dominant in the Elua-game: cooperation parallelises exploration, leading to a much, much faster time-to-payoff (the first toy sketch after this list illustrates both the size of the action space and this speedup). Especially if you can cooperate so much that you basically become a superorganism, this gives you immensely fast ways to improve your odds. This is the intuition behind why teaming up is good (two heads are better than one, etc.)
The last piece of the puzzle is “why not kill the others and use their materials to build more computronium?” If the power law behind compute investments holds, making a computer system more powerful uses exponentially more resources than the gains it provides. This means that the odds of any system fully understanding the world using a particular frame or world-model are ~0, even if it turns galaxy after galaxy into compute nodes.
OTOH, cooperation and preserving other ways of seeing the world allow you to cover more of the search space (it’s the difference between sampling an image by checking random pixels versus starting from the bottom-left corner and uncovering the image pixel by pixel; the second sketch below makes this concrete). This means that cooperation gives you way more compute “per gram” than the alternative, while also being way less taxing for other reasons. First, you don’t have to spend executive capacity directing your subordinate compute units, avoiding the curse of dimensionality that happens when you have too many nodes to control top-down simultaneously. (Cf. this quote: “Intelligence, Asman explained, is bounded by power laws: each volume of computing requires an exponentially vaster volume of connections.”) Having someone else who can take care of themselves and just give you the relevant facts is actually really good. Second, you avoid turning your own weaknesses into single points of failure.
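Here is a minimal toy sketch of two of those claims, purely my own illustration and not anything from the argument above: the numbers (a = 10 actions per second, n = 3600 seconds to payoff, payoff density p = 0.001) are made up, and the “exploration” is just blind random sampling.

```python
import math
import random

# (1) How fast a^n blows up, for a hypothetical a = 10 actions/second and
#     n = 3600 seconds (one hour) until payoff.
a, n = 10, 3600
print(f"sequences of {n} actions: about 10^{n * math.log10(a):.0f}")
# ~10^3600 possible plans, versus ~10^80 atoms in the observable universe,
# so no single agent gets anywhere near exhausting the space.

# (2) Exploration as blind sampling: a made-up fraction p of plans pay off.
#     A team of k explorers tests k random plans per step; we measure how
#     long until SOMEONE on the team hits a payoff.
p = 1e-3
TRIALS = 2000

def steps_to_first_payoff(k: int) -> float:
    q = 1.0 - (1.0 - p) ** k   # chance at least one of k explorers scores this step
    total = 0
    for _ in range(TRIALS):
        t = 1
        while random.random() >= q:
            t += 1
        total += t
    return total / TRIALS

for k in (1, 4, 16):
    print(f"{k:>2} explorer(s): ~{steps_to_first_payoff(k):6.0f} steps to first payoff")
# Roughly 1000, 250, 63 steps: time-to-payoff falls about as 1/k, which is the
# "cooperation parallelises exploration" point made above.
```

None of this proves anything about the real world, of course; it just makes the parallelisation intuition concrete.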
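And a second toy sketch for the pixel analogy, again my own illustration with made-up sizes (a 100×100 image, 10×10 regions, a budget of 500 pixel-checks):

```python
import random

SIDE, BLOCK, SAMPLES = 100, 10, 500   # made-up image size, region size, sample budget

def regions_touched(pixels):
    """How many 10x10 regions of the image contain at least one checked pixel."""
    return len({(x // BLOCK, y // BLOCK) for x, y in pixels})

# Uncovering from the corner: check pixels in raster order starting from one edge.
raster = [(i % SIDE, i // SIDE) for i in range(SAMPLES)]

# Scattered sampling: check pixels spread at random over the whole image.
scattered = [(random.randrange(SIDE), random.randrange(SIDE)) for _ in range(SAMPLES)]

print("regions touched, raster scan:     ", regions_touched(raster))     # 10 of 100
print("regions touched, random sampling: ", regions_touched(scattered))  # ~99 of 100
# Same budget of 500 checks, but the scattered samples give a rough picture of
# the whole image, while the raster scan only knows one corner: more of the
# search space covered "per gram" of compute.
```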
I’ll stop here, except for a final note that cooperation is not just coordination (dictatorships are coordinated but have very few of the benefits I mention above). I also wrote more about the Moloch-lens and what it does to people in this comment. Hope this helps!
Is that so bad? The rational use of irrational symbols has proven highly effective in the past. Whatever it takes to survive is worth considering.
I don’t think it is “so bad”, mostly because my definition of rationality is wider than most. I’m pretty sure System 1 is a form of knowledge and reasoning as well, and one that we ignore to our detriment. Communicating honestly and effectively is what I try to go for.
2022 was exactly the year when it became obvious, yes. For many people, including myself. But not before that. The pre-GPT era was different, and some of us forget how different we were in it.