I noticed this part:
Native Americans lost to other humans, not to local predators. European empires lost to other European empires, not to the peoples they colonized. And transhumanists lost to other progressivists — that is, to AI accelerationists — not to traditionalists or conservatives.
...The most dangerous enemies are found among the most powerful agents, not the most ideologically distant ones. Each successive battle is fought among the previous round’s winners, and it never replays the prior distribution of sides.
And my first thought was: Hasn’t this been obvious since ~2022, and isn’t “d/acc” the obvious thing to work on, given the moral nihilism and not-practically-stoppable risk-externalizing cowboy bullshit happening among the “e/acc” types?
This is why I’m focused on Satisficing. This is why I’m focused on Global Governance. This is why I’m focused on building up local healthy practical affinity groups. Without something like Kant, an officer in a survival team will not discharge team duties very well. Hence beating the drum of being dutifully decent to the strongest and fastest growing possible teammates around.
Good survival teams MIGHT survive. We probably won’t. But there aren’t many other options than to find the people who want to go as fast as fucking possible towards “come with me if you want to live” projects.
Elon Musk invested in Tesla not because it was an obviously good idea at the time, but simply because any timeline where an electric car company didn’t spring into existence was going to collapse into ruinous Global Warming. IF in the Global Warming timeline we die… THEN just act as if the Global Warming timeline will somehow be avoided, and position yourself to be happy in that future. (The other futures are doomed anyway, so don’t bother optimizing for them. Your energies will be wasted no matter what you do if nanites kill you 18 months from now, so act as if nanites will definitely not kill you in the next 18 months.)
“D/acc” is playing for something that can absorb people’s actual energies, in the event that nothing else (that they can’t have done something about anyway) kills them even faster.
Lol! I don’t care what “certain kinds of minds” think of “who I am socially put with” if being put that way by those minds doesn’t conduce to better chances of SURVIVING a plausibly imminent global chaos and gigadeath and GETTING to a Win Condition somehow.
If Luddism is correct, I want to believe in Luddism.
If Luddism is not correct, then I don’t want to believe in Luddism.
(I’m not currently doing a lot of Luddism personally? My vibe lately is roughly heading for Agentic Coding and 3D printing and bottom-up affinity groups using BFT coordination protocols to flock efficiently. More “solar punk” than “Luddism”? But I’d be happy to switch if there are actually good reasons for that!)
Say more about your better way! ❤