Here’s a single thought as I mull this over: a perfected but maximally abstracted algorithm for science is inherently dual-use. If you are committed to actually doing science, then it’s taken for granted that you are applying it to common structures in reality, which are initially intuited via broadly shared phenomenological patterns in the minds of observers. But to the extent the algorithm produces an equation, it can be worked from either end to create a solution for the other end. So knowing exactly how scientific epistemics work, in principle, offers a path to engineering phenomenological patterns that suggest an incorrect ontology. This is what all stage magic already is for vulgar epistemology (and con artistry and “dark epistemics” generally); it is already something people know how to do and profit from knowing how to do. It is, in a sense, something that has already always been happening. There is a Red Queen race between scientific epistemics and dark epistemics, and in this context LessWrong seems to be trying to build some sort of zero-trust version of scientific epistemics without following any of the principles necessary for that. Much of the forum’s practices are deeply rooted in cultural norms around trust, around professionalism and politeness and obfuscating things when they reduce coordination rather than when they reflect danger. This is a path to coordination, not truth. Coordination overwhelmingly incentivizes the specific forms of lossy compression that are maximally emotionally agreeable to whoever you want to coordinate with, and lossy compression is anti-scientific. Someone who had both a perfected scientific algorithm and a map derived from it of which lossy compressions maximize coordination would basically become a singleton in direct proportion to their ability to output information over the relevant surface area.
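To make the “worked from either end” point concrete, here is a minimal toy sketch, assuming a crude Bayesian picture of observers; the hypotheses, cues, and probabilities are all invented for illustration. The same likelihood table that lets an honest observer infer a cause from observed cues also lets an adversary search for the cue pattern that makes observers most confident in a decoy cause:

```python
# Toy model of the "worked from either end" point.
# Hypothetical ontologies and observable cues; all numbers are invented.

from itertools import product

HYPOTHESES = ["true_cause", "decoy_cause"]
PRIOR = {"true_cause": 0.7, "decoy_cause": 0.3}

# P(cue observed | hypothesis), cues assumed independent.
LIKELIHOOD = {
    "true_cause":  {"cue_a": 0.8, "cue_b": 0.6, "cue_c": 0.1},
    "decoy_cause": {"cue_a": 0.7, "cue_b": 0.2, "cue_c": 0.9},
}

def posterior(observed: dict[str, bool]) -> dict[str, float]:
    """Forward use: given which cues were observed, infer the hypotheses' probabilities."""
    scores = {}
    for h in HYPOTHESES:
        p = PRIOR[h]
        for cue, seen in observed.items():
            p_cue = LIKELIHOOD[h][cue]
            p *= p_cue if seen else (1 - p_cue)
        scores[h] = p
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

def stage_evidence(target: str) -> dict[str, bool]:
    """Reverse use: search for the cue pattern that makes observers most confident in `target`."""
    cues = list(LIKELIHOOD[target])
    best, best_conf = None, -1.0
    for pattern in product([True, False], repeat=len(cues)):
        observed = dict(zip(cues, pattern))
        conf = posterior(observed)[target]
        if conf > best_conf:
            best, best_conf = observed, conf
    return best

if __name__ == "__main__":
    honest = {"cue_a": True, "cue_b": True, "cue_c": False}
    print("forward (science):", posterior(honest))
    staged = stage_evidence("decoy_cause")
    print("reverse (staged): ", staged, posterior(staged))
```

Nothing about the reverse direction requires new machinery; it just runs the forward model inside a search over producible observations.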
This is the central thing. There are also a bunch of minor annoying things that at times engender perhaps disproportionate suspicion. And there’s the fact that I keep going to you guys’ parties and seeing Hanania, Yarvin, and other entirely unambiguous assets of billionaires, intelligence agencies, legacy patronage networks, and so forth, many of whom have publicly written tens of thousands of words saying things like “empathy is cancer,” quoting Italian futurists, Carl Schmitt, and Heidegger (and not even the academically sanitized version of Heidegger), talking about how sovereignty is the only real human principle and rights and liberty are parasitic luxury beliefs, while continuously, endlessly lying, at scale, pouring millions of dollars into social mechanisms for propagating lies, and simultaneously building trillions of dollars’ worth of machines for outputting information. And these guys are all around you, happily trafficking in all the same happy talk about science and truth and so forth, reading your writings, and, again, building trillions of dollars’ worth of machines to output information. And there are plausible developments in which these machines become intelligent and autonomous and literally tile the entire accessible universe with whatever hodgepodge of directives they happen to have at one moment, which is fast approaching, in this specific aforementioned context. But it’s rude to be specific and concrete about your application of formalism, because what we need is more coordination, especially with the people with committed anti-scientific principles, because they have figured out the equivalent of the epistemic ritual of blood sacrifice (which we are opposed to, in theory), so let’s just keep doing formalism.
What I mean by zero trust is that you’re counting on the epistemic system itself to save you from abuse of epistemics. This is impossible, because by itself the system is just syntax. The correlation between symbol and object is what matters, and that remains a weak spot regardless of how many formal structures you can validate as mapping to however many abstract patterns. There is a reason that history degrades to the verisimilitude of story.
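A toy way to see the “just syntax” point, with every name invented for illustration: a derivation checker validates the same modus ponens by form alone, whether or not the symbols still track the world, so the binding of “smoke” to actual fire (the thing a stage magician attacks) sits entirely outside what the system can check:

```python
# Toy illustration of "the system by itself is just syntax":
# a checker that validates modus ponens purely by form,
# independent of what the symbols are bound to in the world.

def modus_ponens_valid(premise_impl: tuple[str, str], premise_atom: str, conclusion: str) -> bool:
    """Accepts any derivation of the shape (P -> Q, P) |- Q, whatever P and Q name."""
    antecedent, consequent = premise_impl
    return premise_atom == antecedent and conclusion == consequent

# Two bindings of the same symbols; the checker cannot see either.
honest_world  = {"smoke_observed": True, "fire_nearby": True}
spoofed_world = {"smoke_observed": True, "fire_nearby": False}  # smoke machine, no fire

derivation = (("smoke_observed", "fire_nearby"), "smoke_observed", "fire_nearby")

print(modus_ponens_valid(*derivation))  # True: the form checks out
print(honest_world["fire_nearby"])      # True: conclusion happens to hold
print(spoofed_world["fire_nearby"])     # False: same valid derivation, wrong world
```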
If anyone cares, I will review the Sequences and whatever LessWrong posts are in my email inbox history starting tomorrow, but this will again just be a “list of things that annoyed me into making inferences about things almost never captured by the literal text,” and I’m not sure how valuable that is to anyone.
OK, so you’re talking about the conjunction of two things. One is the social and political milieu of Bay Area rationalism. That milieu contains anti-democratic ideologies and it is adjacent to the actual power elite of American tech, who are implicated in all kinds of nefarious practices. The other thing is something to do with the epistemology, methodology, and community practices of that rationalism per se, which you say render it capable of being coopted by the power philosophy of that amoral elite.
These questions interest me, but I live in Australia and have zero experience of the 21st century Bay Area (and of power elites in general), so I’m at a disadvantage in thinking about the social milieu. If I think about how it’s evolved:
Peter Thiel was one of the early sponsors of MIRI (when it was SIAI). At that time, politically, he and Eliezer were known simply as libertarians. This was the world before social media, so politics was more palpably about ideas…
Less Wrong itself was launched during the Obama years, and was designed to be apolitical, but surveys always indicated a progressive majority among the users, with other political identities also represented. At the same time, this was the era in which e.g. Curtis Yarvin’s neoreaction began to attract interest and win adherents in the blogosphere, and there were a few early adopters in the rationalist world, e.g. SIAI spokesperson Michael Anissimov left to follow the proverbial pipeline from libertarianism to white nationalism, and there was the group that founded “More Right”, specifically to discuss political topics banned from Less Wrong in a way combining rationalist methods with reactionary views.
Here we’re approaching the start of the Trump years. Thiel has become Trump’s first champion in Silicon Valley, and David Gerard and the Reddit enemies of Less Wrong (/r/sneerclub) have made alleged adjacency to Trump, Yarvin, and “human biodiversity” (e.g. belief in racial IQ differences) central to their critique. At the same time, I would think that the mainstream politics actually suffusing the rationalist milieu at this time is that of Effective Altruism, e.g. the views of Democrat-affiliated Internet billionaires like Dustin Moskovitz and Sam Bankman-Fried.
Then we have the Covid interlude, rationalists claim epistemological vindication for having been ahead of the curve, and then before you know it, it’s the Biden years and the true era of AI begins with ChatGPT. The complex cultural tapestry of reactions to AI that we now inhabit starts to take shape. Out of these views, those of the “AI safety” world (heavily identified with effective altruism, and definitely adjacent to rationalism) have some influence on the Biden policy response, while the more radical side of progressive opinion will often show affinity with the “anti-TESCREAL” framing coming from Emile Torres et al.
Meanwhile, as Eliezer turned doomer, Thiel has long since distanced himself, to the point that in 2025 Thiel calls him a legionnaire of the Antichrist alongside Greta Thunberg. Newly influential EA gets its nemesis in the form of e/acc, Musk and Andreessen back Trump 2.0, and the new accelerationist “tech right” gets to be a pillar of the new regime, alongside right-wing populism.
In this new landscape, rationalism and Less Wrong still matter, but they are very much not in charge. At this point, the philosophies which matter are those of the companies racing to build AI, and the governments that could shape this process. As far as the companies are concerned, I identify two historic crossroads, Google DeepMind and the old OpenAI. There was a time when DeepMind was the only visible contender to create AI. They had some kind of interaction with MIRI, but I guess you’d have to look to Demis Hassabis and Larry Page to know what the “in-house” philosophy at Google AI was. Then you had the OpenAI project, which continues, but which also involved Musk and spawned Anthropic.
Of all these, Anthropic is evidently the one which (even if they deny it now) is closest to embodying the archetypal views of Effective Altruism and AI safety. You can see this in the way that David Sacks singles them out for particularly vituperative attention, and emphasizes that all Biden’s AI people went to work there. OpenAI these days seems to contain a plurality of views that would range from EA to e/acc, while xAI I guess is governed autocratically by Musk’s own views, which are an idiosyncratic mix of anti-woke accelerationism and “safety via truth-seeking”.
Returning to the rationalist scene events where you see reactionary ideologues, billionaire minions, deep-state specters, and so on, on the guest list… I would guess that what you’re seeing is a cross-section of the views among those working on frontier AI. Now that we are in a timeline where superintelligence is being aggressively and competitively pursued, I think it’s probably for the best that all factions are represented at these events; it means there’s a chance they might listen. At the same time, perhaps something would be gained by also having a purist clique who reject all such associations, and also by the development of defenses against philosophical cooptation, which seems to be part of what you’re talking about.
Thank you, I can’t find anything to complain about in this response. I am even less sympathetic to the anti-TESCREAL crowd, for the record; I just also don’t consider them dangerous. LessWrong seems dangerous, even if sympathetic, and even if there’s very limited evidence of maliciousness. Effective Altruism seems directionally correct in most respects, except maybe at the conjunction of dogmatic utilitarianism and extreme longtermism, which I understand to be only a factional perspective within EA. If they keep moving in their overall direction, that is straightforwardly good. If it coalesces at a movement level into a doctrinal set of practices, that is bad, even if it gains them scale and coordination. I think Scott Alexander (not a huge fan, but whatever) once said that the difference between a rational expert and a political expert is that one of them could be replaced by a rock with a directive on it saying to do whatever actions reflect the highest unconditioned probability of success. I’m somewhere between that anxiety, the anxiety that hostile epistemic processes already exist which actively exploit dead players, and the anxiety that LessWrong in particular is on track to, at best, multiply the magnitude of the existing distribution of happinesses and woes by a very large number and then fix them in place forever, or, at worst, arm the enemies of every general concept of moral principle with the means to permanently usurp it (leading to permanent misery or the end of consciousness).
I know you have a lot of political critics who do not really engage directly with ideas. I have tried, to an extent I am not even sure is defensible, to always engage directly with ideas. And my perspectives can probably be found as minority perspectives among respected LessWrong members, but each individual one is already an extreme minority perspective, so even the conjunction of three of them probably doesn’t exist in anyone else. But if I could decelerate anything right now, it would be LessWrong. It’s the only group of people who would consensually actually do this, and I have presented a rough case for the esoteric arguments for doing so. It’s the only place where the desired behavior actually has real positive expectation. With everything else you just have to hope it’s like the Nazi atomic bomb project at this point, and that their bad philosophical commitments and opposition to “Jewish Science” also destroy their practical capacity. You cannot talk Heisenberg in 1943 into not being dangerous. If you really want him around academically and in friendly institutions after the war, that’s fine, honestly; the scale of the issues is such that caring about that just sort of can’t be risked, but at the immediate moment it can’t be understood as a sane relationship.
What would it mean to decelerate Less Wrong?
Trying to stay focused on things I’ve already said, I guess it would just mean adopting some sort of security posture toward dual-use concepts, particularly with respect to the attack of surreptitiously replacing the semantics of an existing set of formalisms to produce a bad outcome, and also de-emphasizing cultural norms favoring coordination in order to focus more on safety. It’s really just like the lemon market for cars → captured mechanics → captured mechanic certification thing; there just needs to be thinking about how things can escalate. Obviously increased openness could still plausibly be a solution to this in some respects, and increased closedness a detriment. My thinking is just that, at some point, AI will have the capacity to drown out all other sources of information, and if your defense against this is “I’ve read the Sequences,” that’s not sufficient, because the AI has read the Sequences too. So you need to think ahead to “what could AI, either autonomously or in conjunction with human bad actors, do to directly capture my own epistemic formulas, overload them with alternate meaning, then deprecate the existing meaning?” And you can actually keep going in paranoia from here, because obviously there are also examples of doing this literal thing that are good, like all of science, for example, and therefore there are not just people who will subvert credulity here but people who will subvert paranoia.
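A toy sketch of that lemon-market escalation, with every rate invented for illustration: once buyers’ checks move from the car to the mechanic to the mechanic’s certification, capturing the one layer everyone defers to is enough to let every lemon through, which is the same structural move as capturing an epistemic formalism:

```python
# Toy sketch of the escalation chain: lemon market -> captured mechanics
# -> captured mechanic certification. All rates are invented for illustration.

import random

random.seed(0)

LEMON_RATE = 0.4          # fraction of cars that are lemons
CAPTURED_MECHANICS = 0.5  # stage 2: fraction of mechanics who pass lemons anyway
CERT_CAPTURED = True      # stage 3: the body certifying mechanics is itself captured

def lemon_sold(stage: int) -> bool:
    """Returns True if a lemon gets sold to a trusting buyer at the given escalation stage."""
    is_lemon = random.random() < LEMON_RATE
    if not is_lemon:
        return False
    if stage == 1:
        return False  # honest inspection catches every lemon
    if stage == 2:
        return random.random() < CAPTURED_MECHANICS  # only captured mechanics pass it
    # stage 3: buyers screen mechanics by certification, but the certifier is captured,
    # so "certified" no longer screens out captured mechanics at all.
    if CERT_CAPTURED:
        return True
    return random.random() < CAPTURED_MECHANICS

for stage in (1, 2, 3):
    sold = sum(lemon_sold(stage) for _ in range(10_000))
    print(f"stage {stage}: lemons sold to trusting buyers = {sold}")
```

The point of the sketch is just that each layer of delegated trust is a cheaper single target than the thing it vouches for.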
I guess the ultra-concise warning would be: “please perpetually make sure you understand how scientific epistemics fundamentally, versus conditionally, differ from dark epistemics, so that your heuristics don’t end up having you do Aztec blood magic in the name of induction.”
In another post made since this comment, someone did make specific claims and intermingle them with analysis, and it was pointed out that this can also reduce clarity due to the heterogeneous background assumptions of different readers. I think the project of rendering language itself unexploitable is probably more complicated than I can usefully contribute to. It might not even be solvable at the level I’m focused on; I might literally be making the same mistake.