Rank: #10 out of 4859 in peer accuracy at Metaculus for the time period of 2016-2020.
ChristianKl
So I see much of Vassarism as claiming: These protections against high-energy memes are harmful. We need to break them down so that we can properly hold people in power accountable, and freely discuss important risks.
It’s been a while since this was written, but I don’t think this summarizes well what Vassar says.
If anyone wants to get a good idea of what kind of arguments Vassar makes, his talk with Spencer Greenberg is a good source.
One of the reasons you might see him as dangerous is that he advocates that people should see a lot more interactions as being about conflict theory. Switching from assuming good intent in other people to using conflict theory can be quite disruptive to many interactions.
Getting someone to stop assuming that the people around them have good intent can be quite bad for their mental health even if it’s true.
My comment was written back in 2023, so I’m unsure what specific source I was referring to at the time, but there’s a comment from Scott from 2024 saying:
But I wasn’t able to find any direct causal link between Michael and the psychotic breaks—people in this group sometimes had breaks before encountering him, or after knowing him for long enough that it didn’t seem triggered by meeting him, or triggered by obvious life events. I think there’s more reverse causation (mentally fragile people who are interested in psychedelics join, or get targeted for recruitment into, his group) than direct causation (he convinces people to take psychedelics and drives them insane), though I do think there’s a little minor direct causation in a few cases.
I got the impression that you asserted that Yudkowsky’s work is somehow central to rationality in a religious way.
To me, the fact that CFAR started out with the idea that Bayesianism might be important and then found in their research that teaching Bayes’ formula isn’t, is a prime example of how rationality works differently than religion.
Things aren’t just taken as given and religiously believed.
Conscious phenomenology should only arise in systems whose internal states model both the world and their own internal dynamics as an observer within that world. Neural or artificial systems that lack such recursive architectures should not report or behave as though they experience an “inner glow.”
It’s unclear to me what “inner glow” means here. I have the impression that LLMs are trained on a lot of text from humans, some of which is humans reporting that they are conscious, even though the LLMs don’t really have a recursive structure. It’s possible for an LLM to just repeat what it read and report the same thing that humans do.
Experimental manipulations that increase transparency of underlying mechanisms should not increase but rather degrade phenomenology.
This suggests that as people learn to meditate and gain more transparency, on average they should become less convinced that consciousness is primary. While some people feel like the phenomenology gets degraded (and it sounds like you are one of them), I get the impression that there are more reports of long-term meditators whose meditation experience strengthens their belief that consciousness is ontologically primitive.
No. Bayes’ theorem is a minor descriptive detail of Yudkowsky’s focus on being more rational.
Your post uses “notably and especially” as adjectives to describe Bayes’ theorem and now you say “minor”.
Most of what you wrote is quite vague and thus hard to falsify. Your claim about Bayes’ theorem isn’t, and you seem to agree that I have falsified it when you now say ‘minor’.
Is talking about p(doom), p(anything), or “updating” in a certain direction a cultish and religion-like use of the language of Bayesian probability?
Each knowledge community, whether scientific or otherwise, has its own terminology. Having its own terminology is not unique to religions.
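For what it’s worth, the “updating” terminology refers to nothing more mysterious than applying Bayes’ theorem to a probability you hold. A minimal sketch (the function name and the numbers are made up for illustration) looks like:

```python
# Minimal sketch of what "updating" on evidence means in Bayesian terms.
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) given P(H), P(E | H) and P(E | not-H)."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Start at p = 0.2 and observe evidence that's 3x likelier if H is true:
posterior = bayes_update(0.2, 0.6, 0.2)
print(posterior)  # ~0.43 — the probability has been "updated" upward
```

Saying one “updated toward” a claim is just shorthand for this arithmetic, the same way physicists say “work” without invoking anything religious.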
If you believe “That people can become better at distinguishing true things, or if you prefer, become more rational, by a series of practices, notably and especially applying Bayes’ Theorem from probability theory to evaluating facts” is one of the main things that rationalism is about in a religious way, then how do you explain that so few posts on LessWrong are about using Bayes’ Theorem to evaluate facts?
When I look at the LessWrong.com frontpage I don’t see any post about that, and I think there are very few, if any, posts written on LessWrong in 2025 that are about that. How does that match your thesis?
I abstractly described a thing that would work
Why do you believe that it would work? Psychology is a field where abstract ideas that people think would work frequently turn out to not work. Why do you think you understand the underlying mechanics well enough to know what would work?
Then, the appropriate reaction is to relax the constraints we impose on ourselves, test if the relaxation is valid, and take the best action we’ve got left. If we were able to do this reliably, we would find ourselves doing the best we can, and low self-esteem would be a non-issue.
Do you think that this is a strategy that someone with imposter syndrome could use to make their imposter syndrome a non-issue? I would be surprised if that’s the case.
Actually, getting rid of an “I’m not good enough” belief, through methods like what’s described in Steve Andreas’s Transform Your Self, seems to me much more likely to help the person.
OpenAI said that they don’t want to train on the CoT. Giving information about whether or not the CoT contains scheming to a user who presses buttons that affect training is training based on the CoT.
I think you can argue about the merits of “don’t train based on CoT”, but it seems to be one of the few safety-relevant decisions where OpenAI had a safety idea and managed to actually execute it.
It’s hard to know what successful attempts to shut down technology development would look like, but the research project that was MKUltra does seem to have stopped after it was shut down.
Why would he say this? What does he have to gain by emphasizing something so far away from reality? Wouldn’t he be found out by something so blatant? How could this be a reasonable thing to say? Is there a perspective from which he is being transparent?
I really was trying to make sense of what perspective would produce these words, but eventually it struck me: he’s saying it because it would be good for him if you believed it, and he thinks he can get away with saying it without you catching him out.
I think there are multiple things going on here. One of the things Evrart is advising the character to be transparent about is that Evrart is working to get the character’s gun back. That means he wants the character to tell Joyce something that suggests the character has a debt towards Evrart.
Evrart suggests that he is not going to violate the character’s privacy, which concretely would mean telling other people about the gun. If the character then decides not to tell Joyce about the gun after being explicitly told that he can do so, the character becomes complicit in hiding information.
most senior researchers have more good ideas than they can execute
What do you mean by “good idea”?
My general impression of the field is that we lack ideas that are likely to solve AI alignment right now. To me that suggests that good ideas are scarce.
There seem to me to be different categories of being doomful.
There are people who think that for theoretical reasons AI alignment is hard or impossible.
There are also people who are more focused on practical issues, like AI companies being run in a profit-maximizing way and having no incentives to care for most of the population.
Saying, “You can’t AI box for theoretical reasons” is different from saying “Nobody will AI box for economic reasons”.
The tip of their chromosome, the telomere, doesn’t get damaged through cell replication. This means they are not subject to senescence and could theoretically live forever in good condition.
No, it doesn’t. There are multiple different mechanisms that lead to senescence. Telomere shortening is just one of them.
First, why reproduce this way?
That question sounds like evolution has intent in a way it doesn’t.
The key questions are: (1) What benefit did marginal change in the direction of developing a feature provide? (2) Why is the feature evolutionary stable and a species doesn’t lose it? (3) Why didn’t other species outcompete the species and drive it to extinction so that we don’t observe it?
A key problem is that the responsibility for the problem is separated over multiple different stakeholders that all matter.
The printer company, which is usually in the business of selling you printer ink.
The OS that manages the printer
Software that wants to print
Network sysadmin (different for each organization that has a printer)
Facility management responsible for locations of printers / paper availability (different for each organization that has a printer)
In your example, it sounds like the main problem lies with the facility management, which doesn’t seem to consider printers something they should organize in a sensible way.
When it comes to printing, many times when I want to print, I don’t really want a piece of paper but want to send a letter. Software like Google Docs could just give me a “send a letter” and a “send a fax” button and let me pay for the service.
That’s why we have already exhausted all major pathways for drug mechanisms.
That’s not true; there’s research like Zampaloni et al. 2024 that proposes new antibiotic classes.
What did we learn? We have almost all of human history to look back at, but we don’t care.
We do care. We care so much that we essentially made the development of new antibiotics commercially unviable. Companies that managed to bring new antibiotics to market, like Achaogen, Tetraphase, Nabriva Therapeutics, and Melinta Therapeutics, all failed commercially.
We do fund the research of people like Zampaloni, but we make rules that prevent companies from selling newly developed antibiotics to people who want to buy them. Therefore, biotech companies and Big Pharma have little incentive to translate research findings such as those of Zampaloni et al. into commercial products. They don’t want to suffer the same fate as Achaogen, Tetraphase, Nabriva Therapeutics, and Melinta Therapeutics.
This is similar to COVID-19, an example where experts do care a lot about the problem but then make bad policy. The reasoning goes: “I’m a researcher, I know about a problem, so the solution should be more research on the problem,” even if solving it would take something else.
since it’s the only decision you make that could have infinite impact
No. If you assume that there might be a god that rewards certain choices with infinite impact, that god could choose to reward all sorts of different choices with infinite impact. The set of rules that various religions propose is not the total set of rules that a god might use to make his judgements.
In practice, it’s all a matter of trade negotiations. Trade deals specify on what grounds countries can bar goods from being sold. Plenty of trade agreements then have clauses for Investor-State Dispute Settlement to enforce what was negotiated, which reduces countries’ sovereignty to just do what they want.
Trump did get the EU to allow US goods to be sold that were previously blocked from being sold because of regulations like car safety regulation.
The EU’s rules are quite complex. The general rule is in Article 34 of the TFEU:
Quantitative restrictions on imports and all measures having equivalent effect shall be prohibited between Member States.
Article 36 TFEU then says:
The provisions of Articles 34 and 35 shall not preclude prohibitions or restrictions on imports, exports or goods in transit justified on grounds of public morality, public policy or public security; the protection of health and life of humans, animals or plants; the protection of national treasures possessing artistic, historic or archaeological value; or the protection of industrial and commercial property. Such prohibitions or restrictions shall not, however, constitute a means of arbitrary discrimination or a disguised restriction on trade between Member States.
Then there are plenty of more specific EU directives; ChatGPT lists Regulation 2019/515 and Directive 1999/74/EC as mattering to the egg question.
Isn’t there an obvious solution to that: allow only early-screened eggs to be sold in Germany, no matter where they came from?
That violates the rules of a common market, which is the core of what the EU is about. This is the logic behind why Dominic Cummings considered Brexit to be the obvious solution.
Recently, the Trump administration argued that some EU rules, for things like car safety, block US cars from being sold in Europe, so as part of his tariff threats he pushed through rules so that cars that EU rules used to consider unsafe can now be sold. Trade agreements limit how countries can restrict what’s sold in them, and the common market is a trade agreement that everything can be sold everywhere in the EU.
I remember one conversation at a LessWrong community weekend where I made a contrarian argument. The other person responded with something like, “I don’t know the subject matter well enough to judge your arguments; I’d rather stay with believing the status quo. The topic isn’t really relevant enough for me to invest time into it.”
That’s the kind of answer you can get when speaking with rationalists but don’t really get when talking to non-rationalists. That person wasn’t “glad to learn that they were wrong”, but they were far from irrational. They had an idea about their own beliefs, and about how it makes sense to change them, that was the result of reasoning in a way that non-rationalists don’t tend to engage in.
Adam sounds naive to me about what goes into actually changing your mind. He seems to take “learning that you were wrong” as a goal in itself. The person I was speaking about in the above example didn’t have a goal of having a sophisticated understanding of the domain I was talking about, and that was probably completely in line with their utility function.
When it comes to issues where it’s actually important to change your mind, it’s complex in another way. Someone might give you a convincing rational argument but in the back of your mind there’s a part of you that feels wary. While you could ignore that part at the back of your mind and just update your belief, it’s not clear that this is always the best idea.
There are a few people who were faced with pretty convincing arguments about the central importance of AI safety and about it being important for them to do everything they can to fight for it. Then a year later, they have burnout because they invested all their energy into AI safety. They ignored a part of themselves, and their ability to change their mind turned to their detriment. A lot of what CFAR did with Focusing and internal double crux is about listening to more internal information instead of suppressing it.
Another problem when it comes to teaching rationality is that even if someone does the right thing 99% of the time, if they do the wrong thing in the 1% of cases when it actually matters, the result is still a failure. Just because someone can do it in the dojo where they train katas doesn’t mean they can do it when it’s actually important.
Julia Galef had the Scout vs. Soldier mindset as one alternative to the paradigm of teaching individual skills. The idea is that the problem often isn’t that people lack the skills, but that they are in soldier mindset and thus don’t use the skills they have.