How to respond to the recent condemnations of the rationalist community

On Twitter, there has been some concerning discourse about rationalism and AI. I’d rather not link to it directly, but just look up the term “TESCREAL” on Twitter. The R stands for rationalism. Here is a particularly strong tweet:

We need a support group for those of us who’ve been around the #TESCREAL bundle for years, watching them amass more power, start their various institutes, get all sorts of $$ & mainstream media megaphones, see what their various cults have been doing & just watching them expand.

From what I can tell, the “contra-TESCREAL” crowd (a term I am coining here) does not seem interested in object-level arguments about AI existential safety at this time. They are analyzing the situation purely from a social perspective.

So, what do we do? Do we get mad at them? What is the rational thing to do?

I think the answer is understanding and kindness.

AI safetyists and AI ethicists are aligned

The trigger seems to have been the difference between AI safety and AI ethics. However, I’d like to argue that we are quite aligned with each other, as the vast majority of human groups ultimately are.

  • I think most AI safety people agree that social issues are important. Although not specifically about AI, Eliezer Yudkowsky published a post titled A Comprehensive Reboot of Law Enforcement that advocated such points as “Nationwide zero-tolerance for death of unarmed persons caused by law enforcement”, “Comprehensively decriminalize most victimless offenses”, and “Restore disparate enforcement as a judicial reason to challenge and strike down a law”. These are things Eliezer Yudkowsky actually wants in real life right now, not positions he is merely exploring hypothetically.

  • I think most AI ethicists agree that, hypothetically, if a machine turned their and everyone else’s bodies into paperclips, that would be at least somewhat worse than current social issues.

So I think it’s fair to say that we have a similar utility function; AI safetyists and AI ethicists just come into conflict because they have different beliefs about how likely the singularity is. In the specific case of contra-TESCREAL, they also don’t understand rationalist conversational norms. But I fear most of us don’t understand their conversational norms either.

This is why I am proposing the prefix “contra-” instead of “anti-”: we are not enemies; we are in disagreement.[1]

How to respond?

I feel like most rationalists’ first instinct is to respond like a straw Vulcan. But as I said, they are not interested in object-level arguments. Rationality is whatever causes winning, but what counts as winning in this scenario?

I think the teachings of Jesus apply well:

When they which were about him saw what would follow, they said unto him, Lord, shall we smite with the sword? And one of them smote the servant of the high priest, and cut off his right ear. And Jesus answered and said, Suffer ye thus far. And he touched his ear, and healed him.

How the contra-TESCREAL crowd feels about us is part of their map. Their map is not the territory, but it is part of the territory.

I decided to write the following email to Timnit Gebru:

Dear Timnit Gebru,

Although I am more familiar with AI existential safety, I do value your work and your voice. In particular, I found the paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” insightful. Recognizing that humans have agency with respect to the creation of AI is important, and illuminating the decision-making processes behind it even more so. And analyzing the way in which AI can, through sheer usefulness alone, command large amounts of electricity demonstrates the dangers of AI. I am saddened by those in my community who have treated you harshly, but I just wanted to let you know that, even when I don’t agree with everything you say, I value you and think your research is valuable. To say otherwise would be irrational.

Sincerely,
Christopher King

And a similar one to Emily Bender.

Dear Emily Bender,

Although I am more familiar with AI existential safety, I do value your work and your voice. In particular, I found the editorial “Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’” insightful. Although I don’t agree with everything in it, I am glad that there are people who recognize and promote the fact that what LLMs do is entirely alien and likely has little relation to the human brain. People often assume that an LLM will do what is right because it is useful, which is an entirely inappropriate anthropomorphization.

I am saddened by those in my community who have treated you harshly, but I just wanted to let you know that, even when I don’t agree with everything you say, I value you and think your research is valuable. To say otherwise would be irrational.

Sincerely,
Christopher King

I invite all of you to send similar emails and messages to members of the contra-TESCREAL group. This is particularly important because I fear that some members of our community are sending harsh, condemnatory messages. Even if their intention is not to cause harm (they may think they are simply presenting arguments), those messages need to be balanced out.

Concluding thoughts

  • Members of the rationalist community should honestly engage with the AI ethics and bias discourse, perhaps becoming part of their groups. This is similar to the reason why many of us join AI companies; even if their mission is not literally the most effective thing in the world, making them more rational is a good thing. This should be done openly, though, so that it is perceived as bridge-building rather than infiltration.

  • There are probably things we can learn from AI ethics and bias research. We should do so.

  • There is a big, nasty infohazard at play here. It is one of the oldest, but also one of the most powerful: the belief that you are not physically safe. Many members of contra-TESCREAL have voiced concerns about their physical safety. This can lead to serious health issues, such as chronic fear, depression, anxiety, and post-traumatic stress disorder. Eliezer Yudkowsky has attempted to address this infohazard, but I feel more should be done. In particular, when we talk about this issue, empathy should be a top concern.

  1. ^

    Although I must admit there was some temptation to call them WaTESCREAL.