TurnTrout
Karma: 17,369
My name is Alex Turner. I’m a research scientist at Google DeepMind on the Scalable Alignment team. My views are strictly my own; I do not represent Google. Reach me at alex[at]turntrout.com
I think this is, unfortunately, true. One reason people might feel this way is that they view LessWrong posts through a social lens. Eliezer posts about how doomed alignment is and how stupid everyone else’s solution attempts are; that feels bad, you feel sheepish about disagreeing, and so on.
But while that reaction to the social dynamics is understandable, the important part of the situation is not the social dynamics. It is about finding technical solutions to prevent utter ruination. When I notice the status-calculators in my brain starting to crunch and chew on Eliezer’s posts, I tell them to be quiet; that’s not important, and who cares whether he thinks I’m a fool. I enter a frame in which Eliezer is a generator of claims and statements. Often those claims and statements are interesting and even true, so I do pay attention to that generator’s outputs, but it’s still up to me to evaluate them, to think for myself.
If Eliezer says everyone’s ideas are awful, that’s another claim to be evaluated. If Eliezer says we are doomed, that’s another claim to be evaluated. The point is not to argue Eliezer into agreement, or to earn his respect. The point is to win in reality, and I’m not going to do that by constantly worrying about whether I should shut up.
If I’m wrong on an object-level point, I’m wrong, and I’ll change my mind, and then keep working. The rest is distraction.