I’m curious how people are parsing this rumor (quoted from Connor’s tweets):
I recall a story of how a group of AI researchers at a leading org (consider this rumor completely fictional and illustrative, but if you wanted to find its source it’s not that hard to find in Berkeley) became extremely depressed about AGI and alignment, thinking that they were doomed if their company kept building AGI like this. So what did they do? Quit? Organize a protest? Petition the government? They drove out, deep into the desert, and did a shit ton of acid...and when they were back, they all just didn’t feel quite so stressed out about this whole AGI doom thing anymore, and there was no need for them to have a stressful confrontation with their big, scary CEO.
Do people in proximity to the relevant community consider this anecdote fictional/not-pertinent/exaggerated/but-of-course with respect to AI safety?
No, the opposite. I was implying that there’s clearly a deeper underpinning to these patterns that no amount of rigor will be sufficient to solve, but the point is better articulated in KurtB’s excellent later comment, with possible solutions in jsteinhardt’s earlier comment.
I agree; that’s very true. However, this usually occurs in companies chasing zero-sum goals. Employees treated this way often resort to some combination of complaining to HR, staying silent under NDAs, or biting the bullet while waiting for their paydays. It’s particularly disheartening to hear of a years-long pattern like this, given the discomfort induced around speaking out and the efforts to downplay it, in an organization that publicly aims to save the world.