I’m curious how people are parsing this rumor (part of Connor’s tweets):
I recall a story of how a group of AI researchers at a leading org (consider this rumor completely fictional and illustrative, but if you wanted to find its source it’s not that hard to find in Berkeley) became extremely depressed about AGI and alignment, thinking that they were doomed if their company kept building AGI like this. So what did they do? Quit? Organize a protest? Petition the government? They drove out, deep into the desert, and did a shit ton of acid...and when they were back, they all just didn’t feel quite so stressed out about this whole AGI doom thing anymore, and there was no need for them to have to have a stressful confrontation with their big, scary, CEO.
Do people in proximity to the relevant community consider this anecdote fictional, not pertinent, exaggerated, or a "but of course" with respect to AI safety?