I am not surprised to hear this but also, this is insane.
All the lab heads are repeatedly and publicly claiming that their work could cause human extinction and that superintelligence is within reach, and yet a majority of people at their own labs don’t take them seriously on this.
I’m somewhat confused about what causes a group of people who talk to each other every day, work on the same projects, observe the same evidence, etc., to come to such wildly different conclusions about the work they’re doing together and then be uninterested in resolving the disagreement.
Is there a taboo being enforced against discussing these disagreements inside the labs?
this is pretty normal? it’s really hard for leadership to make employees care about or believe specific things. do you really think the average Amazon employee or whatever has strong opinions on the future of delivery drones? does the average Waymo employee have extremely strong beliefs about the future of self driving?
for most people in the world, their job is just a job. people obviously avoid working on things they believe are completely doomed, and tend to work on cool trendy things. but generally most people do not really have strong beliefs about where the stuff they’re working on is going.
no specific taboo is required to ensure that people don’t really iron out deep philosophical disagreements with their coworkers. people care about all sorts of other things in life. they care about money, they care whether they’re enjoying the work, they care whether their coworkers are pleasant to be around, they care about their wife and kids and house.
once you have a company with more than 10 people, it requires constant effort to maintain culture. hiring is way harder if you can only hire people who are aligned, or if you insist on aligning people. if you grow very fast (and openai has grown very fast—it’s approximately doubled every single year I’ve been here), it’s inevitable that your culture will splinter. forget about having everyone on the same page; you’re going to have entire little googletowns and amazontowns and so on of people who bring Google or Amazon culture with them and agglomerate with other recent transplants from those companies.
I think it’s notably abnormal because this wasn’t the “default” equilibrium for OpenAI specifically.
Like earlier you mentioned:
selective hiring is very real. lots of people who are xrisk pilled just refuse to join oai. people who care a lot often end up very stressed and leave in large part because of the stress.
and
most cases of xrisk people leaving are just because people felt sidelined/unhappy and chose to leave.
One model of this is “it’s normal that people at any company don’t have strong opinions about their work”. Another model is “lots of people in various positions did in fact have strong opinions about this, given the stakes, and left”.
If you send such strong signals about safety that people preemptively filter themselves out of the hiring pipeline, and the people already there with strong opinions on safety feel sidelined, then IMO the obvious interpretation is “you actively filtered against people with strong views on safety”.