I am not surprised to hear this but also, this is insane.
All the lab heads repeatedly and publicly claim that superintelligence is within reach and that their work could cause human extinction, and yet a majority of people at their own labs don’t take them seriously on this.
I’m somewhat confused about what causes a group of people who talk to each other every day, work on the same projects, observe the same evidence, etc., to come to such wildly different conclusions about the work they’re doing together and then be uninterested in resolving the disagreement.
Is there a taboo being enforced against discussing these disagreements inside the labs?
this is pretty normal? it’s really hard for leadership to make employees care about or believe specific things. do you really think the average Amazon employee or whatever has strong opinions on the future of delivery drones? does the average Waymo employee have extremely strong beliefs about the future of self driving?
for most people in the world, their job is just a job. people obviously avoid working on things they believe are completely doomed, and tend to work on cool trendy things. but generally most people do not really have strong beliefs about where the stuff they’re working on is going.
no specific taboo is required to ensure that people don’t really iron out deep philosophical disagreements with their coworkers. people care about all sorts of other things in life. they care about money, they care whether they’re enjoying the work, they care whether their coworkers are pleasant to be around, they care about their wife and kids and house.
once you have a company with more than 10 people, it requires constant effort to maintain culture. hiring is way harder if you can only hire people who are aligned, or if you insist on aligning people. if you grow very fast (and openai has grown very fast—it’s approximately doubled every single year I’ve been here), it’s inevitable that your culture will splinter. forget about having everyone on the same page; you’re going to have entire little googletowns and amazontowns and so on of people who bring Google or Amazon culture with them and agglomerate with other recent transplants from those companies.
I agree with your point. I think this tendency is especially visible in global conglomerates with rigid personnel structures. Take Samsung, for example — a company that supplies memory semiconductors while working with Nvidia and TSMC in the AI space. Samsung’s influence is comparable to that of many Silicon Valley giants, but its HR system is still based on traditional corporate aptitude tests and academic pedigree.
They hire on a massive scale, and the problem isn’t really about whether someone believes in X-risk or not — sometimes promotions are limited or even employment is terminated simply because of age. In more flexible job markets, like the Bay Area, the issue you described seems much less pronounced.
I’m writing this from the perspective of Seoul, where the job market is very isolated and rigid, so my comment doesn’t represent the whole world. I just wanted to say that, in that sense, what you wrote might actually make more sense on a global level. On the other side of the planet, there are people who worry less about whether the AI they’re building might end the world, and more about how they’ll feed their families tomorrow.
VojtaKovarik expressed that beautifully through the concept of fiducity, but from an individual’s point of view, it can be summed up more simply as: “One meal tomorrow matters more than the end of the world ten years from now.”
I think it’s notably abnormal, specifically because this wasn’t the “default” equilibrium for OpenAI.
Like you mentioned earlier:
selective hiring is very real. lots of people who are xrisk pilled just refuse to join oai. people who care a lot often end up very stressed and leave in large part because of the stress.
and
most cases of xrisk people leaving are just because people felt sidelined/unhappy and chose to leave.
One model of this is “it’s normal that people at any company don’t have strong opinions about their work”. Another model is “lots of people in various positions did in fact have strong opinions about this, given the stakes, and left”.
If you send such strong signals about safety that people preemptively filter themselves out of the hiring pipeline, and the people already there with strong opinions on safety feel sidelined, then IMO the obvious interpretation is “you actively filtered against people with strong views on safety”.
I appreciate your answer, it adds useful info.
do you really think the average Amazon employee or whatever has strong opinions on the future of delivery drones? does the average Waymo employee have extremely strong beliefs about the future of self driving?
I assume yes, a lot of employees do believe technical capabilities are further ahead of what the rest of the public is aware of, because they have insider knowledge. And they are aware that they are accelerating it.
They might not deeply care about drones, but they are aware of what is going on.
With AI, you can’t not care about ASI or extinction or catastrophe or dictatorship or any of the words thrown around, as they directly affect your life too.
once you have a company with more than 10 people, it requires constant effort to maintain culture. hiring is way harder if you can only hire people who are aligned, or if you insist on aligning people.
Then I expect OpenAI to splinter hard as we get closer to ASI.
With AI, you can’t not care about ASI or extinction or catastrophe or dictatorship or any of the words thrown around, as they directly affect your life too.
I think it is important to distinguish between caring in the sense of “it hugely affects your life” and caring in the sense of, for example, “having opinions on it, voicing them, or even taking costlier actions”. Presumably, if something affects you hugely, you should care about it in the second sense, but that doesn’t mean people do. My uninformed intuition is that this would apply to most employees of AI companies.