Thanks for writing this post! I’m curious to hear more about this bit of your beliefs going in:
The existential risk argument is suspiciously aligned with the commercial incentives of AI executives. It simultaneously serves to hype up capabilities and coolness while also directing attention away from the real problems that are already emerging. It’s suspicious that the apparent solution to this problem is to do more AI research as opposed to doing anything that would actually hurt AI companies financially.
Are there arguments or evidence that would have convinced you the existential risk worries in the industry were real / sincere?
For context, I work at a frontier AI lab and from where I sit it’s very clear to me that the x-risk worries aren’t coming from a place of hype, and people who know more about the technology generally get more worried rather than less. (The executives still could be disingenuous in their expressed concern, but if so they’re doing it in order to placate their employees who have real concerns about the risks, not to sound cool to their investors.)
I don’t know what sorts of things would make that clearer from the outside, though. Curious if any of the following arguments would have been compelling to you:
The AI labs most willing to take costly actions now (like hiring lots of safety researchers, supporting AI regulation that the rest of the industry opposes, or making advance commitments about the preparations they'll take before releasing future models) are also the ones talking the most about catastrophic or existential risks.
Like if you thought this stuff was an underhanded tactic to drum up hype and get commercial success by lying to the public, then it’s strange that Meta AI, not usually known for its tremendous moral integrity, is so principled about telling the truth that they basically never bring these risks up!
People often quit their well-paying jobs at AI companies in order to speak out about existential risk, or because they felt insufficient attention was being paid to safety from catastrophic or existential risks.
The standard trajectory is for lab executives to talk about existential risk a moderate amount early on, when they're a small research organization, and then become much quieter about it over time as they come under more and more commercial pressure. You actually see much more discussion of existential risk among the lower-level employees, whose statements are less scrutinized for being commercially unwise. This is a weird pattern for something whose main purpose is supposedly to attract hype and investment!