This BOTEC attitude makes sense if you view the creation of AI technology and AI safety as a pure function of capital investment. The economic view of AI abstracts the development process into a black box that takes investment as input and produces hardware and software out the other end. But AI development is still, for now, driven mostly by people. Much of what enables it is the culture around AI: hype, common knowledge, and the social permissibility of pursuing AI development as a startup or career path.
In that light, “AI safety people” starting AI companies, writing hype pieces that encourage natsec-coded AI races, and investing in AI tech contribute far more than mere dollars. It creates a situation in which AI safety as a movement becomes hopelessly confused about what someone “concerned about AI safety” should do with their life and career. The result is that such a person can find groups and justifications for everything from protesting outside OpenAI to serving as the CEO of OpenAI. I think this is intrinsically linked to the fundamental confusion at the origins of the movement.
In short, whatever material and economic leverage investment plays buy may not be worth the dilution of the ideas and culture of AI safety as a whole. Is AI safety just going to become the next “ESG”: a thin flag of respectability draped over capabilities and racing companies?