Here is a similar post one could make about a different company:
A friend of mine has recently encountered a number of people with misconceptions about their employer, Philip Morris International (PMI). Some common impressions are accurate, and others are not. He encouraged me to write a post intended to clarify some of these points, to help people know what to expect from the organization and how to engage with it. It is not intended as a full explanation or evaluation of Philip Morris’s strategy.
Common accurate impressions
Philip Morris International is the world’s largest producer of cigarettes.
The majority of employees at Philip Morris International work on tobacco production and marketing for the developing world.
The majority of Philip Morris International’s employees did not join with the primary motivation of reducing harm from tobacco smoke specifically.
PMI is the largest tobacco company in the world by market capitalization or revenue. PMI has six multibillion-dollar brands and ships tens of billions of units to (in order of volume) Southeast Asia, the European Union, the Middle East and Africa, Eastern Europe, the Americas, and East Asia & Australia.
Common misconceptions
Incorrect: PMI is not working on alternatives to cigarettes.
Philip Morris International is one of the world’s largest funders of research and development on smoke-free products. PMI established the Foundation for a Smoke-Free World in 2017 and pledged to provide it with over one billion dollars over the following twelve years. By 2018 it had spent several billion dollars developing alternatives like IQOS through its research house in Switzerland, and all told, since its inception, PMI has spent over $9 billion on research into smoke-free alternatives.
Incorrect: Most people who were working on cigarette alternatives left during the Altria Group/PMI split.
Until a spin-off in March 2008, Philip Morris International was an operating company of Altria. Altria explained the spin-off by arguing that PMI would have more “freedom” outside the responsibilities and standards of American corporate ownership, in terms of potential litigation and legislative restrictions, to “pursue sales growth in emerging markets”, while Altria focused on the American domestic market. PMI kept the majority of its international research houses in the split.
Incorrect: PMI leadership is dismissive of risks caused by smoking.
Since the public outcry of the 1990s, Philip Morris spinoffs have been very public in acknowledging the health problems that cigarettes cause. You can tell they care about their customers because their website is basically ~80% dedicated to their publicly declared quest to end smoking and to reach a target of 50% of revenue coming from smoke-free products by 2025.
I found this comment helpful for me as I was trying to understand AI labs’ roles in all this. Please consider retracting the retraction :)
Now that I have your blessing I shall do that! I was mostly worried cause I have a history of making unhelpfully aggressive AI safety-related comments and I didn’t want moderators to get frustrated with me again (which, to be clear, so far has happened only for very understandable reasons).
The parent seems to be redacted, but I wish to express that the satire angle gave quite a clear picture of some dynamics that could otherwise get watered down to the point of irrelevance. With its length and intensity, though, it might have been unfriendlier than it needed to be.
So, in brief and in the abstract: an oil company that promises carbon reductions out of social responsibility can face a conflict of interest, and might not push in both directions with the same gusto.
So with an organisation both making AI happen and not happen, the left hand not knowing what the right hand is doing is relatively likely.
I obviously think there are many important disanalogies, but even if there weren’t, rhetoric like this seems like an excellent way to discourage OpenAI employees from ever engaging with the alignment community, which seems like a pretty bad thing to me.
I’d agree if somebody else wrote what you wrote but I don’t think it’s appropriate for you as an OpenAI employee to say that.
Thank you for causing me to reconsider. I should have said “other OpenAI employees”. I do not intend to disengage from the alignment community because of critical rhetoric, and I apologize if my comment came across as a threat to do so. I am concerned about further breakdown of communication between the alignment community and AI labs where alignment solutions may need to be implemented.
I don’t immediately see any other reason why my comment might have been inappropriate, but I welcome your clarification if I am missing something.
Thanks for the clarification.