For instance, quoting from the recent ‘List of Lethalities’ post:
restricting yourself to doing X will not prevent Facebook AI Research from destroying the world six months later
Compare this to:
restricting yourself to doing X will not prevent another AI lab from destroying the world six months later
One problem with your suggested reformulation is that the two sentences do not mean the same thing. Naming FAIR conveys information above and beyond ‘another AI lab’, information which would be hard to sum up, much less fit into an equally short phrase: there are reasons it is ‘FAIR’ in that sentence about catch-up competitors & the unilateralist’s curse, and reasons why various other AI lab names one might think could be dropped in, like ‘DeepMind’ or ‘Sberbank’ or ‘MILA’ or ‘BAIR’ or ‘OpenAI’ or ‘Nvidia’ or ‘MSR’, would all be wrong in various ways*, even though those reasons are hard to articulate. Similarly, there are things like the Google Brain vs DeepMind rivalry which affect how scaling research has been playing out, for the worse, and which are relevant to safety, but it would be absurd to try to talk about them in a generic sense or use pseudonyms.
(To try to explain anyway: basically, FAIR is a perpetual second-mover, second-best player with a bit of a chip on its shoulder from poaching and scooping (consider Darkforest), with little commitment to AI safety in either the real AI-risk sense or the AI-bias sense, and mostly in denial that there is any problem (eg. LeCun: “There is no such thing as Artificial General Intelligence because there is no such thing as General Intelligence.”). It gets looked down on for being part of Facebook, with its overly-applied research focus, and is seen as the most ethically-compromised and amoral part of FANG, eager to ship tech because of Zuckerberg’s conviction that more tech & transparency is always better (at least as long as it increases FB growth). And there is the historical backstory: FB struggled to hire DL people to begin with, despite Zuckerberg strongly believing in DL’s potential from almost the moment of AlexNet; spurned by the likes of Hassabis, it settled for LeCun/Pesenti/etc, because despite its access to huge resources & data & industrial applications, FB/Zuckerberg were already widely hated. This description takes up a whole paragraph, and you probably still don’t know a good chunk of what I’m referring to with Darkforest or Zuckerberg NIPS shopping; and if you disagreed, asking why this doesn’t characterize OA or GB or Baidu equally well, answering would take another set of paragraphs, and so on. But if you already know all this, as you should if you are going to discuss these things, then the use of ‘FAIR’ immediately conjures the family of plausible complex-failure disaster scenarios where eg. DM or another top player succeeds in not-instantly-failed AGI, and it gets controlled by the DM safety board overruling any commercial pressures, only for that to trigger Zuck into ordering a replication effort to leapfrog the over-cautious ‘dumb fucks’ at DM, a replication effort which no internal mechanism or oversight exists to oppose (and nobody who cares can oppose the CEO/controlling shareholder himself), which succeeds because the mere fact of AGI success tends to indicate what the solution is, but which then fails.)
* This is one reason I invested so much effort in link-icons on gwern.net, incidentally. Research papers are not published in isolation, and there are lots of ‘flavors’ and long-standing projects and interests. Much as an expert in a field can often predict who wrote a paper despite blinding (ex ungue leonem), you can often tell which lab wrote a paper even with all affiliation information scrubbed, because the authors bring with them so much infrastructure, approach, preference, and interest. I can predict a remarkable amount of, say, the Gato paper simply from following Decision Transformer research + the title + ‘DeepMind’ rather than ‘Google’ or ‘OpenAI’ or ‘BAIR’. (For a natural experiment demonstrating lab flavor, simply compare Gato to its fraternal twin paper, GB’s Multi-Game Decision Transformers.) So I find it very useful to have a super-quick icon denoting a Facebook vs OpenAI vs DM vs Nvidia etc. paper. You, the ordinary reader, may not find such annotations that useful yet, but you will if you git gud.
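(The mechanism behind such link-icons need not be anything fancy. What follows is a minimal sketch of the idea only, not gwern.net’s actual implementation, assuming a hypothetical LAB_ICONS table mapping hostnames to CSS classes that draw a small logo after the link text:)

```typescript
// Sketch: tag each link with a per-lab icon class based on its hostname,
// so a CSS rule can render the lab's logo after the link text.
// LAB_ICONS and the class names are illustrative assumptions.
const LAB_ICONS: Record<string, string> = {
  "openai.com": "link-icon-openai",
  "deepmind.com": "link-icon-deepmind",
  "ai.facebook.com": "link-icon-fair",
  "nvidia.com": "link-icon-nvidia",
};

function annotateLinks(root: ParentNode = document): void {
  for (const a of Array.from(root.querySelectorAll<HTMLAnchorElement>("a[href]"))) {
    // Normalize the hostname ("www.openai.com" -> "openai.com").
    const host = new URL(a.href).hostname.replace(/^www\./, "");
    const icon = LAB_ICONS[host];
    if (icon) a.classList.add(icon);
  }
}

annotateLinks();
```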