The Frontier Model Forum exists in part to solve exactly this problem
...
Support the Frontier Model Forum.
My sense is that industry groups like this vary pretty widely across industries in their practical effect. Some are just a cheap way to get good PR; others are actually powerful and independent bodies. I genuinely have no idea where in that range the FMF falls, but I think it’s worth noting that the mere existence on paper of such a body doesn’t tell us much.
Some factors that seem relevant:
How many full-time employees does the FMF have? Who do they answer to?
How many employees at frontier labs have FMF-related work as all or part of their job responsibilities?
What budget does the FMF have?
It looks like the AI Safety Fund is nominally under the auspices of the FMF; would it have happened if the FMF didn’t exist?
Who sits on the nonprofit board?
I note that the commitment member orgs make (other than financial support for the FMF) is ‘participating in FMF workshops and activities.’ What workshops & activities have there been in the past three years, and what impact have they had?
I want to be very clear that I’m not accusing the FMF of being powerless or safetywashing; they might be terrific, and I very much hope they are! I’m just saying that (especially given the obvious incentives for safetywashing), I would personally want to know more about them before deciding whether they warranted support from the AI safety community.
If anyone reading this is an AI safety researcher at one of the member labs and has opinions about how effective and independent the FMF is, I’d love to hear them!
You’re totally right to point this out, thank you! I found the FMF quite late on while writing, and my research was mostly limited to their own materials (e.g. the announcement of their facilitated agreement with frontier labs). I probably shouldn’t have gone as far as advocating support for a specific organisation without more independent verification of its effectiveness at addressing the issues in this post, especially since the full agreement isn’t public, meaning I couldn’t dig into any specifics (e.g. how it will be enforced, if at all).
That said, I think bodies like the FMF could play an important coordination role between frontier labs if they prove effective, and I’m glad they exist. For example, it seems plausible they’re well-positioned to facilitate private inter-lab communication channels through which safety research can be shared without requiring full publication, which could lower the commercial-risk barrier to sharing sensitive research. I imagine decision-makers at labs (e.g. legal/comms) might be more willing to sign off on x-risk research being shared with other labs but not the wider public[1], since there’s less of a potential PR concern.
Perhaps a better call-to-action would have been “engage with the FMF and similar bodies, and push for them to be a success”—thanks for making this point!
[1] Not that I’m endorsing this kind of work being kept from the public...