I haven’t followed this in great detail, but I do remember hearing from many AI policy people (including people at the UK AISI) that such commitments had been made.
It’s plausible to me that this was a case of “miscommunication” rather than “explicit lying.” I hope someone who has followed this more closely can provide details.
But note that I personally think AGI labs have a responsibility to dispel widely believed myths. It would shock me if OpenAI, Anthropic, or Google DeepMind were unaware that people (including people in government) believed they had made this commitment. If you know that many people think you committed to sending them your models, and your response is “well, technically we never said that, but let’s leave it ambiguous, and then if we defect later we can just say we never committed,” I still think it’s fair for people to be disappointed in the labs.
(I do think this form of disappointment should not be conflated with “you explicitly said X and went back on it”, though.)
Could consider “frontier AI watch”, “frontier AI company watch”, or “AGI watch.”
Most people in the world (including policymakers) have a much broader conception of AI: AI means machine learning; AI is the thing that thousands of companies are using and thousands of academics are developing; and so on.