In my opinion, this position was sensible when the labs branded themselves as MIRI-like organizations with an added emphasis on technical experimentation. But by the time it became clear that these labs' principal reward function was not their stated preference for alignment research (which OAI was explicitly communicating under the banner of safety), our personal semantic landscapes had already been trained enough by that narrative that we missed the major conflict of interest before the norm was established.
They will continue to leverage their epistemic asymmetry and information advantage to claim that having any other group at the table is a fruitless endeavour, and to invoke the risk of foreign adversarial advantage to maintain that position strategically. Given that all regulation is domestic, they end up regulating themselves (which your question even implies), which IMO can be a worst-case scenario from a humanist/existential-risk perspective.