I’m not especially well-versed in AI governance and politics, but I tend to see this as a good sign. Some general thoughts, from the standpoint of a non-expert:
1-I think saying something akin to “we’ll discuss and decide on those risks together” is a good signal. It frames the AI Safety community as collaborative, and that’s (in my view) accurate: there’s a positive-sum relationship between the different factions. IASEAI is another example of the sort of bridging I think is necessary to gain legitimacy. It’s a better signal than [Shut up and listen] “ban AGI” [this is not to be discussed] (where the brackets represent what opponents may infer from the ‘raw’ Ban AGI message). People do change their minds, if you let them speak theirs first.
2-I think this is the ‘right level’ of action. I don’t believe in heroes stepping in at the last minute to save the day inside frontier companies. I’m somewhat worried about the “large civil movement for AI Safety” agenda, in that it may turn out to be less controllable than expected. This sort of intervention is “broad, but focused on high-profile people”, which seems to limit slip-ups while carrying a greater degree of commitment than the CAIS statement.
3-“Red Lines” are a great concept (as long as someone attaches liability down the road) and offer ‘customization’. This sounds empowering for whoever sits at the table: they get to discuss where to set the red line, which shows that their autonomy is valued.
Caveat: I informally knew about the red lines project before they were published, so this may bias my impression.