I am worried that many of your suggestions are anathema to open source. For example, your 'Favor structured access' suggestion is incompatible with the Open Source Definition maintained by the Open Source Initiative.
I would recommend suggestions that are more likely to be acted on and that play to the open source community's strengths. For example, I imagine the open source community would be receptive to, and especially good at, violet teaming.
On 'violet teaming': I think this phrase is a careless analogy that drags attention in an unhelpful direction, and I dislike the way it analogizes the activity to red-teaming. They are actually very different kinds of activities: they occur at different levels of abstraction and system scope, and they use different skills.
Working to make institutions and other systems more resilient seems great to me, but I wouldn't want to assume that using the same technology is necessarily the best way to do that. Set up a great 'resilience incubator' and let them use whatever tools and approaches they want!
Hi Chris,
Thank you for your comment. I am not entirely convinced that open-sourcing advanced AI is a good idea; as with nuclear technology, my preference is for such powerful technologies to remain difficult to access in order to mitigate potential risks.
That being said, I agree that it’s important to explore solutions that align with the open source community’s strengths, such as violet teaming. I’ll consider your input as I continue to refine my thoughts on this matter.