“The Center for AI Standards and Innovation (CAISI) at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has published a Request for Information (RFI) seeking insights from industry, academia, and the security community regarding the secure development and deployment of AI agent systems.”
“The RFI poses questions on topics including:
Unique security threats affecting AI agent systems, and how these threats may change over time.
Methods for improving the security of AI agent systems in development and deployment.
Promise of and possible gaps in existing cybersecurity approaches when applied to AI agent systems.
Methods for measuring the security of AI agent systems and approaches to anticipating risks during development.
Interventions in deployment environments to address security risks affecting AI agent systems, including methods to constrain and monitor the extent of agent access in the deployment environment.
Input from AI agent deployers, developers, and computer security researchers, among others, will inform future work on voluntary guidelines and best practices related to AI agent security. It will also contribute to CAISI’s ongoing research and evaluations of agent security. Respondents are encouraged to provide concrete examples, best practices, case studies and actionable recommendations based on their experience with AI agent systems. The full RFI can be found here.”
This seems like a good opportunity to positively influence future policy and voluntary guidelines on agentic AI: CAISI Issues Request for Information About Securing AI Agent Systems | NIST. I’ll definitely be filling this out.