This request for information does cover some of the possible existential risks in its first section. I am going to submit a few responses of my own, and I hope others will do the same.
1. What specific measures – such as standards, regulations, investments, and improved trust and safety practices – are needed to ensure that AI systems are designed, developed, and deployed in a manner that protects people’s rights and safety? Which specific entities should develop and implement these measures?
2. How can the principles and practices for identifying and mitigating risks from AI, as outlined in the Blueprint for an AI Bill of Rights and the AI Risk Management Framework, be leveraged most effectively to tackle harms posed by the development and use of specific types of AI systems, such as large language models?
3. Are there forms of voluntary or mandatory oversight of AI systems that would help mitigate risk? Can inspiration be drawn from analogous or instructive models of risk management in other sectors, such as laws and policies that promote oversight through registration, incentives, certification, or licensing?
4. What are the national security benefits associated with AI? What can be done to maximize those benefits?
5. How can AI, including large language models, be used to generate and maintain more secure software and hardware, including software code that incorporates best practices in design and coding and addresses post-deployment vulnerabilities?
6. How can AI rapidly identify cyber vulnerabilities in existing critical infrastructure systems and accelerate efforts to address them?
7. What are the national security risks associated with AI? What can be done to mitigate these risks?
8. How does AI affect the United States’ commitment to cut greenhouse gases by 50-52% below 2005 levels by 2030, and the Administration’s objective of net-zero greenhouse gas emissions no later than 2050? How does it affect other aspects of environmental quality?