AI Impacts Quarterly Newsletter, Jan-Mar 2023
News
AI Impacts blog
We moved our blog to Substack! We think this platform has many advantages, and we’re excited for the blog to live here. You can now easily subscribe to the blog to receive regular newsletters as well as various thoughts and observations related to AI.
AI Impacts wiki
All AI Impacts research pages now reside on the AI Impacts Wiki. The wiki aims to document what we know so far about decision-relevant questions about the future of AI. Our pages have always been wiki-like: updatable reference pages organized by topic. We hope that making them an actual wiki will make this clearer to everyone and make the pages easier for both us and readers to use. We are actively looking for ways to make the wiki even better, and you can help by sharing your thoughts in our feedback form or in the comments of this blog post!
New office
We recently moved to a new office that we are sharing with FAR AI and other partner organizations. We’re extremely grateful to the team at FAR for organizing this office space, as well as to the Lightcone team for hosting us over the last year and a half.
Katja Grace talks about forecasting AI risk at EA Global
At EA Global Bay Area 2023, Katja gave a talk titled "Will AI end everything? A guide to guessing," in which she outlined a way to roughly estimate the extent of AI risk.
AI Impacts in the Media
AI Impacts’ 2022 Expert Survey on Progress in AI was cited in an NBC Nightly News segment, an op-ed in Bloomberg, an op-ed in The New York Times, an article in Our World in Data, and an interview with Kelsey Piper.
Ezra Klein quoted Katja and separately cited the survey in his New York Times op-ed "This Changes Everything."
Sigal Samuel interviewed Katja for the Vox article "The case for slowing down AI."
Research and writing highlights
AI Strategy
“Let’s think about slowing down AI” argues that those who are concerned about existential risks from AI should think about strategies that could slow the progress of AI. (Katja)
“Framing AI strategy” discusses ten frameworks for thinking about AI strategy. (Zach)
“Product safety is a poor model for AI governance” argues that a common type of policy proposal is inadequate to address the risks of AI. (Rick)
“Alexander Fleming and Antibiotic Resistance” is a research report about early efforts to prevent antibiotic resistance and relevant lessons for AI risk. (Harlan)
Resisted technological temptations: how much economic value has been forgone for safety and ethics in past technologies?
“What we’ve learned so far from our technological temptations project” is a blog post that summarizes the Technological Temptations project and some possible takeaways. (Rick)
Geoengineering, nuclear power, and vaccine challenge trials were evaluated for the amount of value that may have been forgone by not using them. (Jeffrey)
Public awareness and opinions about AI
“The public supports regulating AI for safety” summarizes the results from a survey of the American public about AI. (Zach)
“How popular is ChatGPT?”: Part 1 looks at trends in AI-related search volume, and Part 2 refutes a widespread claim about the growth of ChatGPT. (Harlan and Rick)
The state of AI today: funding, hardware, and capabilities
“Recent trends in funding for AI companies” analyzes data about the amount of funding AI companies have received. (Rick)
“How much computing capacity exists in GPUs and TPUs in Q1 2023?” uses a back-of-the-envelope calculation to estimate the total amount of compute that exists on all GPUs and TPUs; a generic sketch of this style of estimate follows this list. (Harlan)
“Capabilities of state-of-the-art AI, 2023” is a list of some noteworthy things that state-of-the-art AI can do. (Harlan and Zach)
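For readers curious about what such a back-of-the-envelope estimate looks like in practice, here is a minimal sketch. The chip categories, unit counts, and per-chip FLOP/s below are placeholder assumptions for illustration only, not figures from the wiki page; consult the page itself for the actual inputs and result.

```python
# Illustrative back-of-the-envelope estimate of total GPU/TPU computing capacity.
# Every number here is a placeholder assumption, not a figure from the AI Impacts
# page; the point is only to show the structure of the calculation:
#   total FLOP/s ≈ sum over chip classes of (units in use × peak FLOP/s per unit)

hardware = {
    # chip class: (assumed units in use, assumed peak FLOP/s per unit)
    "data-center GPUs": (3e6, 3e14),   # e.g. ~3 million units at ~300 TFLOP/s
    "consumer GPUs":    (1e8, 2e13),   # e.g. ~100 million units at ~20 TFLOP/s
    "TPUs":             (1e6, 2e14),   # e.g. ~1 million units at ~200 TFLOP/s
}

total_flops = sum(units * flops_per_unit for units, flops_per_unit in hardware.values())
print(f"Illustrative total computing capacity: {total_flops:.1e} FLOP/s")
```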
Arguments for AI risk
“Is AI an existential risk to humanity?” is a partially complete page, still in progress, that summarizes various arguments for concern about existential risk from AI. Two of these arguments are examined more closely in “Argument for AI x-risk from competent malign agents” and “Argument for AI x-risk from large impacts”. (Katja)
Chaos theory and what it means for AI safety
“AI Safety Arguments Affected by Chaos” reasons about ways in which chaos theory could be relevant to predictions about AI, and “Chaos in Humans” explores the theoretical limits to predicting human behavior. The report “Chaos and Intrinsic Unpredictability” provides background, and a blog post summarizes the project. (Jeffrey and Aysja)
Miscellany
“How bad a future do ML researchers expect?” compares experts’ answers in 2016 and 2022 to the question “How positive or negative will the impacts of high-level machine intelligence on humanity be in the long run?” (Katja)
“We don’t trade with ants” (crosspost) disputes the common claim that advanced AI systems won’t trade with humans for the same reason that humans don’t trade with ants. (Katja)
Funding
We’re actively seeking financial support to continue our research and operations for the rest of the year. Previous funding allowed us to expand our research team and hold a summer internship program.
If you want to talk to us about why we should be funded or hear more details about what we would do with money, please write to Elizabeth, Rick, or Katja at [firstname]@aiimpacts.org.
If you’d like to donate to AI Impacts, you can do so here. (And we thank you!)
Image credit: Midjourney