Desired articles on AI risk?

I’ve once again updated my list of forthcoming and desired articles on AI risk. It currently names 17 forthcoming articles and books about AGI risk, along with 26 desired articles that I wish researchers were writing right now.

But I’d like to hear your suggestions, too. Which articles on AGI risk, not already on the list as “forthcoming” or “desired,” would you most like to see written?

Book/article titles reproduced below for convenience...

Forthcoming

  • Superintelligence: Groundwork for a Strategic Analysis by Nick Bostrom

  • Singularity Hypotheses, edited by Amnon Eden et al.

  • Singularity Hypotheses, Vol. 2, edited by Vic Callaghan

  • “General Purpose Intelligence: Arguing the Orthogonality Thesis” by Stuart Armstrong

  • “Responses to AGI Risk” by Kaj Sotala et al.

  • “How we’re predicting AI… or failing to” by Stuart Armstrong & Kaj Sotala

  • “A Comparison of Decision Algorithms on Newcomblike Problems” by Alex Altair

  • “A Representation Theorem for Decisions about Causal Models” by Daniel Dewey

  • “Reward Function Integrity in Artificially Intelligent Systems” by Roman Yampolskiy

  • “Bounding the impact of AGI” by Andras Kornai

  • “Minimizing Risks in Developing Artificial General Intelligence” by Ted Goertzel

  • “Limitations and Risks of Machine Ethics” by Miles Brundage

  • “Universal empathy and ethical bias for artificial general intelligence” by Alexey Potapov & Sergey Rodionov

  • “Could we use untrustworthy human brain emulations to make trustworthy ones?” by Carl Shulman

  • “Ethics and Impact of Brain Emulations” by Anders Sandberg

  • “Envisioning The Economy, and Society, of Whole Brain Emulations” by Robin Hanson

  • “Autonomous Technology and the Greater Human Good” by Steve Omohundro

Desired

  • “AI Risk Reduction: Key Strategic Questions”

  • “Predicting Machine Superintelligence”

  • “Self-Modification and Löb’s Theorem”

  • “Solomonoff Induction and Second-Order Logic”

  • “The Challenge of Preference Extraction”

  • “Value Extrapolation”

  • “Losses in Hardscrabble Hell”

  • “Will Values Converge?”

  • “AI Takeoff Scenarios”

  • “AI Will Be Maleficent by Default”

  • “Biases in AI Research”

  • “Catastrophic Risks and Existential Risks”

  • “Uncertainty and Decision Theories”

  • “Intelligence Explosion: The Proportionality Thesis”

  • “Hazards from Large Scale Computation”

  • “Tool Oracles for Safe AI Development”

  • “Stable Attractors for Technologically Advanced Civilizations”

  • “AI Risk: Private Projects vs. Government Projects”

  • “Why AI researchers will fail to hit the narrow target of desirable AI goal systems”

  • “When will whole brain emulation be possible?”

  • “Is it desirable to accelerate progress toward whole brain emulation?”

  • “Awareness of nanotechnology risks: Lessons for AI risk mitigation”

  • “AI and Physical Effects”

  • “Moore’s Law of Mad Science”

  • “What Would AIXI Do With Infinite Computing Power and a Halting Oracle?”

  • “AI Capability vs. AI Safety”