AI Risk & Opportunity: Questions We Want Answered

Part of the series AI Risk and Opportunity: A Strategic Analysis.

(You can leave anonymous feedback on posts in this series here. I alone will read the comments, and may use them to improve past and forthcoming posts in this series.)

This post provides a list of questions about AI risk strategy — questions we want answered. Please suggest additional questions (a paragraph of explanation is preferred but not necessary); I may add them to the list. You can submit questions anonymously here.

Also, please identify which 3-5 of these questions you think are low-hanging fruit for productive strategic analysis on Less Wrong.

The list is in no particular order, but question numbers will remain unchanged (so that you can reliably refer to questions by their number):

  1. What methods can we use to predict technological development? We don’t yet have reliable methods for long-term technological forecasting, but not every candidate method has been examined. Perhaps technology futures have a good track record. Perhaps we could study historical technological predictions and look for patterns suggesting that certain character traits and contexts lend themselves to accurate predictions. Perhaps there are creative solutions we haven’t thought of yet.

  2. Which kinds of differential technological development should we encourage, and how? Should we “push” on whole brain emulation (WBE), or not? Are some kinds of AI research risk-reducing, and other kinds risk-increasing? How can we achieve such effects, if they are desired?

  3. Which open problems are safe to discuss, and which are potentially dangerous? AI risk research may itself produce risk in some cases, in the form of information hazards (Bostrom 2011). Is it safe to discuss decision theories? Acausal trade? Certain kinds of strategic questions, for example involving government intervention?

  4. What can we do to reduce the risk of an AI arms race?

  5. What can we do to raise the “sanity waterline,” and how much will this help?

  6. What can we do to attract more funding, support, and research to x-risk reduction and to the specific sub-problems of successful Singularity navigation?

  7. Which interventions should we prioritize?

  8. How should x-risk reducers and AI safety researchers interact with governments and corporations? Does Drexler’s interaction with the U.S. government regarding molecular nanotechnology provide any lessons for how AI risk researchers should act?

  9. How can optimal philanthropists get the most x-risk reduction for their philanthropic buck?

  10. How does AI risk compare to other existential risks?

  11. Which problems do we need to solve, and which ones can we have an AI solve?

  12. How can we develop microeconomic models of WBEs and self-improving systems?

  13. How can we be sure a Friendly AI development team will be altruistic?

  14. How hard is it to create Friendly AI?

  15. How strongly does progress in neuroscience feed into AI development, as opposed to whole brain emulation?

  16. Is there a safe path to uploads, one that doesn’t turn them into neuromorphic AI?

  17. How much must we spend on security when developing a Friendly AI team?

  18. What’s the best way to recruit talent toward working on AI risks?

  19. How difficult is stabilizing the world so we can work on Friendly AI slowly?

  20. How hard will a takeoff be? To what degree is “intelligence” (as efficient cross-domain optimization) a matter of content vs. algorithms? How much does takeoff depend on slow, real-world experiments?

  21. What is the value of strategy vs. object-level progress toward a positive Singularity?

  22. What different kinds of Oracle AI are there, and are any of them both safe and feasible?

  23. How much should we be worried about “metacomputational hazards”? E.g. should we worry about nonperson predicates? Oracle AIs engaging in self-fulfilling prophecies? Acausal hijacking?

  24. What improvements can we make to the way we go about answering strategy questions? Wei Dai’s notes on this question: “For example, should we differentiate between ‘strategic insights’ (such as Carl Shulman’s insight that WBE-based Singletons may be feasible) and ‘keeping track of the big picture’ (forming the overall strategy and updating it based on new insights and evidence), and aim to have people specialize in each, so that people deciding strategy won’t be tempted to overweight their own insights? Another example: is there a better way to combine probability estimates from multiple people?” (A small illustrative sketch of two standard probability-pooling rules appears after this list.)

  25. How do people in other fields answer strategy questions? Wei Dai’s notes on this question: “Is there such a thing as a science or art of strategy that we can copy from (and perhaps improve upon with ideas from x-rationality)?”
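
As a small illustration of the probability-combining sub-question in #24, here is a minimal Python sketch of two standard aggregation rules: linear opinion pooling (a weighted arithmetic mean) and pooling in log-odds space. Neither is offered as the answer to Wei Dai’s question, and the estimates below are hypothetical; the sketch only makes concrete what “combining probability estimates from multiple people” can mean.

```python
# Illustrative only: two standard rules for aggregating probability estimates
# from several people (question 24). The numbers below are made up.

import math

def linear_pool(probs, weights=None):
    """Weighted arithmetic mean of the individual probability estimates."""
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    return sum(w * p for w, p in zip(weights, probs))

def log_odds_pool(probs, weights=None):
    """Weighted average in log-odds space, mapped back to a probability."""
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    pooled = sum(w * math.log(p / (1.0 - p)) for w, p in zip(weights, probs))
    return 1.0 / (1.0 + math.exp(-pooled))

# Three people estimate the probability of the same event:
estimates = [0.9, 0.7, 0.2]
print(linear_pool(estimates))    # ~0.600
print(log_odds_pool(estimates))  # ~0.635
```

One visible design difference: the log-odds pool is more sensitive to confident (near-0 or near-1) estimates than the linear pool, which is one reason the choice of rule matters.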

[more questions to come, as they are posted to the comments section]