This post briefly presents three ways that power could become centralized in a world with <@Comprehensive AI Services@>(@Reframing Superintelligence: Comprehensive AI Services as General Intelligence@); argues that, under risk aversion, “logical” risks can be more concerning than physical risks because they are more correlated; proposes combining human imitations and oracles to remove the human from the loop while remaining competitive; and suggests doing research aimed at generating evidence about how difficult a particular strand of research is.
Planned summary for the Alignment Newsletter: