Contents:
AI Policy
  - Delayed Singularity: an overview of arguments for and against superintelligence postponement
  - Superintelligence skepticism
  - What can we learn from how democracies work, for AGI alignment?
Superintelligence
  - Easy goals: controlling superintelligence using non-optimizing utility functions
  - Unbasic AI drives: when does evolutionary pressure stop?
  - Personal strategies in the AGI century
Risk quantification
  - Quantifying AGI risk
  - What would be residual existential risks in case aligned AGI is developed technically?