Intro to AI safety

Introduction

This section explains and builds a case for existential risk from AI. It’s too short to give more than a rough overview, but it links to other aisafety.info articles where more detail is available.

As an alternative, we also offer a self-contained narrative introduction.

Summary

AI is advancing fast

AI may attain human-level capability soon

Human-level is not the limit

The road from human-level to superintelligent AI may be short

AI may pursue goals

AI’s goals may not match ours

Different goals may bring AI into conflict with us

AI can win a conflict against us

Defeat may be irreversibly catastrophic

Advanced AI is a big deal even if we don’t lose control

If we get things right, AI could have huge benefits