I guess I’d say you should imagine the most damage a handful of LessWrong readers could do if we were evil, and assume we could do that accidentally if we were not careful. Assume we might innovate, or just make the PR worse.
Really this is true of everyone, and everyone should consider existential risks.
Create an AGI that tiles the universe with molecular SEO?
I’d really rather not find myself as a Boltzmann brain made of SEO content rubbing up against itself.