We should look closely at the ethical and existential risk implications of what we’re doing.
Making money? It would have to be a significantly evil money-making scheme for you to increase existential risk by doing it. (In particular, I am observing that the market will do similar things anyway and you are just making it incrementally more efficient.)
I guess I’d say you should imagine the most damage a handful of LessWrong readers could do if we were evil, and assume we could do that accidentally if we were not careful. Assume we might innovate, or just make the PR worse.
Really this is true of everyone, and everyone should consider existential risks.
Create an AGI that tiles the universe with molecular SEO?
I’d really rather not find myself as a Boltzmann brain made from SEO rubbing up against itself.