Just some thoughts:
We should gauge interest first and see what everyone’s needs are. I get the impression that we are aiming for low-but-scalable hours, high flexibility, high reliability, and relatively low income (high income if we can get it, of course).
If we know roughly who is involved, we can list out our skills, and start brainstorming things we might be good at collectively. We should make sure to look for non-obvious things.
If we have any confidence that we can act more rationally than normal, we should look for areas in which this could be an advantage (prediction markets?).
We should look closely at the ethical and existential risk implications of what we’re doing.
Making money? It would have to be a significantly evil money-making scheme for you to increase existential risk by doing it. (In particular, I am observing that the market will do similar things anyway, and you are just making it incrementally more efficient.)
I guess I’d say you should imagine the most damage a handful of LessWrong readers could do if we were evil, and assume we could do that accidentally if we were not careful. Assume we might innovate, or just make the PR worse.
Really this is true of everyone, and everyone should consider existential risks.
Create an AGI that tiles the universe with molecular SEO?
I’d really rather not find myself as a Boltzmann brain made from SEO rubbing up against itself.