The worst thing you could possibly do is work for the capabilities sections of existing AGI enterprises like Google Brain, DeepMind or OpenAI. This obviously includes the “AI alignment” companies that really just do capabilities research, and does not include the teams within these companies that do genuine alignment research. Dan Hendrycks has an excellent sequence here on how not to fuck this up. Use your critical thinking and ask simple questions to find out which position is which.
The second worst thing in terms of expected impact would be to work at or support pioneering ML research at a general company’s division, like Facebook’s, that isn’t necessarily explicitly trying to engineer AGI but where the day job effectively amounts to burning the capabilities commons.
Below that would be to work on straightforward ML tooling that has generalist applications; things like working on frameworks (PyTorch, wandb.ai, etc.), building computer hardware designed explicitly for ML, or working for companies like Scale.
Somewhere deep below that is making money for, or investing in, the parent companies that pioneer these things (Facebook, Microsoft, Google). Depending on the specifics, you can lump certain more general types of computer engineering work in here.
After that, though, I think if you just donate a reasonable fraction of your income to charity, or to AI alignment enterprises, you’re probably net positive. It’s really not that complicated: if you’re making or contributing to research that pushes the boundary of artificial intelligence, then… stop doing that.
if you’re making or contributing to research that pushes the boundary of artificial intelligence, then… stop doing that.
Given that we currently don’t know how to build aligned AI, solving the AI Alignment problem by definition is going to require research that pushes the bounds of artificial intelligence. The advice you’re giving is basically that anyone concerned about AI Alignment should self-select out of doing that research. Which seems like the opposite of help.
Given that we currently don’t know how to build aligned AI, solving the AI Alignment problem by definition is going to require research that pushes the bounds of artificial intelligence.
This is an extraordinarily vague statement that is technically true but doesn’t imply what you seem to think it means. There’s a fairly clear Venn diagram between alignment research and capabilities research. On one side of the diagram are most of the things that make OpenAI more money; on the other is Paul Christiano’s transparency stuff.
The advice you’re giving is basically that anyone concerned about AI Alignment should self-select out of doing that research.
If it’s the research that burns the capabilities commons while there are lots of alignment tasks left to be done, or people left to convince, then yes, that seems prudent.
There’s a fairly clear Venn diagram between alignment research and capabilities research.
This appears to be the crux of our disagreement. I do not think the Venn diagram is clear at all. But if I had to guess, I think there is a large overlap between “make an AI that doesn’t spew out racist garbage” and “make an AI that doesn’t murder us all”.