[After writing this down, I became more uncertain about how much weight to give it. Still, I think it’s a valid consideration to keep on your list of considerations.]
“AI alignment”, “AI safety”, “AI (X-)risk”, “AInotkilleveryoneism”, and “AI ethics” have each come to be associated with somewhat specific categories of issues. When somebody says “we should work on (or invest more in, or spend more on) AI {alignment, safety, X-risk, notkilleveryoneism, ethics}”, they communicate that they are concerned about those issues and think that deliberate work on them is required, or else those issues are probably not going to be addressed (to a sufficient extent, within the relevant time, &c.).
“AI outcomes” is even broader and more inclusive than any of the above (about the only ways to broaden it further would be to say “work on AI being good” or, in another direction, work on “technology/innovation outcomes”), but it also waters down the issue even more. Now all you’re saying is “AI is not going to be (sufficiently) good by default”, with different “AI outcomes” people having very different ideas about what makes AI unlikely to be (sufficiently) good by default.
It feels like we’re broadening our scope of consideration in order to (1) ensure we’re not missing anything, and (2) facilitate coalition building (moral trade?). While this is valid, it risks (1) failing to operate at an appropriate level of abstraction, and (2) diluting our stated concerns so much that coalition building becomes too difficult, because the different people/groups endorsing those concerns each bring their own interpretations, beliefs, and value systems. (Something something find an optimum, but also be ready and willing to update where you think the optimum lies when the situation changes?)
But how would we do high-intensity, highly focused research on something intentionally restructured as an “AI outcomes” research question? I don’t think this is pointless: agency research might naturally talk about outcomes in a way that generalizes across a variety of people’s concerns. In particular, ethics and alignment seem like an unnatural split, and “outcomes” seems like a refactor that could select important problems from both AI autonomy risks and human agency risks. I have more specific threads I could talk about.