but how would we do high-intensity, highly focused research on something intentionally restructured as an “AI outcomes” research question? I don’t think this is pointless: agency research might naturally talk about outcomes in a way that generalizes across a variety of people’s concerns. In particular, ethics and alignment seem like an unnatural split, and “outcomes” seems like a refactor that could select important problems from both AI autonomy risks and human agency risks. I have more specific threads I could talk about.