I was just thinking about writing a post that overlaps with this, inspired by a recent Drexler post. I’ll turn it into a comment.
Leopold Aschenbrenner’s framing of a drop-in remote worker anthropomorphizes AI in a way that risks causing AI labs to make AIs more agenty than is optimal.
Anthropomorphizing AI is often productive. I use that framing a fair amount to convince myself to treat AIs as more capable than I’d expect if I thought of them as mere tools. I collaborate better when I think of the AI as a semi-equal entity.
But it feels important to be able to switch back and forth between the tool framing and the worker framing. Both framings have advantages and disadvantages. The ideal framing likely lies somewhere in between, but it seems harder to articulate.
I see some risk of AI labs turning AIs into agents when, if they were less focused on replacing humans, they might lean more toward Drexler's (safer) services model.
Please, AI labs, don’t anthropomorphize AIs without carefully considering when that’s an appropriate framing.
I would like to extend this slightly by switching perspective to the other side of the coin. The drop-in remote worker is not so much a problem of anthropomorphizing the AI as of anthropomorphizing the need in the first place. Companies create roles with the expectation that people will fill them, but that is a habit of the org, not the shape of the underlying need.
Adoption is being slowed considerably by people asking for AI to be like a person, so that they can ask that person to do some task. Most companies and people are not asking more directly for an AI to meet a need. Figuring out how to do that is a problem in its own right, and there hasn't been much call for it to date.