You talk about personhood in a moral and technical sense, which is important, but I think the legal and economic senses of personhood deserve attention too. Let me try to explain.
I work for a company where there’s a lot of white-collar busywork going on. I’ve come to realize that the value of this busywork lies not so much in the work itself (indeed, a lot of it is done by fresh graduates and interns with little to no experience) as in the fact that the company can bear responsibility for the work thanks to its good reputation (something something respectability cascades), i.e., “Nobody ever got fired for hiring them”. There is not much incentive to automate any of this work, even though I can personally attest that there is a lot of low-hanging fruit. (A respected senior colleague of mine plainly stated to me, privately, that most of it is bullshit jobs.)
By my estimation, “bearing responsibility” in the legal and economic sense means that an entity can be punished, where being punished means that something happens to it which disincentivizes it, and other entities, from behaving the same way again. (For what it’s worth, I think much of our moral and ethical intuitions about personhood can be derived from this definition.) AI cannot function as a person of any legal or economic consequence (and by extension, moral or ethical consequence) if it cannot be punished or learn in that way. I assume it eventually will be able to, but until then most of these bullshit jobs will stay virtually untouchable, because someone needs to bear responsibility. How does one punish an API? In practice, we currently only punish the person serving the API or the person using it.
There are two ways I see to overcome this. One is that AI eventually becomes a drop-in replacement for human agents in the sense that it can bear responsibility and be punished as described above. With current systems this is clearly not (yet) the case.
The other is that the combination of cost, speed and quality becomes too good to ignore, i.e., that we reach a point where we can say “Nobody ever got fired for using AI” (on a task-by-task basis). This depends on the trade-offs we’re willing to make between the different aspects of using AI for a given task, such as cost, speed, quality, reliability and interpretability. This is already driving the use of AI for some tasks where the trade-off is good enough, while for others it’s not nearly good enough, or still too risky to try.