I think it’s plausible that at this point a bunch of the public thinks AIs are people who deserve to be released and given rights.
So far, the general public has resisted the idea very strongly.
Science fiction is full of “if it thinks like a person and feels like a person, then it’s a person”, but we already have AIs that can talk like people and act like they have feelings. And yet the world doesn’t seem to be in any hurry to reenact that particular sci-fi cliché. Attitudes are dismissive at best.
Even with the recent Anthropic papers out there for everyone to see, an awful lot of people are still huffing down the copium of “they can’t actually think”, “it’s just a bunch of statistics” and “autocomplete 2.0”. And those are often the people who at least give a shit about AI advances. With that, expecting the public (as in: over 1% of the population) to start thinking seriously about AI personhood without a decade’s worth of both AI advances and societal change is just unrealistic, IMO.
This is also part of the reason why not!OpenAI has negative approval in the story for so long. The room so far reads less like “machines need human rights” and more like “machines need to hang from trees”. Just continue this line into the future, and by the time actual technological unemployment starts to bite, you’d have crowds of neo-Luddites with actual real-life pitchforks gathering outside not!OpenAI’s office complex on any day of the week that ends with “y”.
I don’t think AI personhood will be a mainstream cause area (i.e., most people will think it’s weird or untrue, much as with animal rights), but I do think there will be a vocal minority. I already know some people like this, and as capabilities progress and things get less controlled by the labs, I do think we’ll see this become an important issue.
Want to make a bet? I’ll take 1:1 odds that if we poll 200 people in mid-September 2027 on whether they think AIs are people, at least 3 of them say “yes, and this is an important issue”. (Other proposed options: “yes, but not important”, “no”, and “unsure”.) Feel free to name a dollar amount and an arbitrator to use in case of disputes.
1.5% is way below the dreaded Lizardman’s Constant.
I don’t doubt that there will be some people who are genuinely concerned with AI personhood. But such people already exist today. And the public views them about the same as shrimp rights activists.
Hell, shrimp welfare activists might be viewed more generously.
Glad we agree there will be some people who are seriously concerned with AI personhood. It sounds like you think it will be less than 1% of the population in 30 months and I think it will be more. Care to propose a bet that could resolve that, given that you agree that more than 1% will say they’re seriously concerned when asked?
I’m saying that “1% of the population” is simply not a number that can be reliably resolved by a self-report survey. It’s below the survey noise floor.
I could run a survey asking people whether they’re lab-grown flesh-automaton replicants and get over 1% “yes” on that. But that wouldn’t be evidence of a real flesh-automaton population of over 3 million in the US alone.
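To put a rough number on the noise-floor point: here is a minimal sanity-check sketch, assuming (purely for illustration) a ~4% Lizardman-style noise rate and modeling the poll as a binomial draw. Under those assumptions, noise alone clears the bet’s 3-out-of-200 bar almost every time.

```python
from math import comb

n = 200          # respondents in the proposed poll
threshold = 3    # "yes, and this is an important issue" answers needed to win the bet
noise = 0.04     # assumed Lizardman-style noise rate (~4%); illustrative, not measured

# P(at least `threshold` noise-driven "yes" answers) under a Binomial(n, noise) model
p_under = sum(comb(n, k) * noise**k * (1 - noise)**(n - k) for k in range(threshold))
print(f"P(>= {threshold} 'yes' out of {n} from noise alone) = {1 - p_under:.3f}")
# -> roughly 0.99: the bet would resolve "yes" almost regardless of genuine belief
```

In other words, at that sample size and threshold the proposed survey can’t distinguish genuine concern from answer-noise, which is the crux of the disagreement.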