Thanks, I will check out that paper. I hope it discusses the ways some kinds of AI rights could reduce AI takeover risk, such as by making a misaligned AI's cooperative option more appealing. Those reasons have been largely overlooked until recently.
I will note that it would seem very wrong to apply the standard of strong alignment when deciding whether to grant a group of humans rights. For example, if we granted the next generation of people rights only if their values were sufficiently similar to our own generation's, that would not be acceptable.
It would be acceptable to limit their rights if they will not respect the rights of others (e.g., imprisonment), but not to make basic rights conditional on a strong degree of value alignment.
I do think the case of AI is different for many reasons. It will be much more ambiguous whether AIs have the cognitive faculties that warrant rights. There will be an unusually large risk that their values differ significantly from those of all previous human generations, and that they do not care about the rights of existing humans. And we have been developing and adjusting our cultural handoff process across human generations for thousands of years, whereas this is our first (and last!) try at handing off to AI.