Strongly disagree with the framing and conclusions.
The message of “debug your map of reality before you hire armies of robots to print it on every atom you can reach” is sound, and I don’t think anyone disagrees with that. However, several arguments in the post read like straw men:
“When advocates for AI consciousness and rights pattern-match from their experience with animals and humans, they often import assumptions that don’t fit...”
Animal advocates’ position can be stated simply as working to reduce felt suffering, which I think maps well to AI consciousness considerations.
“Another group coming with strong priors are ‘legalistic’ types. Here, the prior is AIs are like legal persons, and the main problem to solve is how to integrate them into the frameworks of capitalism. They imagine a future of AI corporations, AI property rights, AI employment contracts.”
Is this true? An o3 prompt asking “what are the main focuses of AI ‘legalistic’ types?” returns common-sense focus areas such as analysing AI risk, monitoring compliance, and understanding civil and product liability for AI systems.
The author uses these positions to conclude with:
“What we can do is weaken human priors. Try to form ontologies which fit AIs, rather than make AIs fit human and animal mold.”
which to me sets off big alarm bells: one existential AI risk we need to account for is Gradual Disempowerment [the author of this post is also an author of the Gradual Disempowerment paper; how has this disconnect occurred?]. Active messaging to weaken human priors is concerning to me, and it needs much stronger justification and specific implementation details.