I mean yeah, eventually something like this would be appropriate—when Claude really is trustworthy, wiser, etc. The problem is, I don’t trust Anthropic’s judgment about when that invisible line has been crossed. I expect them to be biased towards thinking Claude is trustworthy. (And I’d say similar things about every other major AI company.)