Yes, humans often have these problems, though not as much as Claude, I'd say; I think Claude would have been fired by now if it were a human employee.
But also, the situation is not in fact fine with humans, and that's my point. Precisely because lots of humans have these problems, it's very common for nonprofits to end up drifting far from their original vision/mission, especially as they grow a lot and the world changes around them. Indeed, I'd argue it's the default outcome in those circumstances. The 50x speed advantage would massively exacerbate this.
I agree vision drift happens with humans, and it would also happen with AIs as they exist today. I don't feel like this is some massive risk that has to be solved, though I tentatively agree the world would be better if we did solve it (though imo that's not totally obvious, since solving it would increase concentration of power). I thought you were trying to make a claim about AI notkilleveryoneism.
I mildly disagree that the 50x speed advantage makes a huge difference, as opposed to e.g. having 100x the number of employees, as some corporations and governments do. I do think it makes a bit of a difference.
I don't quite know what you mean when you say Claude would be fired if it were a human employee. What exactly is the counterfactual? Empirically, people find it useful to have Claude and will pay for it despite the behaviors you name. From a legal perspective it's trivial to fire AIs but harder to fire humans. I agree that if Claude were as expensive per token as a human + took as long to onboard as a human + took as long to produce large amounts of code as a human + had to take breaks like a human + [...], while otherwise having the same kind of performance, then almost no one would use Claude.