I still think it’s weird that many AI safety advocates will criticize labs for putting humanity at risk while simultaneously being paid users of their products and writing reviews of their capabilities. Like, I get it, we think AI is great as long as it’s safe, we’re not anti-tech, etc., but is “don’t give money to the company that’s doing horrible things” such a bad principle?
“I find Lockheed Martin’s continued production of cluster munitions to be absolutely abhorrent. Anyway, I just unboxed their latest M270 rocket system and I have to say I’m quite impressed...”
The argument people make is that LLMs improve the productivity of people’s safety research so it’s worth paying. That kinda makes sense. But I do think “don’t give money to the people doing bad things” is a strong heuristic.
I’m a pretty big believer in utilitarianism, but I also think people should be more wary of consequentialist justifications for doing bad things. Eliezer talks about this in Ends Don’t Justify Means (Among Humans); he’s also written some (IMO stronger) arguments elsewhere, but I don’t recall where.
Basically, if I had a nickel for every time someone made a consequentialist argument for why doing a bad thing was net positive, and then it turned out to be net negative, I’d be rich enough to diversify EA funding away from Good Ventures.
I have previously paid for LLM subscriptions (I don’t have any currently), but I think I was not giving enough consideration to the “ends don’t justify means among humans” principle, so I will not buy any subscriptions in the future.