But the evils weren’t a price that was paid in exchange for the good.
The evils were a price that was paid to deny, delay, and weaken the good.
Yep, that is not in conflict with what I am saying. Or like, I am saying that really a lot of the evils were just evils, and genuinely corrupting, and quite plausibly you should have spent your time righting them. But that doesn’t mean that it would have been right to stop the whole thing.
(To again make the analogy to my current life choices clear: It is clear to me that people around me are doing, in addition to a bunch of stuff that is clearly good and might give humanity a shot at navigating the next century successfully, a bunch of stuff that is really bad and is against the good and is not intrinsically tied to the good. Should I stay and try to fix it, or should I abandon the project and try to build something new? How much evil should you tolerate in the pursuit of goodness? Clearly it can’t be none!)
Suppose you know that your friend is a brilliant doctor; and also that your friend’s parent brutally abused her throughout her childhood.
A good friend would not say, “The abuse was worth it, because she is a brilliant doctor.”
A good friend might say, instead, “I am glad that she survived the abuse; and that it did not prevent her from achieving greatness.”
> It is clear to me that people around me are doing, in addition to a bunch of stuff that is clearly good and might give humanity a shot at navigating the next century successfully, a bunch of stuff that is really bad and is against the good and is not intrinsically tied to the good.
Um … are we talking about capabilities research, or something else?
I mean, if you were to know that a great AI-safety genius was going around committing serious crimes that harm people in the community, then yes, you should be taking steps to stop it and bring them to justice, even if that would impair their AI-safety work.
> Um … are we talking about capabilities research, or something else?
We are talking about capabilities research, in part. We are also talking about stuff like FTX and things adjacent to it (of which there has been a good amount in my retelling of this ecosystem!).
> I mean, if you were to know that a great AI-safety genius was going around committing serious crimes that harm people in the community, then yes, you should be taking steps to stop it and bring them to justice, even if that would impair their AI-safety work.
I mean, sure, I am probably the last person anyone could accuse of “not having tried to take steps to bring the relevant people to justice”. But if the bringing-people-to-justice step isn’t working, then maybe you want to think about quitting.
Okay, good. That’s what I thought, I just wanted to make sure I wasn’t making a not-knowing-what-the-conversation-was-really-about error. (“Never give anyone wise advice unless you know exactly what you’re both talking about. Got it.”)