Man, I’m a pretty committed utilitarian, but I feel like your ethical framework here seems way more naive consequentialist than I’m willing to be. “Don’t collaborate with evil” seems like a very clear Chesterton’s fence that I’d be very suspicious about removing. I think you should be really, really skeptical if you think you’ve argued yourself out of it.
Attending an event with someone else is not “collaborating with evil”!
I think people working at frontier companies are causing vastly more harm and are much stronger candidates for being moral monsters than Cremieux is (even given his recent, IMO quite dickish, behavior). I think it would be quite dumb of me to ban all frontier lab employees from Lightcone events, and my guess is you would agree with this even if you shared my beliefs about frontier AI labs.
Many events exist to negotiate and translate between different worldviews and perspectives, LessOnline more so than most. Yes, you should think about when you are supporting evil or giving it legitimacy, and it’s messy, but especially given your position at a leading frontier lab, I don’t think you would consider tenable a blanket “don’t collaborate with evil” position that extends as far as “attending an event with someone else”.
A possible reason to treat “this guy is racist in ways that both the broader culture and I agree is bad” more harshly than “this guy works on AI capabilities” is something like Be Nice Until You Can Coordinate Meanness—it makes sense to act differently when you’re enforcing an existing norm vs. trying to create a new one or just judging someone without engaging with norms.
A possible issue with that is that at least some broader-society norms about racism are bad actually and shouldn’t be enforced. I think a possible crux here is whether any norms against racism are just and worth enforcing, or whether the whole complex of such norms is unjust.
(For myself I take a meta-level stance approximately like yours but I also don’t really object to people taking stances more like eukaryote’s.)
To be clear, I’m responding to John’s more general ethical stance here of “working with moral monsters”, not anything specific about Cremieux. I’m not super interested in the specific situation with Cremieux (though generally it seems bad to me).
On the AI lab point, I do think people should generally avoid working for organizations that they think are evil, or at least think really carefully about it before they do it. I do not think Anthropic is evil—in fact I think Anthropic is the main force for good on the present gameboard.
I think John’s comment, in the context of this thread, was describing a level of “working with” in the reference class of “attending an event with”, not “working for an organization” with the usual commitments and relationships that entails, so extending it to that case feels a bit like a non-sequitur. He explicitly mentioned attending an event as the example of the kind of “working with” he was talking about, so responding only to a non-central case of it feels strange.
It is also the case that in our social circle, the position of “work for organizations that you think are very bad for the world in order to make it better” is a relatively common take (though on that point I think we two appear to be in rough agreement that it’s rarely worth it), and I hope you also advocate for your position when it’s harder to defend.
Given common beliefs about AI companies in our extended social circle, I think this illustrates pretty nicely why an attitude of association-policing that extends all the way to “mutual event attendance” would void a huge number of potential trades, opportunities for compromise, and surface area for changing one’s mind, and is a bad idea.
I agree that attending an event with someone obviously shouldn’t count as endorsement/collaboration/etc. Inviting someone to an event seems somewhat closer, though.
I’m also not really sure what you’re hinting at with “I hope you also advocate for it when it’s harder to defend.” I assume something about what I think about working at AI labs? I feel like my position on that was fairly clear in my previous comment.
Inviting someone to an event seems somewhat closer, though.
Yeah, in this case we are talking about “attending an event where someone you think is evil is invited to attend”, which is narrower, but it also strikes me as an untenable position (e.g. in the lab case, it would prevent me from attending almost any conference in the Bay Area I can think of wanting to attend, since almost all of them routinely invite frontier lab employees as speakers or featured guests).
To be clear, I think it’s reasonable to be frustrated with Lightcone if you think we legitimize people who you think will misuse that legitimacy, but IMO refusing to attend any event where an organizer makes that kind of choice seems very intense to me (though of course, if someone already considered attending an event to be of marginal value, such a thing could push them over the edge, though I think that would produce a different top-level comment).
I’m also not really sure what you’re hinting at with “I hope you also advocate for it when it’s harder to defend.” I assume something about what I think about working at AI labs? I feel like my position on that was fairly clear in my previous comment.
It’s mostly an expression of hope. For example, I hope it’s a genuine commitment that would result in you saying so, even if you ended up in the unfortunate position of updating negatively on Anthropic, or of being friends and allies with lots of people at other organizations you had updated negatively on.
As a reason for this being hope instead of confidence: I do not remember you (or almost anyone else in your position) calling for people to leave their positions at OpenAI when it became clearer that the organization was likely harming the world, though maybe I just missed it. I don’t intend this as some confident “gotcha”, just as a hint that people often like to do moral grandstanding in this domain without deep commitments actually backing it.
To be clear, I wasn’t intending to drag the whole topic into this conversation; I was trying to give a low-key and indirect expression of the skepticism with which I view some of the things you say here. I don’t particularly want to put you on the spot to justify your whole position, but it would have felt remiss not to give any hint of how I relate to it. So feel free not to respond, as I am sure we will find better contexts in which to discuss these things.
I’m responding to John’s more general ethical stance here of “working with moral monsters”, not anything specific about Cremieux
For what it’s worth I interpreted it as being about Cremieux in particular based on the comment it was directly responding to; probably others also interpreted it that way
The “greater evil” may be worse, but the “more legible evil” is easier to coordinate against.