This is a bit of a random-ass take, but I think I care more about Joe not taking equity than about you not taking equity, because Joe seems more likely to end up in a position where it's important that he legibly have as little COI as possible (this is maybe making up a bunch of stuff about Joe's future role in the world, but it's where my Joe headcanon is at).
From a pure signaling perspective (the "legibly" part of "legibly have as little COI as possible"), there's also a counter-consideration: if someone says there's danger and calls for prioritizing safety, that might be even more credible if it goes against their financial incentives.
I don't think this matters much for company-external comms. There, I think it's better to just be as legibly free of COIs as possible, because listeners struggle to tell what's actually in the company's best interests. (I might once have thought differently, but empirically "they just say that superintelligence might cause extinction because that's good for business" is a very common take.)
But for company-internal comms, I can imagine someone being more persuasive if they could say "look, I know this isn't good for your equity, and it's not good for mine either. We're in the same boat. But we've gotta do what's right."
Agreed. I do think the case for doing this for signaling reasons is stronger for Joe, and I think it's plausible he should have turned down equity for that reason. I just don't think it's clear that doing so would be particularly helpful on the object level for his epistemics, which is what I took the parent comment to be saying.