Speaking for myself as someone who works at Anthropic and holds equity: I think I just bite the bullet that this doesn’t affect my decisionmaking that much and the benefits of directing the resources from that equity to good ends are worth it.
(I did think somewhat seriously about finding a way to irrevocably commit all of my equity to donations, or to fully avoid taking possession of it, but mainly for the signaling benefits of there being an employee who was legibly not biased in this particular way in case that was useful when things got crazy; I don’t think it would have done much on the object level.)
Some reasons I think this is basically not a concern for me personally:
Deciding to pledge half my equity to 501(c)(3) charities felt like a pretty easy decision; I now think it’s possible this was a mistake because the value of political giving may outweigh the tax advantages and donation match, but I don’t really remember my personal wealth being a driving factor there. And effects on Anthropic-as-a-whole have a way higher ratio of altruistic value to personal wealth than that!
Of course having donation-pledged dollars riding on Anthropic’s success is still a source of bias, but my own equity changes that very little, because my donation preferences are extremely correlated with vastly larger pools of equity from other employees; I already had 99% as much of an altruistic incentive for Anthropic to succeed commercially, and I think most people reading this comment are in a similar boat.
Empirically when I advocate internally for things that would be commercially costly to Anthropic I don’t notice this weighing on my decisionmaking basically at all, like I’m not sure I’ve literally ever thought about it in that setting?
If I were well-modeled as an actor whose equity value steered their actions in significant ways, I think I would be putting much more effort into tax optimization than I do now.
The epistemic distortions from one’s social and professional environment seem vastly larger to me. This isn’t directly an argument that the equity thing isn’t useful on the margin, but it just seems like a weird area of intervention when there’s so much lower-hanging fruit. I think decisions like “live in Berkeley or SF” have easily an order of magnitude more impact on a person’s orientation to these questions.
Others might vary a lot in how they orient to such things, though; I don’t claim this is universal.
“Empirically when I advocate internally for things that would be commercially costly to Anthropic I don’t notice this weighing on my decisionmaking basically at all, like I’m not sure I’ve literally ever thought about it in that setting?”
With respect, one of the dangers of being a flawed human is that you aren’t aware of every factor that influences your decision making.
I’m not sure that a lack of consciously thinking about financial loss/gain is good empirical evidence that it isn’t affecting your choices.
Yep, I agree that’s a risk, and one that should seem fairly plausible to external readers. (This is why I included other bullet points besides that one.) I’m not sure I can offer anything over text that other readers will find convincing, but I do think I’m in a pretty epistemically justified state here even if I don’t think you should think that based on what you know of me.
And TBC, I’m not saying I’m unbiased! I think I am biased in a ton of ways—my social environment, possession of a stable high-status job, not wanting to say something accidentally wrong or hurt people’s feelings, inner-ring dynamics of being in the know about things, etc. are all ways I think my epistemics face pressure here—but I feel quite sure that “the value of my equity goes down if Anthropic is less commercially successful” contributes a tiny tiny fraction to that state of affairs. You’re well within your rights to not believe me, though.
This is a bit of a random-ass take, but, I think I care more about Joe not taking equity than you not taking equity, because I think Joe is more likely to be a person where it ends up important that he legibly have as little COI as possible (this is maybe making up a bunch of stuff about Joe’s future role in the world, but, it’s where my Joe headcanon is at).
From a pure signaling perspective (the “legibly” part of “legibly have as little COI as possible”), there’s also a counter-consideration: if someone says that there’s danger, and calls for prioritizing safety, that might be even more credible if it goes against their financial motivations.
I don’t think this matters much for company-external comms. There, I think it’s better to just be as legibly free of COIs as possible, because listeners struggle to tell what’s actually in the company’s best interests. (I might once have thought differently, but empirically “they just say that superintelligence might cause extinction because that’s good for business” is a very common take.)
But for company-internal comms, I can imagine that someone would be more persuasive if they could say “look, I know this isn’t good for your equity, it’s not good for mine either. we’re in the same boat. but we gotta do what’s right”.
Agreed—I do think the case for doing this for signaling reasons is stronger for Joe and I think it’s plausible he should have avoided this for that reason. I just don’t think it’s clear that it would be particularly helpful on the object level for his epistemics, which is what I took the parent comment to be saying.
Have you donated any of your equity yet? If not, why not?
I’ve made a legally binding pledge to allocate half of it to 501(c)(3) charities, the maximum that my employer’s donation match covers; I expect to donate the majority of the remainder but have had no opportunities to liquidate any of it yet.