Sure, but humanity currently has so little ability to measure or mitigate AI risk that I doubt it will be obvious in any given case that the survival of the human race is at stake, or that any given action would help. And I think even honorable humans tend to be vulnerable to rationalization amidst such ambiguity, which (as I model it) is why society generally prefers that people in positions of substantial power not have extreme conflicts of interest.
In a previous discussion about this, one argument raised was "having all your friends and colleagues believe in a thing is probably more epistemically compromising than the equity."
Which seems maybe true. But I update toward "you shouldn't take equity, and, also, you should have some explicit plan for dealing with the bias of 'the people I spend the most time with all believe this.'"
(This also applies to AI pessimists, to be clear, but I think it's reasonable to hold people extra accountable about it when they're working at a company whose product has double-digit odds of destroying the world.)
Yeah, certainly there are other possible forms of bias besides financial conflicts of interest; as you say, I think it’s worth trying to avoid those too.
Feels like something has gone wrong well before this point if one cares more about money than the survival of the human race.
If a man's judgement is really swayable by equity, one can't help but wonder whether he is the right man for the job in the first place.