The Time article is materially wrong about a bunch of stuff. For example, there is a large difference between incentives and duties: all board members have the same duties, but LTBT appointees are likely to have a very different equity stake from whoever holds the CEO board seat.
I really don’t want to get into pedantic details, but there’s no “supposed to” time for LTBT board appointments; I think you’re counting from the first day they were legally able to appoint someone. Also, https://www.anthropic.com/company lists five board members out of five seats, and four Trustees out of a maximum five. IMO it’s fine to take a few months to make sure you’ve found the right person!
More broadly, the corporate governance discussions (not just about Anthropic) I see on LessWrong and in the EA community are very deeply frustrating, because almost nobody seems to understand how these structures normally function, why they’re designed that way, or the failure modes that occur in practice. Personally, I spent about a decade serving on nonprofit boards and on oversight committees which appointed nonprofit boards, and I set up the governance for a for-profit company I founded.
I know we love first-principles thinking around here, but this is a domain with an enormous depth of practice, crystallized from the long experience of (often) very smart people in sometimes-adversarial situations.

In any case, I think I’m done with this thread.
> The Time article is materially wrong about a bunch of stuff
Agreed, which is why I noted this in my comment.[1] I think it’s a bad sign that Anthropic seemingly actively sought out an article that ended up being wrong/misleading in a way that was convenient for Anthropic at the time, and then didn’t correct it.
> I really don’t want to get into pedantic details, but there’s no “supposed to” time for LTBT board appointments; I think you’re counting from the first day they were legally able to appoint someone. Also, https://www.anthropic.com/company lists five board members out of five seats, and four Trustees out of a maximum five. IMO it’s fine to take a few months to make sure you’ve found the right person!
First, I agree that there isn’t a “supposed to” time; my wording here was sloppy, sorry about that.
My understanding is that there was a long delay (much longer than a few months) between the LTBT being able to appoint a board member and its actually appointing one, and a long period during which the LTBT had only three members. I think a delay that long is somewhat concerning.
My understanding is that the LTBT could still appoint one more board seat (so that it would determine a majority of the board), or maybe two additional seats, and that it has been able to do this for almost a year at this point. Maybe the LTBT thinks the current board composition is good, such that appointments aren’t needed, but the lack of any external AI safety expertise on the board or the LTBT concerns me.
> More broadly, the corporate governance discussions (not just about Anthropic) I see on LessWrong and in the EA community are very deeply frustrating, because almost nobody seems to understand how these structures normally function, why they’re designed that way, or the failure modes that occur in practice. Personally, I spent about a decade serving on nonprofit boards and on oversight committees which appointed nonprofit boards, and I set up the governance for a for-profit company I founded.
I certainly don’t have particular expertise in corporate governance, and I’d be interested in whether corporate governance experts who are unconflicted and very familiar with the AI situation think that the LTBT has the de facto power needed to govern the company through transformative AI. (And whether the public evidence should make me much less concerned about the LTBT than I would be about the OpenAI board.)
My view is that the normal functioning of a structure like the LTBT or a board would be dramatically insufficient for governing transformative AI (in practice, boards normally have a much weaker function than the ostensible purposes of the LTBT and the Anthropic board would suggest), so I’m not very satisfied by “the LTBT is behaving how a body of this sort would/should normally behave”.
I said something weaker: “For what it’s worth, I think the implication of the article is wrong and the LTBT actually has very strong de jure power”, because I didn’t see anything that is literally false as stated, as opposed to merely misleading. But you’d know better.
I honestly haven’t thought especially deeply about the LTBT, and this is zero percent a claim about the LTBT, but as someone who has written a decent number of PowerPoint decks that went to boards, and who used to be a management consultant and corporate strategy team member, I would generally be dissatisfied with the claim that a board’s most relevant metric is how many seats it currently has filled (so long as it has enough filled to meet quorum).
As just one example, it is genuinely way easier than you think for a board to have a giant binder full of “people we can emergency-appoint to the board, if we really gotta” and to be choosing not to exercise that binder because, conditional on no emergency, they genuinely and correctly prefer waiting for someone to be appointed to the board who has an annoying conflict that they’re in the process of resolving (e.g., selling off shares in a competitor, or waiting out a post-government-employment “quiet period”, or similar).
> My view is that the normal functioning of a structure like the LTBT or a board would be dramatically insufficient for governing transformative AI (in practice, boards normally have a much weaker function than the ostensible purposes of the LTBT and the Anthropic board would suggest), so I’m not very satisfied by “the LTBT is behaving how a body of this sort would/should normally behave”.
I basically completely agree. For a related intuition pump: I have very little confidence that auditing AI capabilities companies will meaningfully assist in governing transformative AI.
> I think it’s a bad sign that Anthropic seemingly actively sought out an article that ended up being wrong/misleading in a way that was convenient for Anthropic at the time, and then didn’t correct it.
Yep; misleading the public about this doesn’t exactly boost confidence in how much Anthropic would prioritize integrity/commitments/etc. when their interests are on the line.