I certainly agree that the LTBT has de jure control (or as you say “formal control”).
By “strong control” I meant something more precise, like: “lots of influence in practice; e.g., the influence of Anthropic leadership is comparable to the influence that the LTBT itself is exerting in practice over appointments, or comparable (though probably less) to the influence that (e.g.) Sam Altman has had over recent board appointments at OpenAI”. Perhaps “it seems like they have a bunch of control” would have been a more accurate way to put things.
I think it would be totally unsurprising for the LTBT to have de jure power but not that much de facto power (given the influence of Anthropic leadership) and from the outside it sure looks like this is the case at the moment.
See this Time article, which was presumably explicitly sought out by Anthropic to reassure investors in the aftermath of the OpenAI board crisis, in which Brian Israel (at the time general counsel at Anthropic) is paraphrased as repeatedly saying (to investors) “what happened at OpenAI can’t happen to us”. The article (again, likely explicitly sought out by Anthropic as far as I can tell) also says “it also means that the LTBT ultimately has a limited influence on the company: while it will eventually have the power to select and remove a majority of board members, those members will in practice face similar incentives to the rest of the board.” For what it’s worth, I think the implication of the article is wrong and the LTBT actually has very strong de jure power (optimistically, the journalist misinterpreted Brian Israel and wrote a misleading article), but it sure seems like Anthropic leadership wanted to create the impression that the power of the LTBT is limited to reassure shareholders (which does actually weaken the LTBT: the power of institutions is partially based on perception, see e.g. the OpenAI board).
I don’t find the LTBT’s board appointments reassuring; these hires seem unlikely to result in serious oversight of the company, due to insufficient expertise and to their not being dedicated full-time board members. I also don’t find it reassuring that these hires were made far after when they were supposed to be made and that the LTBT hasn’t filled its empty seats. (At least based on public information.)
All these concerns wouldn’t be a big deal if this were a normal company rather than a company aiming to build AGI: probably the single largest danger to humanity, as well as the most dangerous and important technology ever (as I expect people at Anthropic would agree).
(See also my discussion of the LTBT in this comment, though I think I say strictly more here.)
I could imagine the LTBT stepping up to take on a more serious oversight role and it seems plausible this will happen in the future, but as it stands public evidence makes it look like the de facto power being exerted by the LTBT is very limited. It’s hard for me to have much confidence either way with my limited knowledge.
To be clear, my view is that this situation is substantially the fault of current LTBT members (who in my view should probably think of governing Anthropic as their top priority and do this full time).
Here are some (possibly costly) actions that Anthropic or the LTBT could take which would (partially) reassure me:
Designate a member of the alignment science team to report periodically to the LTBT directly. Ideally, this person would be employed by the LTBT rather than by Anthropic (so that Anthropic can’t fire them) and wouldn’t have equity, so that they are less financially conflicted. It should be public who this is. This could be someone on the alignment stress testing sub-team. I think DMZ holds the position at Anthropic that is most naturally suited to this. I have other specific candidates in mind and could share privately on request.
The LTBT generally acquires more full-time staff who are independent from the company.
Dario states internally, in a clear way, that he wouldn’t aggressively maneuver against the board (or the LTBT) if they were trying to remove him or otherwise do something he disagreed with, and that Anthropic employees shouldn’t join in efforts to undermine the (de jure) power of the board if this happened. This wouldn’t be verifiable externally (unless Dario said this publicly), but I do think it meaningfully ties Dario’s hands (because a major source of Dario’s power is strong employee loyalty). For all I know, Dario could have already done this, but on priors I’m skeptical that it has happened.
Some member(s) of the LTBT make being an LTBT member their full-time job and spend a bunch of time talking to employees, external experts, etc. This would ideally be a new LTBT member who has domain expertise. Short of full time, substantially more time would also help.
The hiring process for new board members by the LTBT is changed to enforce strong separation between the LTBT and the existing board/leadership, by not providing any information to the existing board or Anthropic leadership until the hire is decided. This seems very costly and I’m not sure I’d recommend it, but it would address my particular concerns. I think it could be a good choice if the LTBT had independent staff and full-time LTBT members.
To be clear, my view is that Anthropic is currently overall the best governed/managed company trying to build AGI, but this is due to my views about Dario and other Anthropic executives (which are partially based on connections and private knowledge) rather than due to the LTBT. And I don’t think “best governed/managed AGI company” is a very high bar.
The Time article is materially wrong about a bunch of stuff—for example, there is a large difference between incentives and duties; all board members have the same duties but LTBT appointees are likely to have a very different equity stake to whoever is in the CEO board seat.
I really don’t want to get into pedantic details, but there’s no “supposed to” time for LTBT board appointments, I think you’re counting from the first day they were legally able to appoint someone. Also https://www.anthropic.com/company lists five board members out of five seats, and four Trustees out of a maximum five. IMO it’s fine to take a few months to make sure you’ve found the right person!
More broadly, the corporate governance discussions (not just about Anthropic) I see on LessWrong and in the EA community are very deeply frustrating, because almost nobody seems to understand how these structures normally function, why they’re designed that way, or the failure modes that occur in practice. Personally, I spent about a decade serving on nonprofit boards and on oversight committees which appointed nonprofit boards, and I set up the governance for a for-profit company I founded.
I know we love first-principles thinking around here, but this is a domain with an enormous depth of practice, crystallized from the long experience of (often) very smart people in sometimes-adversarial situations.
In any case, I think I’m done with this thread.
The Time article is materially wrong about a bunch of stuff
Agreed, which is why I noted this in my comment.[1] I think it’s a bad sign that Anthropic seemingly actively sought out an article that ended up being wrong/misleading in a way which was convenient for Anthropic at the time and then didn’t correct it.
I really don’t want to get into pedantic details, but there’s no “supposed to” time for LTBT board appointments, I think you’re counting from the first day they were legally able to appoint someone. Also https://www.anthropic.com/company lists five board members out of five seats, and four Trustees out of a maximum five. IMO it’s fine to take a few months to make sure you’ve found the right person!
First, I agree that there isn’t a “supposed to” time; my wording here was sloppy, sorry about that.
My understanding was that there was a long delay (e.g. much longer than a few months) between the LTBT being able to appoint a board member and actually appointing one, and a long period during which the LTBT had only 3 members. A delay of this length is somewhat concerning.
My understanding is that the LTBT could still fill one more board seat (so that it determines a majority of the board), or perhaps 2 additional seats, and that it has been able to do this for almost a year at this point. Maybe the LTBT thinks the current board composition is good, such that appointments aren’t needed, but the lack of any external AI safety expertise on the board or the LTBT concerns me...
More broadly, the corporate governance discussions (not just about Anthropic) I see on LessWrong and in the EA community are very deeply frustrating, because almost nobody seems to understand how these structures normally function or why they’re designed that way or the failure modes that occur in practise. Personally, I spent about a decade serving on nonprofit boards, oversight committes which appointed nonprofit boards, and set up the goverance for a for-profit company I founded.
I certainly don’t have particular expertise in corporate governance and I’d be interested in whether corporate governance experts who are unconflicted and very familiar with the AI situation think that the LTBT has the de facto power needed to govern the company through transformative AI. (And whether the public evidence should make me much less concerned about the LTBT than I would be about the OpenAI board.)
My view is that the normal functioning of a structure like the LTBT or a board would be dramatically insufficient for governing transformative AI (boards normally have a much weaker function in practice than the ostensible purposes of the LTBT and the Anthropic board), so I’m not very satisfied by “the LTBT is behaving how a body of this sort would/should normally behave”.
I said something weaker: “For what it’s worth, I think the implication of the article is wrong and the LTBT actually has very strong de jure power”, because I didn’t see anything which is literally false as stated, as opposed to merely misleading. But you’d know better.
I honestly haven’t thought especially in depth or meaningfully about the LTBT, and this is zero percent a claim about the LTBT, but as someone who has written a decent number of PowerPoint decks that went to boards and who used to be a management consultant and corporate strategy team member, I would generally be dissatisfied with the claim that a board’s most relevant metric is how many seats it currently has filled (so long as it has enough filled to meet quorum).
As just one example, it is genuinely way easier than you think for a board to have a giant binder full of “people we can emergency appoint to the board, if we really gotta” and be choosing not to exercise that binder because, conditional on no-emergency, they genuinely and correctly prefer waiting for someone being appointed to the board who has an annoying conflict that they’re in the process of resolving (e.g., selling off shares in a competitor or waiting out a post-government-employment “quiet period” or similar).
My view is that the normal functioning of a structure like the LTBT or a board would be dramatically insufficient for governing transformative AI (boards normally have a much weaker function in practice than the ostensible purposes of the LTBT and the Anthropic board), so I’m not very satisfied by “the LTBT is behaving how a body of this sort would/should normally behave”.
I basically completely agree. For a related intuition pump: I have very little confidence that auditing AI capabilities companies will meaningfully assist in governing transformative AI.
I think it’s a bad sign that Anthropic seemingly actively sought out an article that ended up being wrong/misleading in a way which was convenient for Anthropic at the time and then didn’t correct it.
Yep; misleading the public about this doesn’t exactly boost confidence in how much Anthropic would prioritize integrity/commitments/etc. when their interests are on the line.