Today we announced that Reed Hastings, Chairman and co-founder of Netflix who served as its CEO for over 25 years, has been appointed to Anthropic’s board of directors by our Long Term Benefit Trust. Hastings brings extensive experience from founding and scaling Netflix into a global entertainment powerhouse, along with his service on the boards of Facebook, Microsoft, and Bloomberg.
“The Long Term Benefit Trust appointed Reed because his impressive leadership experience, deep philanthropic work, and commitment to addressing AI’s societal challenges make him uniquely qualified to guide Anthropic at this critical juncture in AI development,” said Buddy Shah, Chair of Anthropic’s Long Term Benefit Trust. [...]
Hastings said: “Anthropic is very optimistic about the AI benefits for humanity, but is also very aware of the economic, social, and safety challenges. I’m joining Anthropic’s board because I believe in their approach to AI development, and to help humanity progress.”
Personally, I’m excited to add Reed’s depth of business and philanthropic experience to the board, and that more of the LTBT’s work is now public.
Is there anything public about his thoughts on AI risk? Concerningly, the announcement focuses on job displacement, which seems largely irrelevant to what I (and, I think, most other people who have thought hard about this) consider most important to supervise about Anthropic’s actions. Has he ever said or written anything about catastrophic or existential risk, or the risk of substantial human disempowerment?
What do you know of his level of understanding of AGI, existential risk, superintelligence, etc.? Choosing board members seems to be one of the main levers the LTBT has for influencing Anthropic, so it is extremely important that they choose neutral board members who will prioritise what matters in the long term, not just what seems safest in the short term. I generally assume anyone is bad at this unless I observe good evidence to the contrary, so by default this seems like a concerning choice. But I would love to hear counter-evidence.
The Wikipedia section about his donations resembles those of someone who doesn’t believe in AI existential risk.[1]
We can only hope he is a rational person and learns about it quickly.
The Anthropic announcement mentions that he “recently made a $50 million gift to Bowdoin College to establish a research initiative on AI and Humanity,” but that isn’t focused on AI safety (let alone AI existential risk). Instead, the college vaguely says “We are thrilled and so grateful to receive this remarkable support from Reed, who shares our conviction that the AI revolution makes the liberal arts and a Bowdoin education more essential to society.”
Does anyone understand the real motivation here? Who at Anthropic makes the call to appoint a random CEO who (presumably) doesn’t care about x-risk, and what do they get out of it?
I’d guess it looks more stable to investors. Contrast that with having a bunch of EAs on the board who, as at OpenAI, confusingly try to fire the CEO for as unimportant a crime as lying; that’s quite hard for investors to predict.
Sincere request that Anthropic work with Reed to bring Person of Interest back to Netflix and give it the Suits promotional push this summer. (I am not kidding. I mean this, actually.)