[Edit: this post was based on a factual error. Reid Hoffman is not on the Anthropic board. Reed Hastings is. Thank you to Neel for correcting my mistake!]
What are the dynamics of Anthropic board meetings like, given that some of the board seem to not really understand or believe in Superintelligence?
Reid Hoffman is on the board. He’s the poster child for “AI doesn’t replace humans, it’s a tool that empowers humans”. Like, he wrote two(!) whole books about it (titles: Impromptu: Amplifying Our Humanity Through AI and Superagency: Empowering Humanity in the Age of AI).
For instance, here:
In this early period, many companies haven’t yet figured out how to integrate new engineers into AI-native workflows.
But I still believe there will be essentially unlimited demand for people who think computationally.
and
If you’re entering the workforce today, you have a unique advantage: you can grow up working with copilots, understanding the leverage they give you as an employee, and help your companies figure out how to integrate AI into their work.
It sure doesn’t sound like he’s living in a mental world where there will be AIs better than almost everyone at almost all tasks by 2030!
He seems to be expressing broadly similar talking points about AI amplifying human work as recently as three weeks ago.[1]
I imagine Dario coming into the board meetings and saying “Alright guys, I expect AI that is better than almost all humans at almost all tasks, possibly by 2027 and almost certainly no later than 2030. Our mainline projection is that Anthropic will have a country of geniuses in a datacenter within 5 years.”
What is going on here?
Does Reid internally translate that to “we’re building awesome software tools that will empower people, not replace them”?
Does he think Dario is exaggerating for effect?
Does he think that Dario is just factually wrong about projections that are extremely central to Anthropic’s business, but they haven’t bothered to clarify (or at least haven’t succeeded at clarifying) that disagreement?
Does Dario not say these things to his board, but only in essays and interviews that he publishes to the whole world?!
Is Reid posturing about what he believes?
I don’t have a hypothesis that explains these observations and doesn’t seem bizarre. My best bad guess is that Reid is basically filtering out anything that doesn’t match his existing impressions about AI, despite being an early investor in OpenAI and being on the board of Anthropic!
I didn’t listen to this whole interview, though, so I might be misrepresenting him.
Reid Hoffman is not on the Anthropic board. You’re likely confusing him with Reed Hastings.
https://www.anthropic.com/company
(This isn’t fully up to date, but I’m not aware of Reid Hoffman being on the board at any point)
Reid Hoffman used to be on the OpenAI board, which might be another contributor to the name collision here.
Ah!
That does resolve my confusion!
Thank you!
My first guess is that Amodei simply treats the board meetings like those of a relatively standard for-profit company: talks about revenue, growth, new features, new deals, etc.
Maybe, but that doesn’t feel like it resolves my confusion!
Reid is interested in the future of AI. Presumably he’s had conversations with Dario about it?
My guess is that it’s a relatively common occurrence for founders/CEOs to believe that their product is going to do wondrous things and take over the world, and that investors mostly see this as a positive.
Like, I don’t think VCs are especially trying to be intellectuals, and don’t mind much if people around them seem to believe inconsistent or incoherent things. I expect many founders around him believe many crazy things and he doesn’t argue with them about it.
Edit: Seems I was explaining something that wasn’t true! Points awarded to Eli’s model that was confused.
Yes! But also, points deducted from Eli’s epistemic practice: he was confused, said explicitly that none of the hypotheses describing the observation seemed non-bizarre, and then didn’t have the thought “maybe I’m mistaken about the data.”