My understanding is that when the post was written, Anthropic already had the first Claude, so the knowledge was available to the community.
A month after this post was retracted, ChatGPT was released.
Plausibly, “the EA community” would’ve been in a better place if it had started to publicly and privately spend its chips on AI x-risk advocacy and on talking about short timelines.
Looking back at the parameters of the bet, it’s interesting to me that the benchmark and math components have all fallen, but the two “real world” components are still standing.
Three years later, I think the post was right, and the pushback was wrong.
People who disagreed with this post lost their bets.
I agree that the update was correct. But you didn’t state a concrete action to take?