My guess would be that OpenAI and Anthropic both lowball their financial estimates for strategic reasons. Better for your already-very-ambitious targets to be exceeded repeatedly than to propose even one so-ambitious-you-sound-like-an-insane-cult target which you then fail to meet.
I think even if they hit some insane targets in the near term, the act of claiming explosive growth in a legible (and legally serious) growth estimate might be shocking to a lot of third parties, and have some wider memetic ripple effects. While it feels like the public has become “situationally aware” at a rapid pace in the last year, most people have not grappled deeply with the implications of possible transformative AI within the next few years.