Altman doesn’t own equity in OpenAI and he’s doing it for the glory. He genuinely believes he might give birth to the AI god. Why should he do anything different, from his vantage point?
Because the AI god might arrive in 2033 or so, after OpenAI has mostly or fully gone out of business (on the more bubble-pilled takes), in which case only the more conservative path of Anthropic or the more balance-sheet-backstopped path of GDM lets an AI company still be competing at that point.
So it’s insufficient to only care about the AI god; it’s also necessary to expect it on a specific timeline. Or, as I argue in a sibling comment, this behavior plausibly isn’t actually as risky (for the business) as it looks.
Of course being right is better than being wrong. Ideally he would know the exact date of the arrival of the Superintelligence and organize the finances around that.
But it seems to me that he has the best shot at creating the AI god with his current process.