How did you determine the cost and speed of it, given that there is no unified model that we have access to, just some router between models? Unless I’m just misunderstanding something about what GPT-5 even is.
The router only exists in ChatGPT, not the API, I believe. And it switches between two models of the same size and cost: GPT-5 with thinking enabled and GPT-5 without.