Hmm, but when you use these models in the chat interface, you can literally open the reasoning tab and watch the reasoning being generated in real time. It feels like there isn't enough time for that reasoning to have been produced by a summarizer.