Hmm good question. The OpenAI GPT-4 case is complicated in my mind. It kind of looks to me like their approach was:
Move really fast to develop a next-gen model
Take some months to study, test and tweak the model before releasing it
Since it combines moving fast with moving slow, I’m confused about whether it constitutes a deliberate slowdown. I’m curious about your and other people’s takes.