I find it very interesting that they managed to beat GPT-3.5 with only 2 months of training! This makes me think xAI might become a major player in AGI development.
Did they do it using substantially less compute as well, or something? Because otherwise, I don’t see what’s so impressive about this.
Money
Isn’t that effectively the same thing as using substantially less compute?
I suppose being pithy backfired here. I meant that they may have spent lots of money and may have more to spend.
Right. Are you saying Grok may be impressive because of the sheer amount of resources being funnelled into it?