Are there any serious efforts to fix these quick hacks?
Sure. I listed a bunch of improvements right there. They just always tend to be relatively small on economically important use cases, compared to other things like RLHF. (Sadly, no matter how much I whine about ChatGPT’s poetry being super basic and bland because of BPEs + mode collapse, better poetry wouldn’t make OA much money compared to working harder on RL tuning to prioritize Q&A or reasoning, or investing in more GPUs to serve more users.)
Ah that makes sense, so the fixes are ready but waiting for a compelling argument?
I think so. If someone could show that BPEs were changing the scaling laws on an important task end users will pay for, it wouldn’t be hard to change: for example, I noted that Codex induced OA to change its BPEs, because BPEs optimized for programming-language syntax substantially increase the effective context window, which matters to big paying customers like GitHub (the larger the ctx, the more of the variables & definitions inside a specific project are available for relevant completion). Otherwise, the general attitude seems to be to shrug and assume it’ll fix itself at some point: GPT-4 or GPT-5 or god knows what system will adopt some new tokenization, or byte-level encoding, or a fancy new attention/history mechanism with near-unlimited ctx, motivated by other concerns, and then the problems go away and become a minor historical footnote. And they are probably right to, as much as it annoys me to see the bad poetry, or to see people running into blatantly BPE-caused problems and declare ‘deep learning has hit a wall!’...
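To make the Codex point concrete: a code-tuned BPE vocabulary mostly works by adding entries that compress code-specific patterns, most famously runs of indentation whitespace. Here is a toy sketch of that effect (this is not OpenAI’s actual tokenizer or vocabulary; the greedy longest-match tokenizer and both vocabularies below are invented purely for illustration):

```python
def tokenize(text, vocab):
    """Greedy longest-prefix-match tokenization against a fixed vocabulary.
    Single characters are always available as a fallback, so tokenization
    never fails and is lossless (joining the tokens reproduces the text)."""
    tokens, i = [], 0
    max_len = max(len(t) for t in vocab)
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if length == 1 or piece in vocab:
                tokens.append(piece)
                i += length
                break
    return tokens

# A base vocab with no multi-space tokens, and a "code" vocab that adds
# tokens for 4- and 8-space indentation runs (hypothetical entries,
# standing in for the kind of merges a code-optimized BPE learns).
base_vocab = {"def", "return", " x"}
code_vocab = base_vocab | {"    ", "        "}

snippet = "def f(x):\n        return x\n"
print(len(tokenize(snippet, base_vocab)))  # indentation costs 1 token per space
print(len(tokenize(snippet, code_vocab)))  # the 8-space run collapses to 1 token
```

The same text costs fewer tokens under the code vocabulary, which is exactly why a code-optimized BPE stretches the effective context window: for a fixed token budget, more of a project’s source fits into the prompt.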