A Feb 2024 paper predicts large compute multipliers from using finer-grained experts in MoE models, with an optimum of about 64 experts activated per token at 1e24-1e25 FLOPs, whereas MoE models with publicly known architectures usually activate only 2 experts per token. DeepSeek-V3 activates 8 routed experts per token, a step in that direction.
On the other hand, something like this should already have been tested at the leading labs, so the chances that it's a new idea to them seem slim. Runners-up like xAI and Meta might find it more useful, if fine-grained experts are indeed the reason for DeepSeek-V3's performance, rather than unusually well-executed post-training or even pretraining dataset construction.
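To make the "finer-grained experts" idea concrete, here is a minimal sketch of a top-k-routed MoE layer where granularity is varied while the total and per-token-active parameter counts are held roughly constant. This is an illustrative toy, not the architecture from the paper or from DeepSeek-V3; all dimensions and names are made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    """Toy top-k MoE layer: num_experts small FFN experts, k of them active per token.
    Finer granularity = more, smaller experts with a larger k, keeping both total
    parameters (num_experts * expert_dim) and active parameters (k * expert_dim) fixed."""
    def __init__(self, d_model, num_experts, k, expert_dim):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.w_in = nn.Parameter(torch.randn(num_experts, d_model, expert_dim) * d_model ** -0.5)
        self.w_out = nn.Parameter(torch.randn(num_experts, expert_dim, d_model) * expert_dim ** -0.5)

    def forward(self, x):  # x: [tokens, d_model]
        scores = self.router(x)                       # [tokens, num_experts]
        weights, idx = scores.topk(self.k, dim=-1)    # choose k experts per token
        weights = F.softmax(weights, dim=-1)          # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):                    # naive gather, no capacity/load balancing
            e = idx[:, slot]                          # expert id per token for this slot
            h = F.silu(torch.einsum('td,tdh->th', x, self.w_in[e]))
            out += weights[:, slot:slot + 1] * torch.einsum('th,thd->td', h, self.w_out[e])
        return out

# Same total and active parameter budget, different granularity (illustrative numbers):
coarse = ToyMoE(d_model=1024, num_experts=16,  k=2,  expert_dim=4096)   # 2 active experts
fine   = ToyMoE(d_model=1024, num_experts=128, k=16, expert_dim=512)    # 16 active experts
```

The point of the comparison is only that the routing cost and parameter budget can stay fixed while the number of activated experts per token grows; the compute-multiplier claim in the paper is about where the optimum of that trade-off sits at a given FLOP budget.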