If you’re able to shift the crossover just by resampling more, then yeah, that suggests the slight inversion is a minor artifact. Maybe the hyperparameters are slightly better tuned at the start for MLPs than for CNNs, or you don’t have enough regularization for the MLPs to keep them near the CNNs as you scale, which exaggerates the difference (adding regularization is often a key ingredient in MLP papers). Something boring like that...
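One way to check this kind of thing concretely: bootstrap over per-scale runs and see how much the apparent crossover point moves. Below is a minimal sketch of that idea, with entirely synthetic losses standing in for real MLP/CNN sweep results (the power-law coefficients, run counts, and noise level are all made-up assumptions, not anything from the actual experiment):

```python
# Hedged sketch: bootstrap the runs at each scale and look at how much the
# MLP/CNN crossover point moves. All numbers here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
scales = np.array([1, 2, 4, 8, 16])  # hypothetical model-size multipliers

# Synthetic validation losses (30 runs per architecture): the CNN has a
# steeper scaling exponent, the MLP starts slightly ahead, so they cross.
mlp_runs = 2.0 * scales**-0.20 + rng.normal(0, 0.02, (30, len(scales)))
cnn_runs = 2.1 * scales**-0.25 + rng.normal(0, 0.02, (30, len(scales)))

def crossover(mlp_mean, cnn_mean, scales):
    """First scale at which the CNN's mean loss drops below the MLP's."""
    better = cnn_mean < mlp_mean
    return scales[np.argmax(better)] if better.any() else None

cross_points = []
for _ in range(1000):
    i = rng.integers(0, 30, 30)  # resample MLP runs with replacement
    j = rng.integers(0, 30, 30)  # resample CNN runs with replacement
    c = crossover(mlp_runs[i].mean(0), cnn_runs[j].mean(0), scales)
    if c is not None:
        cross_points.append(c)

# A wide bootstrap distribution of crossover points means the observed
# inversion is plausibly a resampling artifact rather than a real effect.
print(np.percentile(cross_points, [5, 50, 95]))
```

If the 5th–95th percentile band spans several scales, that’s the “shifts under resampling” signature; if it’s tight around one scale, the crossover is probably real.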