Right now, AI capability advances are driven by compute scaling, human ML research, and ML experiments. Transparency and direct modification of models currently offer poor returns for advancing AI capabilities. What reasons are there to think transparency and direct modification would yield better returns in the future?