Cap Model Size for AI Safety

There are diminishing marginal returns to intelligence: an AI with an IQ of 150 could perform almost all human tasks flawlessly. The one exception may be conducting scientific research.

So why don’t we lobby for capping model size at, perhaps, a couple hundred billion parameters? This cap could be strictly enforced if it were encoded into the deep learning software stack (e.g. PyTorch, or NVIDIA’s CUDA libraries).
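As a rough illustration of what a framework-level check might look like, here is a minimal sketch in PyTorch. The cap value and the check itself are hypothetical; PyTorch has no such mechanism today, and a real enforcement hook would need to live inside the framework rather than in user code.

```python
import torch.nn as nn

# Hypothetical cap: a couple hundred billion parameters.
PARAMETER_CAP = 200_000_000_000

def check_parameter_cap(model: nn.Module) -> None:
    """Refuse to proceed if the model exceeds the (hypothetical) cap."""
    total = sum(p.numel() for p in model.parameters())
    if total > PARAMETER_CAP:
        raise RuntimeError(
            f"Model has {total:,} parameters, exceeding the cap of "
            f"{PARAMETER_CAP:,}."
        )

# Example: a small model passes the check without error.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
check_parameter_cap(model)  # ~8.4M parameters, well under the cap
```

In practice, a check like this would be called when a model is constructed or when training begins, so that oversized models could not be instantiated at all.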

I think this may be the most tractable approach to AI safety.