I don’t understand why you would want to spend any effort proving that transformers could scale to AGI.
The point would be to try to create common knowledge that they can. Otherwise, for any “we decided not to do X”, someone else will try doing X, and the problem remains.
Humanity is already taking a shotgun approach to unaligned AGI. Shotgunning safety is viable and important, but I think it’s more urgent to prevent the first blast from hitting an artery. Demonstrating AGI viability, in this analogy, is like shotgunning a pig in the town square to prove to everyone that the guns we are building can in fact kill.
We want safety to have as many pellets in flight as possible. But we want unaligned AGI to have as few pellets in flight as possible. (Preferably none.)