Have you elaborated this argument? I tend to think a military project would be a lot more cautious than move-fast-and-break-things Silicon Valley businesses.
The argument that orgs with reputations to lose might start being careful once AI becomes actually dangerous, or even just autonomous enough to be alarming, is important if true. Most people seem to assume they'll simply forge ahead until they succeed and a misaligned AGI gets loose.
I've made an argument in System 2 Alignment that orgs will be careful to protect their reputations. I think this will be helpful for alignment, but not enough on its own.
Early government involvement might also reduce proliferation, which could be crucial.
It's complex. Whether governments will end up controlling AGI is an important and neglected question. Advancing this discussion seems valuable.