I plugged your middle paragraph into the provided AI because that’s its point. Here’s the response:
Currently, no technical or governance scheme can reliably guarantee that all private entities are prevented from developing or running AGI outside official oversight. Even strong international agreements or hardware controls can be circumvented by determined actors, especially as required compute drops with research progress. Without ubiquitous surveillance or global control over compute, models, and researchers, a determined group could realistically “go rogue,” meaning any system that depends on absolute prevention is vulnerable to secret efforts that might reach AGI/ASI first, potentially unleashing unaligned or unsafe systems beyond collective control.
Sounds kinda sycophantic, though. E.g., you wouldn't need global control over all three of compute, models, and researchers; any one of them would do.