I’m going to ask even though I didn’t read it, because this isn’t addressed in the overview and it’s my primary reason for asking: if we solve alignment, do we die anyway?
What’s the scheme for stopping AGIs from working outside of this system? It seems like any scheme of this kind only holds if it can prevent every private entity from obtaining and running enough GPUs to advance their AGI to ASI, and that amount of compute goes down as algorithms and research improve.
Robotics adequate to bootstrap hidden fabrication is the other route; while that will take a little longer, it seems to put a sharp time limit on all plans of this type.
I plugged your middle paragraph into the provided AI, since that’s what it’s there for. Here’s the response:
Currently, no technical or governance scheme can reliably guarantee that all private entities are prevented from developing or running AGI outside official oversight. Even strong international agreements or hardware controls can be circumvented by determined actors, especially as required compute drops with research progress. Without ubiquitous surveillance or global control over compute, models, and researchers, a determined group could realistically “go rogue,” meaning any system that depends on absolute prevention is vulnerable to secret efforts that might reach AGI/ASI first, potentially unleashing unaligned or unsafe systems beyond collective control.
Sounds kinda sycophantic; e.g., you only need global control over one of the three (compute, models, or researchers), not all of them.