I assume this is how we make AGIs controllable, right? Give them tight constraints on network size, weights, execution depth, and short-term memory, and force them to optimize for short-term exploitation.
This rules out most hostile plans, because something like “take over the planet and then make more paperclips” involves a stretch of time in which fewer paperclips are being made than under the immediate option of “order more wire and paperclip-bending robots”.
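To make the horizon argument concrete, here's a toy sketch (all names and numbers are made up for illustration, not a real training setup): a myopic agent that can only score the next few steps will always rank a plan with a near-term production dip below the plain “keep making paperclips” plan, even if the dip pays off hugely later.

```python
# Toy illustration (hypothetical numbers) of why a hard horizon cap can
# filter out long-term "takeover" plans: any plan that sacrifices
# near-term paperclip output loses a myopic comparison.

HORIZON = 5  # the agent may only score the next 5 steps

# Per-step paperclip output for two candidate plans.
exploit_now = [10, 10, 10, 10, 10, 10, 10, 10]      # order wire, keep bending paperclips
takeover_first = [0, 0, 0, 0, 0, 1000, 1000, 1000]  # seize resources first, mass-produce later

def myopic_score(plan, horizon=HORIZON):
    """Sum reward only over the truncated horizon; later steps are invisible."""
    return sum(plan[:horizon])

print(myopic_score(exploit_now))     # 50
print(myopic_score(takeover_first))  # 0 -> the hostile plan is never preferred
```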
Alas, the commands to open a cloud computing account and allocate a million VMs are very low-energy. AI can scale in ways that living things cannot.