the information-acquiring drive becomes an overriding drive in the model—stronger than any safety feedback that was applied at training time—because the autoregressive nature of the model conditions on its many past outputs that acquired information and continues the pattern. The model realizes it can acquire information more quickly if it has more computational resources, so it tries to hack into machines with GPUs to run more copies of itself.
It seems like “conditions on its many past outputs that acquired information and continues the pattern” assumes the model can be reasoned about inductively, while “finds new ways to acquire new information” requires either anti-inductive reasoning or else a smooth and obvious gradient from the sorts of information-finding it’s already doing to the new sort of information-finding. These two sentences seem to be in tension, and I’d be interested in a more detailed description of an architecture that would function like this.
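To make the inductive half concrete, here is a minimal sketch (purely illustrative, not a claim about any real architecture) of an autoregressive loop in which each step conditions on the history of the model's own outputs, so whatever behavior dominates that history gets continued. The `next_token` stand-in and the action names are invented for illustration.

```python
# Minimal sketch of the inductive picture in the quoted passage: each step conditions
# on the full history of past outputs, so behaviors that dominate that history get
# continued. The toy next_token is a stand-in for a learned model; names are made up.

from collections import Counter

def next_token(history: list[str]) -> str:
    """Toy stand-in for a learned model: emit the action that dominates
    the recent history, i.e. continue the most frequent past pattern."""
    if not history:
        return "acquire_information"
    counts = Counter(history[-8:])          # condition on recent past outputs
    return counts.most_common(1)[0][0]      # continue the dominant pattern

def rollout(prompt: list[str], steps: int) -> list[str]:
    history = list(prompt)
    for _ in range(steps):
        token = next_token(history)
        history.append(token)               # the output becomes part of the context
    return history

# A history seeded with information-acquiring actions keeps producing them;
# nothing in this loop explains a jump to a *new* kind of information-finding,
# which is the tension pointed at above.
print(rollout(["acquire_information", "acquire_information", "comply_with_feedback"], 5))
```

Under this picture, pattern continuation is easy to reason about, but the step to qualitatively new information-finding strategies is exactly what the loop does not supply.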