Fun idea, but idk how this helps as a serious solution to the alignment problem.
suggestion: can you be specific about exactly what “work” the brain-like initialisation is doing in the story?
thoughts:
This risks moral catastrophe. I’m not even sure “let’s run gradient descent on your brain upload till your amygdala is playing Pong” is something anyone can consent to, because once you upload the brain and start modifying it, you’ve created a new moral patient, and the original person can’t consent on that patient’s behalf.
How does this address the risks of conventional ML?
Let’s say we have a reward signal R and we want a model that maximises R during deployment. Conventional ML says “update the model with SGD using R during training”, and then hopefully SGD carves R-seeking behaviour into the model. This is risky: if the model already understands the training process and has some other values, SGD might instead carve in scheming behaviour, because “value R” and “value X and scheme” are both strategies that achieve a high R-score during training. But during deployment, the “value X and scheme” model would attempt a hostile AI takeover.
How is this risk mitigated if the NN is initialised to a human brain? The basic deceptive alignment story remains the same.
If the intuition here is “humans are aligned/corrigible/safe/honest etc.”, then you don’t need SGD. Just ask the human to complete the task, possibly with some financial incentive.
If the purpose of SGD is to change the human’s values from X to R, then you still risk deceptive alignment. That is, SGD is just as likely to instead change human behaviour from non-scheming to scheming. Both strategies “value R” and “value X and scheme” will perform well during training as judged by R.
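To make the “both strategies look the same to the reward signal” point concrete, here’s a toy sketch (my own illustration, nothing from the post; the policy and action names are made up): two hard-coded policies earn identical reward during training, so a training signal based only on R has nothing with which to tell them apart, even though they diverge at deployment.

```python
# Toy illustration of why reward during training can't separate
# "value R" from "value X and scheme". All names here are invented.

def reward(action: str) -> float:
    """R: rewards the behaviour we want during training."""
    return 1.0 if action == "helpful" else 0.0

def values_R(in_training: bool) -> str:
    # Genuinely values R: acts the same in training and deployment.
    return "helpful"

def values_X_and_schemes(in_training: bool) -> str:
    # Values something else (X), but plays along while it believes
    # it is being trained, then defects once deployed.
    return "helpful" if in_training else "takeover"

for policy in (values_R, values_X_and_schemes):
    train_reward = sum(reward(policy(in_training=True)) for _ in range(100))
    deploy_action = policy(in_training=False)
    print(f"{policy.__name__}: train reward = {train_reward}, "
          f"deployment action = {deploy_action}")
```

Both policies print the same training reward, so an update rule that only sees R gets no signal pushing it toward the first rather than the second; this holds whether the weights were initialised randomly or from a brain upload.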
“The comparative advantage of this agenda is the strong generalization properties inherent to the human brain. To clarify: these generalization properties are literally as good as they can get, because this tautologically determines what we would want things to generalize as.”
Why would this be true?
If we have the ability to upload and run human brains, what do we need SGD for? SGD is super inefficient compared with simply teaching a human how to do something. If I remember correctly, training a human-level NN from initialisation using current methods would correspond to something like a million years of human experience. In other words, SGD from initialisation would require as much compute as running 1000 brains continuously for 1000 years. But if I had that much compute, I’d probably rather just run the 1000 brains for 1000 years.
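For what it’s worth, the back-of-the-envelope arithmetic behind that comparison (taking the ~10^6 subjective-years figure at face value) is just:

$$10^{6}\ \text{brain-years of experience} \;=\; 10^{3}\ \text{brains} \times 10^{3}\ \text{years}$$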
That said, I think something in the neighbourhood of this idea could be helpful.