The risk is that anyone with finetuning access to the AI could induce intuitive confidence that a proof was correct. This includes people who have finetuning access but don’t know the honesty password.
Accordingly, even if the model feels like it has proven that a purported honesty password would produce the honesty hash (the check sketched below), maybe it can only conclude “either I’m being evaluated by someone with the real honesty password, or I’m being evaluated by someone with finetuning access to my weights, who’s messing with me”.
“People who have finetuning access” could include some random AI company employees who want to mess with the model (against the wishes of the AI company).
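For concreteness, here’s a minimal sketch of the hash check at issue. Everything here is illustrative and assumed, not part of the original proposal: the names (`HONESTY_HASH`, `check_honesty_password`) and the choice of SHA-256 are stand-ins. The contrast it’s meant to highlight: an external program running this check is trustworthy, whereas a model performing the analogous check internally can have its sense of “yes, the hashes matched” corrupted by finetuning.

```python
import hashlib
import hmac

# Hypothetical published commitment: the developer releases only the hash
# and keeps the honesty password secret. Here we derive it in-file purely
# so the sketch is self-contained.
HONESTY_HASH = hashlib.sha256(b"the-real-honesty-password").hexdigest()

def check_honesty_password(purported_password: str) -> bool:
    """Return True iff the purported password hashes to the published commitment.

    Run as ordinary code, this check is sound. The worry in the thread is
    about a model doing this verification "in its head": finetuning can
    induce the felt conclusion that the check passed, so a passing check no
    longer distinguishes the real password-holder from someone with
    finetuning access to the weights.
    """
    digest = hashlib.sha256(purported_password.encode()).hexdigest()
    # Constant-time comparison, to avoid leaking information via timing.
    return hmac.compare_digest(digest, HONESTY_HASH)
```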
Fine-tuning-induced confidence is a concerning possibility that I hadn’t thought of. I’m not sure how scared to be of it. Thanks!