Ah that’s very interesting, thanks!
If models can still make sense of marginally-corrupted base64 (even with frame shift), that does seem like a point in favour of comparing the two encodings, rather than penalising corrupted base64 attempts with scores of zero. Thanks for pointing this out!
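For anyone wanting to see the "frame shift" concretely: here's a rough Python sketch (toy message, single deleted character — not from the original example) showing that dropping one base64 character garbles everything downstream of the deletion, while the bytes before it survive intact. That surviving prefix plus known-word context is roughly what a model would have to work with.

```python
import base64

# Toy secret message (36 bytes, a multiple of 3, so the encoding has no padding)
msg = b"meet me at the old harbor at nine pm"
enc = base64.b64encode(msg).decode()

# Simulate corruption: drop one character early in the encoding.
# Every 4-character group after the deletion is now frame-shifted.
corrupted = enc[:5] + enc[6:]

# Re-pad to a multiple of 4 so the decoder accepts it at all
padded = corrupted + "=" * (-len(corrupted) % 4)
garbled = base64.b64decode(padded)

# The first full 4-char group (chars 0-3) was untouched, so the first
# 3 decoded bytes match the original; the rest is frame-shifted noise.
print(garbled[:3])  # matches msg[:3]
print(garbled)      # mostly unrecognisable after the deletion point
```

The takeaway matches the discussion above: the decoder mechanically recovers nothing useful past the deletion, so any "decoding" of the shifted region has to come from the model's prior over likely plaintexts — which works for natural language but not for random strings like keys.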
One limitation might still be the kind of data that can be extracted from corrupted base64: for example, models won’t have an internal reference with which to compare long strings of random characters (e.g. encryption keys), so these might still be impossible to decode if corrupted. It could be that Sonnet 4.5 could only make some sense of the bad encoding in your example because it’s comparing to known words (which it has representations for internally). Even if that’s true, lossily encoding natural language in a way that’s somewhat decodable by other models is still a useful capability for a schemer!
I’m definitely also interested in testing reasoning in base64. As you say, this could be another useful black-box method for investigating how models handle base64 internally. (I found a post on this just now showing that GPT-4o’s addition accuracy falls pretty drastically when it has to reason in base64!)