Thanks for answering so many questions about this. I can see why it makes sense to filter on text from the evals. What’s the rationale for not also filtering on the canary string as a precaution? I realize there would be some false positives due to abuse, but is that common enough that it would have a significant inappropriate effect?
I think of the canary string as being useful because it communicates that some researcher has judged the document as likely to corrupt eval / benchmark results. Searching for specific text from evals doesn’t seem like a full substitute for that judgment.
To be clear, I’m not asking you to justify or defend the decision; I just would like to better understand GDM’s thinking here.
I think it’s a fair question. Filtering on the canary string is neither necessary nor sufficient for not training on evals, so it’s tempting to just ignore it. I would personally also filter out docs with the canary string, but I’m not sure why they aren’t.
Thanks for answering so many questions about this. I can see why it makes sense to filter on text from the evals. What’s the rationale for not also filtering on the canary string as a precaution? I realize there would be some false positives due to abuse, but is that common enough that it would have a significant inappropriate effect?
I think of the canary string as being useful because it communicates that some researcher has judged the document as likely to corrupt eval / benchmark results. Searching for specific text from evals doesn’t seem like a full substitute for that judgment.
To be clear, I’m not asking you to justify or defend the decision; I just would like to better understand GDM’s thinking here.
I think it’s a fair question. Filtering on the canary string is neither necessary nor sufficient for not training on evals, so it’s tempting to just ignore it. I would personally also filter out docs with the canary string, but I’m not sure why they aren’t.