Thanks, appreciate it! Interested if you have any particular tasks you’d want as part of the safety case (we are actively building out a dataset of tasks for evaluating interpretability assistants and looking for ideas).
maybe too late, but here are some thoughts (TL;DR out-of-distribution prompt-based stress tests, and maybe some fancy SDF stuff) https://www.lesswrong.com/posts/RQadLjnmBZtvg7p8W/on-meta-level-adversarial-evaluations-of-white-box-alignment