I love this approach. I think it relates closely to how systems need good ground-truth signals, and to how verification mechanisms are a core part of what we need for good AI systems.
I would be very interested in setting more of this up as infrastructure: better coding libraries and similar tooling for the AI Safety research ecosystem. There’s no reason this shouldn’t be a larger effort within alignment research automation. I think it relates to some of the formal verification work, but sits at the abstraction level above it; if we want efficient software systems that can be integrated with formal verification, this seems like a great direction to take things.