If all the labs intend to cause recursive self-improvement, and their alignment plan amounts to a vague "eh, we'll solve it with automated AI alignment researchers", that is not good enough.
At the very least, they all need to publish the details of their plan in a Responsible Automation Policy.