AI might help people generate automated tests for the key results in OKRs and publish the outcome when a key result is not met.
If key results were published this way, it could help with AI pauses by validating, for example, that no stories on creating beyond-frontier models have been written or started (assuming that is a key result people care about).
I figured that objectives and key results are how companies maintain internal alignment and avoid internal arms races, so they might be useful for alignment between entities too (perhaps with government-accredited badges for organisations that maintain objectives like pausing and responsible data use).
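The first idea above could be sketched roughly as follows. This is a minimal illustration under assumptions, not a real system: the `KeyResult` type, the `publish` stand-in, and the example compute-ceiling check are all hypothetical names invented here, and the numbers are made up.

```python
# Minimal sketch: key results expressed as automated checks, with any
# unmet result "published". All names and figures are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class KeyResult:
    description: str
    check: Callable[[], bool]  # returns True if the key result is met

def publish(message: str) -> None:
    # Stand-in for a real publishing channel (public dashboard, registry, ...)
    print(message)

def audit(key_results: list[KeyResult]) -> list[str]:
    """Run every check and publish each key result that is not met."""
    failures = []
    for kr in key_results:
        if not kr.check():
            failures.append(kr.description)
            publish(f"UNMET: {kr.description}")
    return failures

# Hypothetical example: a self-reported training-run log checked against a
# pledged compute ceiling.
training_runs = [{"name": "model-a", "flops": 1e24}]
PLEDGED_FLOP_CEILING = 1e25

key_results = [
    KeyResult(
        "No training run exceeds the pledged compute ceiling",
        lambda: all(r["flops"] <= PLEDGED_FLOP_CEILING for r in training_runs),
    ),
]

unmet = audit(key_results)  # empty list: the example run is under the ceiling
```

The point of the sketch is that once a key result is written as a check, both running it and publishing failures can be automated, which is where AI-generated tests would slot in.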