Oh interesting! I just had a go at testing it on screenshots from a parallel conversation and it seems like it incorrectly interprets those screenshots as also being of its own conversation.
So it seems like ‘recognising things it has said’ is doing very little of the heavy lifting and ‘recognising its own name’ is responsible for most of the effect.
I’ll have a bit more of a play around and probably put a disclaimer at the top of the post some time soon.
I believe Dusan was trying to say that davidad’s agenda limits the planner AI to writing only provably correct mathematical solutions. To expand: compared to what you briefly describe, the idea in davidad’s agenda is that you don’t try to build a planner that’s definitely inner-aligned; instead, you have a formal verification system that ~guarantees what effects a plan will and won’t have if implemented.