I’m not sure we should call it self-improvement at all: the prompt is not part of the model, so the model (the “self”) isn’t actually improving. It seems more like practicing a skill than improving the author itself.
jnalanko
Karma: 21
This is a nice approach! Sounds a lot like an argument map: https://en.wikipedia.org/wiki/Argument_map.
One tweak I might make is to assign probabilities to the circles instead of binary yes/no decisions. This would give a more principled way to reason with multiple conflicting pieces of evidence.
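A minimal sketch of what that probabilistic version could look like, assuming each piece of evidence is independent and summarized as a likelihood ratio (the function name and the example numbers here are illustrative, not from the original post):

```python
def posterior(prior: float, likelihood_ratios: list[float]) -> float:
    """Combine independent pieces of evidence bearing on one claim.

    Each likelihood ratio is P(evidence | claim true) / P(evidence | claim false):
    values > 1 support the claim, values < 1 count against it.
    """
    odds = prior / (1 - prior)          # convert prior probability to odds
    for lr in likelihood_ratios:
        odds *= lr                      # Bayesian update per piece of evidence
    return odds / (1 + odds)            # convert back to a probability

# Two supporting pieces of evidence and one conflicting one:
p = posterior(0.5, [3.0, 2.0, 0.25])   # → 0.6
```

With probabilities on the circles, conflicting evidence just pulls the posterior in opposite directions instead of forcing a hard yes/no call at each node.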