I’m not convinced, because I don’t think limited context windows are a fundamental problem. Humans have limited context windows too. I keep forgetting things all the time, yet I’m able to work with large codebases. The way around this is organizing information into documentation and hiding details behind layers of abstraction. But Claude can do this too, at least when instructed to. I can ask the AI to explain the main design choices it has made, and I can ask it to change course, still without reading the actual code. Documentation lets me and others know and remember why things are done the way they are.
This is a nice approach! Sounds a lot like an argument map: https://en.wikipedia.org/wiki/Argument_map.
One tweak I might make to this is to assign probabilities to the circles instead of binary yes/no decisions. That would give a more principled way to reason about multiple conflicting pieces of evidence.
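To make the idea concrete, here is a minimal sketch of combining conflicting evidence for one node, under the assumption that each piece of evidence can be summarized as an independent likelihood ratio (all names and numbers here are illustrative, not from any actual argument-mapping tool):

```python
import math

def combine_evidence(prior, likelihood_ratios):
    """Update a prior probability with independent pieces of evidence.

    Each likelihood ratio is P(evidence | claim true) / P(evidence | claim false):
    values > 1 support the claim, values < 1 oppose it. Working in log-odds
    space makes the combination a simple sum.
    """
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    # Convert log-odds back to a probability.
    return 1 / (1 + math.exp(-log_odds))

# One supporting and one opposing piece of evidence partially cancel out,
# instead of forcing a binary yes/no resolution:
p = combine_evidence(prior=0.5, likelihood_ratios=[3.0, 0.5])
print(round(p, 2))  # 0.6
```

The independence assumption is doing a lot of work here; with correlated evidence you would want a proper Bayesian network over the map's nodes rather than a flat sum.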
I’m not sure we should call it self-improvement at all, because the prompt is not part of the model, so the model (the “self”) is not actually improving. It seems more like practicing a skill than an improvement of the model itself.
Interesting experiment. I think this is alright for AI music, but I feel it’s missing a musical identity. Every song has a new vocalist, and a new musical style or even genre. The musical vocabulary between the songs is all over the place. The only thing really holding them together is the shared lyrical theme. That’s where I can somehow feel the presence of a common human author.