I think that if you want to compute logical correlations between programs, you need to look at their, well, logic. E.g., if you have some way of extracting a natural-abstraction-based representation of their logic, you could build something like a causal graph from it and then design a similarity metric for comparing these representations.
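To make that concrete, here is a minimal sketch (in Python, using networkx) of what such a pipeline could look like, assuming you already had the hard part: `extract_logic_graph` below is a purely hypothetical stand-in for the natural-abstraction extraction step, and graph edit distance is just one possible choice of similarity metric, not a claim about the right one.

```python
# A minimal sketch of the pipeline above. `extract_logic_graph` is a
# hypothetical placeholder for "extract an abstracted, causal-graph-like
# representation of the program's logic"; here it just builds a trivial
# dataflow-style chain so the example runs end to end.
import networkx as nx

def extract_logic_graph(program_source: str) -> nx.DiGraph:
    """Hypothetical: map a program to a causal-graph-like abstraction of its logic."""
    g = nx.DiGraph()
    for i, line in enumerate(program_source.strip().splitlines()):
        g.add_node(i, label=line.strip())   # nodes ~ abstracted computational steps
        if i > 0:
            g.add_edge(i - 1, i)            # edges ~ "this step influences the next"
    return g

def logical_similarity(src_a: str, src_b: str) -> float:
    """Compare two programs via (inverse) graph edit distance on their abstractions."""
    g_a, g_b = extract_logic_graph(src_a), extract_logic_graph(src_b)
    dist = nx.graph_edit_distance(
        g_a, g_b,
        node_match=lambda n1, n2: n1["label"] == n2["label"],
    )
    return 1.0 / (1.0 + dist)  # 1.0 means identical abstractions

if __name__ == "__main__":
    p1 = "x = obs()\ny = f(x)\nact(y)"
    p2 = "x = obs()\ny = g(x)\nact(y)"
    print(logical_similarity(p1, p2))  # high-ish: same structure, one differing step
```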
I have a suspicion, though, that this is not the right approach for handling ECL because ECL (I think?) involves the agent(s) looking at (some abstraction over) their “source code(s)” and then making a decision based on that. I expect that this ~reflection needs to be modeled explicitly.
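For illustration only, here is a toy of what "modeling the reflection explicitly" might mean: the decision procedure itself takes (an abstraction of) its own source code and the other agent's source code as inputs, roughly in the spirit of program-equilibrium-style bots. The `abstract` helper and the cooperate-iff-abstractions-match rule are hypothetical placeholders, not a proposal for how ECL should actually be handled.

```python
# Toy sketch: the agent's policy explicitly consumes abstractions of both
# source codes, so the "reflection" is part of the model rather than
# something computed from the outside.
import inspect

def abstract(source: str) -> str:
    """Hypothetical abstraction step; here just a whitespace-normalised form."""
    return " ".join(source.split())

def my_policy(other_source: str) -> str:
    my_source = inspect.getsource(my_policy)
    # Decide based on (abstractions of) both source codes:
    # cooperate iff the other program's abstraction matches my own.
    return "cooperate" if abstract(other_source) == abstract(my_source) else "defect"

if __name__ == "__main__":
    # Against an exact copy of itself, the policy cooperates.
    print(my_policy(inspect.getsource(my_policy)))
```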