The evolution is meant to quickly shrink the search space. If every perk in the pool starts with just 1 ticket, then most perks will be tested only once (because they lead to a loss and their population immediately drops to zero). If the very first run is a loss, the true (unknown) winrate of that perk is unlikely to be 90%, so we should not regret throwing it away.
The synergies will affect population sizes later. Pairs that synergise are slightly more likely to win and thus to increase the tickets of both perks in the pool. After some weak perks have fallen off and the total number of species in the pool has shrunk, we expect potential synergy-makers to “meet” more often. That is a step in the right direction. If their win was just a lucky coincidence and the perks are not consistently good, they will die out a bit later.
Of course, if the very best build relies purely on synergy and is a combination of very-bad-solo perks, it will not be found. I acknowledge there is no way to find the true best combination; that search would require brute-force playing every possible combination 20+ times. The aim is to find a manageable algorithm which does not rely on personal evaluation at all (because opinion is part of the reason for the “stagnation of the meta”).
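The loop described above can be sketched in a few lines. Everything here is hypothetical: the perk names, their “true” winrates, and the simulated match are stand-ins for the real game, which would supply the win/loss result instead. The rule is the one from the text: each perk starts with 1 ticket, builds are drawn weighted by tickets, a win grants a ticket to every perk in the build, a loss removes one, and a perk at zero tickets is out of the pool.

```python
import random

random.seed(42)

# Hidden "true" winrates -- made up for illustration only;
# the algorithm itself never sees these numbers.
true_winrate = {"A": 0.6, "B": 0.55, "C": 0.3, "D": 0.25, "E": 0.5}

# Every perk starts with exactly 1 ticket, as described above.
tickets = {perk: 1 for perk in true_winrate}

def sample_build(tickets, size=2):
    """Draw a build of distinct perks, weighted by ticket counts."""
    pool = [p for p, n in tickets.items() if n > 0]
    weights = [tickets[p] for p in pool]
    build = []
    while len(build) < size and pool:
        perk = random.choices(pool, weights=weights, k=1)[0]
        i = pool.index(perk)
        pool.pop(i)
        weights.pop(i)
        build.append(perk)
    return build

def play(build):
    """Simulated match: win with probability = mean of the hidden winrates."""
    p_win = sum(true_winrate[p] for p in build) / len(build)
    return random.random() < p_win

for _ in range(500):
    build = sample_build(tickets)
    if len(build) < 2:
        break  # the pool has collapsed to fewer than 2 living perks
    delta = 1 if play(build) else -1
    for perk in build:
        tickets[perk] = max(0, tickets[perk] + delta)
```

With 1 starting ticket, a first-run loss immediately zeroes a perk out, which is exactly the fast pruning the text argues for.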
What follows are my own thoughts.
If you have N situations, it does not automatically mean they all have the same probability. I call the error of assuming that they do the “equiprobability mistake”.
Outcomes have to be mutually exclusive. People make the mistake at the very beginning, when constructing the Ω set: two of those situations are not exclusive. One of them literally guarantees, with 100% certainty, that the other will happen. In a correct Ω, the probability of one outcome given any other is zero. To check whether outcomes are exclusive, draw the branching-universe graph, imagine a single slice at a much later point in time (Sunday), and count how many parallel universes reached that point. You will find only two, but thirders count the second one twice. No matter what situation you study, the nodes you take as outcomes can never be consecutive. If that were not an axiom, I could add “I throw a die” into the set of possible numbers the die shows at the end, and I would get nonsense that is not an Ω: {I throw the die, die shows 1, die shows 2, shows 3, 4, 5, 6}. Thirders literally construct such an Ω and thus get 1/3 for an outcome, just as I would get a “1/7 chance” of rolling a 6 if I were also using a corrupted Ω set.
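The die example can be made concrete. The sketch below just compares the two Ω sets from the paragraph above and applies the naive “every listed element is equally likely” rule to each, showing where the bogus 1/7 comes from:

```python
from fractions import Fraction

# A correct Omega: six mutually exclusive final states of the die.
correct_omega = ["shows 1", "shows 2", "shows 3",
                 "shows 4", "shows 5", "shows 6"]

# A "corrupted" Omega: the preceding event "I throw the die"
# is wrongly listed alongside the outcomes it leads to.
corrupted_omega = ["I throw the die"] + correct_omega

# The equiprobability mistake: assign 1/|Omega| to every listed element.
p_correct = Fraction(1, len(correct_omega))      # 1/6
p_corrupted = Fraction(1, len(corrupted_omega))  # 1/7
```

The 1/7 is nonsense for the same reason the thirder's 1/3 is: a non-exclusive node was smuggled into Ω and then treated as one more equally likely outcome.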
There is a table. On it I place two apples, a jar, a bin, and a box. I put the first apple into the jar. I put the jar into the box. I put the second apple into the bin. A thirder comes along and starts counting: “How many apples in the jar? One. How many apples in the box? One. How many apples in the bin? One. So there are 3 apples.” And he forgets that the apple in the jar and the apple in the box are THE SAME apple.
P(Monday|Tails)=P(Tuesday|Tails) is technically true, not “because two entities are equal”, but because an entity is compared to itself! It is a single outcome, phrased in two different ways via consecutive events of that single outcome.
When the apple is in the jar, it is guaranteed to also be in the box, the same way the <Monday and Tails> situation guarantees <Tuesday and Tails>.
In graph terms, both situations are literally just one node sliding along the same branch, never reaching any branching point.
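The “slice on Sunday” check can also be simulated. This is a sketch under the usual Sleeping Beauty setup assumed throughout this comment: Heads produces one awakening (Monday), Tails produces two awakenings (Monday and Tuesday) inside the SAME universe. Counting branches at the Sunday slice gives the halfer answer; counting awakenings double-counts every Tails branch:

```python
import random

random.seed(0)

universes = 0        # branches that reach the Sunday slice
awakenings = 0       # what a thirder counts instead
tails_universes = 0

for _ in range(10_000):
    tails = random.random() < 0.5
    universes += 1                    # every branch reaches Sunday exactly once
    awakenings += 2 if tails else 1   # the Tails branch contributes two awakenings
    tails_universes += tails

# Per universe, Tails shows up ~1/2 of the time; per awakening it looks
# like ~2/3, because one Tails universe was counted twice -- the same
# double-counting as the apple in the jar and the apple in the box.
tails_share_of_universes = tails_universes / universes
```

The simulation does not settle the philosophical question, of course; it only makes the two counting conventions explicit.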