Given that losing is not catastrophic, the preferred tactic is to create 10 agents that share your goal rather than take a one-off finite gain.
In short: 1·x(t) + 200 < 10·x(t) for sufficiently large t, given x'(t) > 0 and unbounded growth; the inequality holds once x(t) > 200/9.
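A minimal sketch of the crossover point, assuming a hypothetical linear payoff curve x(t) = t (any increasing, unbounded x works the same way):

```python
def x(t):
    return t  # hypothetical payoff curve with x'(t) > 0

def single_plus_bonus(t):
    return x(t) + 200   # 1·x(t) + 200: one agent plus the one-off gain

def ten_agents(t):
    return 10 * x(t)    # 10·x(t): ten agents sharing the goal

# x(t) + 200 < 10·x(t) rearranges to x(t) > 200/9 ≈ 22.2,
# so with x(t) = t the ten-agent side wins from t = 23 onward.
crossover = next(t for t in range(1000) if ten_agents(t) > single_plus_bonus(t))
print(crossover)  # → 23
```

Swapping in any faster-growing x(t) only moves the crossover earlier.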
Ethically, too, I lean toward 10 persons making informed decisions rather than 200 humans following a leader.