If you presume that you’re living in an iterated prisoner’s dilemma with reproduction according to the payoff matrix, then you can argue that the proportion of TrollBots will decline to a negligible level for almost all diverse starting populations. So one can discuss meaningful optimality in that sense, and demonstrate that PrudentBot will do better than FairBot whenever the population contains CooperateBots.
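As a rough illustration of that argument, here is a minimal Python sketch of reproduction proportional to payoff, assuming the standard PD payoffs (T=5, R=3, P=1, S=0) and my reading of how CooperateBot, DefectBot, FairBot, and PrudentBot play against one another; both tables are illustrative assumptions, and TrollBot is left out because its behaviour isn’t pinned down here.

```python
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# MOVES[a][b] = the move bot a plays against bot b (assumed behaviour).
MOVES = {
    "CooperateBot": {"CooperateBot": "C", "DefectBot": "C", "FairBot": "C", "PrudentBot": "C"},
    "DefectBot":    {"CooperateBot": "D", "DefectBot": "D", "FairBot": "D", "PrudentBot": "D"},
    "FairBot":      {"CooperateBot": "C", "DefectBot": "D", "FairBot": "C", "PrudentBot": "C"},
    "PrudentBot":   {"CooperateBot": "D", "DefectBot": "D", "FairBot": "C", "PrudentBot": "C"},
}

def fitness(pop):
    """Expected payoff of each bot against an opponent drawn from the population."""
    return {a: sum(share * PAYOFF[(MOVES[a][b], MOVES[b][a])]
                   for b, share in pop.items())
            for a in pop}

def step(pop):
    """One generation of reproduction proportional to payoff."""
    fit = fitness(pop)
    mean = sum(pop[a] * fit[a] for a in pop)
    return {a: pop[a] * fit[a] / mean for a in pop}

pop = {"CooperateBot": 0.25, "DefectBot": 0.25, "FairBot": 0.25, "PrudentBot": 0.25}
for _ in range(100):
    pop = step(pop)

# PrudentBot ends with a larger share than FairBot: it gets 5 rather than 3 off
# every CooperateBot and is otherwise identical, so its share grows faster for
# as long as any CooperateBots remain.
print({a: round(x, 3) for a, x in pop.items()})
```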
Actually, since this is a deterministic setup, you can go one better and consider an ‘iterated tournament’ as a difference equation in N-dimensional space, where N is the number of strategies you include and the dimensions represent each strategy’s proportion of the total population; then you can trace the trajectory the demographics will take from any starting location. There will be a handful of point equilibria, as well as several equilibrium lines (actually, I think, an equilibrium volume, but this depends on which strategies you include), and you can talk about which equilibria are stable or unstable, and decide not to care about strategies that only exist in unstable equilibria. You probably need to require that the population space is seeded with CooperateBot, DefectBot, and possibly FairBot in order to get neat results.
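A sketch of that difference-equation view, again purely illustrative: the matrix A below encodes the same assumed interactions as above (A[i][j] is the payoff strategy i gets against strategy j), and the update is the discrete replicator map x_{t+1,i} ∝ x_{t,i}·(Ax_t)_i, iterated from random interior starting points to see where the demographics settle.

```python
import numpy as np

# Payoff of the row strategy against the column strategy, under the assumed
# behaviours: rows/columns are CooperateBot, DefectBot, FairBot, PrudentBot.
A = np.array([
    [3, 0, 3, 0],   # CooperateBot
    [5, 1, 1, 1],   # DefectBot
    [3, 1, 3, 3],   # FairBot
    [5, 1, 3, 3],   # PrudentBot
], dtype=float)

def step(x):
    """One step of the difference equation: shares scale with relative fitness."""
    fit = A @ x
    return x * fit / (x @ fit)

def limit(x, steps=2000, tol=1e-12):
    """Iterate from a starting point until the trajectory (roughly) stops moving."""
    for _ in range(steps):
        new = step(x)
        if np.max(np.abs(new - x)) < tol:
            return new
        x = new
    return x

rng = np.random.default_rng(0)
for _ in range(5):
    x0 = rng.dirichlet(np.ones(4))   # random starting point in the interior of the simplex
    print(np.round(limit(x0), 3))
```

With these assumed payoffs every interior start ends up on the FairBot/PrudentBot edge, which is exactly one of the equilibrium lines described above: once CooperateBot and DefectBot have died out, every mix of FairBot and PrudentBot earns the same payoff, so the whole edge is a line of rest points.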
This might be a practical problem rather than a mathematical/philosophical one. Many human beings, for cultural or biological reasons, consider certain strategies in various games of economic interaction unfair in a basically arbitrary manner. If you come across a group of unfamiliar intelligences, you might find that they adopt strategies which punish certain other strategies for no apparent (to you) reason. The likelihood of this happening is an empirical/scientific question, not a philosophical one.
It seems to me that TrollBot is sufficiently self-destructive that you are unlikely to encounter it in practice.
I wonder if there are heuristics you can use that would help you not worry too much about those cases.
I wonder that too, but we haven’t come up with anything satisfactory on a formal level despite working on it for a while. Anyone have a good idea?