Reminds me of Core War, except that the cells were single memory locations. There were no ‘actors’, if you could call them that, because cells had no owners; only execution threads did.
Maybe some insights can be derived by comparing this with the considerable experience with Core Wars programs. For example, there was no single winning strategy, but rather rock-paper-scissors types of programs. It also allows analysis and exploitation of opponents’ code—albeit at a very low level.
ADDED: The advantage of this model (which has been used to study the evolution of actors) is that it is extremely simple. The disadvantage is that the effect per computation is very high: a simple loop can alter a sizable fraction of the ‘world’ within a short time. Thus complex analysis of opponents never pays off (except for tests like ‘is some opponent at this address?’).
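To make the “effect per computation” point concrete, here is a minimal sketch—assuming a simplified Core War-like model, not real Redcode—of a circular memory in which a tight copy loop overwrites one cell per tick. The names (`core`, `run_copy_loop`) and the core size are illustrative assumptions:

```python
# Toy sketch of a Core War-like circular memory (NOT real Redcode semantics).
# A loop that writes one cell per tick clobbers k/N of the world in k ticks.

N = 8000                      # classic Core War default core size
core = [0] * N                # 0 = empty cell

def run_copy_loop(start, ticks):
    """Each tick, overwrite the next cell — an imp-style sweep."""
    for k in range(ticks):
        core[(start + k) % N] = 1   # 1 = attacker's instruction

run_copy_loop(start=0, ticks=2000)
fraction = sum(core) / N
print(fraction)               # 0.25 — a quarter of the world altered in 2000 ticks
```

Any cycles spent analyzing an opponent are cycles not spent on such a sweep, which is why cheap blind overwriting dominates.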
I like your bot-world as it is a kind of hybrid of other bot worlds and Core Wars in that it does allow inspection. But I think its complexity (robots, parts, items, inspection, cells; esp. the winning condition) doesn’t cut directly to the problem at hand. The key point missing in Core Wars is a limit to physical reach. Using a limit to the range of mov-instructions would have made a significant difference—even without going to 2D or 3D (which actually complicates reasoning). Or mov instructions could take more time the further they reach.
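The two proposed variants can be sketched as follows. This is a hypothetical modification, not standard Redcode; `MAX_REACH`, `mov`, and `mov_time` are made-up names for illustration:

```python
# Hypothetical range-limited MOV (assumption: not standard Core Wars semantics).
N = 8000
core = list(range(N))
MAX_REACH = 100               # illustrative limit on physical reach

def mov(src, offset):
    """Variant 1: a hard cap — distant cells simply cannot be reached."""
    if abs(offset) > MAX_REACH:
        raise ValueError("target out of physical reach")
    core[(src + offset) % N] = core[src]

def mov_time(offset):
    """Variant 2: no cap, but the instruction's cost grows with distance."""
    return 1 + abs(offset) // MAX_REACH

mov(0, 50)                    # within reach: succeeds
try:
    mov(0, 5000)              # a distant opponent can't be attacked in one step
except ValueError as e:
    print(e)
print(mov_time(5000))         # 51 ticks — reaching far is possible but slow
```

Either variant restores locality: an opponent far away is safe (or expensive to hit) for a while, so investing computation in analysis before closing the distance can pay off.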
I agree that a more gradual winning criterion than dead/alive (as in Core Wars) would help direct the search for more efficient programs.
Hmm, if you were talking about “rock-paper-scissors” as an example of games without a pure Nash equilibrium, then some games have one and some don’t. Intuitively, the more complex the game (the larger the number of pure strategies), the less likely it is that there is a pure Nash equilibrium, but that’s not a strict rule. However, under weak assumptions, there is always at least one, generally mixed, Nash equilibrium.
If by “winning strategy” you meant a mixed strategy that guarantees a greater-than-zero payoff, then in a two-player symmetric zero-sum game it can’t exist: if it existed, both players would use it at the Nash equilibrium, and the sum of their expected payoffs would be greater than zero—contradicting the zero-sum property.
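A quick numeric check of this for rock-paper-scissors (standard game theory, no assumptions beyond the text): the uniform mixed strategy earns expected payoff exactly 0 against every pure strategy, so the game’s value is 0 and no strategy can guarantee a positive payoff.

```python
# Rock-paper-scissors: a symmetric zero-sum game with no pure Nash equilibrium,
# but a mixed one (play each move with probability 1/3). Its value is 0.
from fractions import Fraction

# Row player's payoff matrix; rows/cols = rock, paper, scissors
A = [[ 0, -1,  1],
     [ 1,  0, -1],
     [-1,  1,  0]]

uniform = [Fraction(1, 3)] * 3

# Expected payoff of the uniform strategy against each pure column strategy
payoffs = [sum(p * A[i][j] for i, p in enumerate(uniform)) for j in range(3)]
print([int(p) for p in payoffs])   # [0, 0, 0] — no guaranteed positive payoff
```

Since the uniform strategy breaks even against everything and the game is symmetric, neither player can do better than value 0 at equilibrium, exactly as the argument above predicts.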
A simple loop can alter a sizable fraction of the ‘world’ within a short time. Thus complex analysis of opponents never pays off (except for tests like ‘is some opponent at this address?’).
It’s not because a simple loop can alter a lot of space. It’s because the Core Wars world is crowded both in space and time. Make agents start a light-year away from each other and/or introduce a communication/energy/matter bottleneck, and all of a sudden it pays off to do a complex code analysis of your opponent!
See also: The AI Challenge—Ants Article
That’s true for any symmetric zero-sum game.
Interesting, I wasn’t aware of that. Does it hold only for sufficiently complex games? Can you give a reference?
That’s exactly the same thing, only phrased concretely vs. abstractly.
That’s an abstract formulation of my proposal that “using a limit to the range of mov-instructions would have made a significant difference”.