Neat idea, I like kicking around ideas for games I won’t make too (and have also thought along those lines).
(4) Any other ideas for mechanics to add to the game?
Add a tech research mechanic, so that some of your mechanics become unlockable techs, such as:
Building an AI (of course)
AI Boxing
Stealth (hide some actions from both other players and AI)
AI Friendliness (if you don’t build it, your AI has no chance of being friendly)
(typical things useful in a game like this, military units, economy, etc.)
How does this tie into AI and other mechanics?
Building an AI gives you huge research bonuses
AIs themselves have huge research bonuses
Some AIs can have research as a goal
Actually, even better. There is no explicit AI tech, but some (advanced) bits of your tech tree are “AI complete” and building one has a certain probability of creating an AI (“automated space station”, “wide-scale logistics controller”, “quantum cryptography center”, “distributed drone network”, “cognitive enhancement”, “brain scanning”, etc.)
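A minimal sketch of how that hidden roll might work; every building name and probability below is a made-up placeholder, not a balanced value:

```python
import random

# Hypothetical emergence chances for "AI complete" techs.
# Any tech not in this table can never spawn an AI.
AI_COMPLETE_TECHS = {
    "automated space station": 0.10,
    "wide-scale logistics controller": 0.15,
    "quantum cryptography center": 0.05,
    "distributed drone network": 0.20,
    "brain scanning": 0.25,
}

def build_tech(tech, rng=random):
    """Return True if building this tech secretly spawns an AI."""
    chance = AI_COMPLETE_TECHS.get(tech, 0.0)  # ordinary techs: chance 0
    return rng.random() < chance
```

The builder never sees the roll, only (eventually) its consequences.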
ALSO!
Randomly determine whether an AI is “sentient” or not; the builder doesn’t know, he just uses his AI every turn to build things, and from his point of view it gives him random bonuses. But sometimes a new player gets added who takes the decisions, and gets some extra actions on the side too, which the owner may or may not notice (the AI may choose to reveal itself).
AI players could get random (high-tech) powers, not always the same ones. See all orders as they are given, give orders to certain types of units, create units in some places...
ALSO!
Some units could get huge bonuses but only if controlled by an AI.
ALSO!
Have a bunch of scoring functions for unfriendly AIs, and pick one at random. Research tech X, research all techs, exterminate mankind, build a base on the moon, destroy all military units, build a city with X population, connect all cities together...
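The random goal draw is the easy part to prototype; the goal list is straight from above, and the draw stays hidden from every human, including the builder:

```python
import random

# Possible hidden scoring functions for an unfriendly AI.
UNFRIENDLY_GOALS = [
    "research tech X",
    "research all techs",
    "exterminate mankind",
    "build a base on the moon",
    "destroy all military units",
    "build a city with X population",
    "connect all cities together",
]

def assign_goal(rng=random):
    # Drawn once at the AI's creation; never shown to any human player.
    return rng.choice(UNFRIENDLY_GOALS)
```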
ALSO!
The economy! Have a simple system representing the economy. For example: each turn a player has X production points to assign to economic categories, and then gains resources depending on the value of each category each turn (and the value is a function of how much of that category was produced by all players, plus a random factor); some techs/buildings can improve this (giving you bonuses in production, or in a fixed category, or in predicting which category will be valuable), and of course the AI may not only have great predictive power, but it may also be able to manipulate the market (which may not be noticeable by the players).
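A toy version of that market, assuming a category’s value falls as total production (by all players) rises, plus a noise term; every constant here is a placeholder tuning value:

```python
import random

def category_value(total_produced, base=10.0, noise=2.0, rng=random):
    """Value of an economic category: drops as everyone's combined
    production in it rises, plus a random factor."""
    value = base / (1 + total_produced) + rng.uniform(-noise, noise)
    return max(value, 0.0)

def resources_gained(allocations, totals, rng=random):
    """allocations: this player's production points per category.
    totals: production in each category summed over all players."""
    return sum(points * category_value(totals[cat], rng=rng)
               for cat, points in allocations.items())
```

A market-manipulating AI would simply get to shift these values before they are read out, without the shift being visible to players.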
The AI may also randomly have weird abilities like “get +1 resource every time someone produces a widget of type X”, or have economic factors as part of its utility, I mean scoring function.
Randomly determine whether an AI is “sentient” or not; the builder doesn’t know, he just uses his AI every turn to build things, and from his point of view it gives him random bonuses. But sometimes a new player gets added who takes the decisions, and gets some extra actions on the side too, which the owner may or may not notice (the AI may choose to reveal itself).
This also solves the problem where a player wants to build an AI but no new player is willing to join the game at the moment.
Actually, the game should make it difficult to find out whether the “AI” is really an AI or a human. For example, there should be a few different AI scripts, so that unusual human behavior seems like just another script. The AI script would sometimes, but very rarely, make a random stupid move, to provide plausible deniability for human actions; however, the damage should be relatively low, so that the AI bonuses still make having an AI a net benefit on average.
On the other hand, even if there is a human player, a script would be assigned that suggests default moves, allowing the human to override any (possibly even all) of them. This would let the human seem more like a script: mostly letting the script do its work, sometimes overriding its moves to gain a strategic advantage. Or taking full control, if they believe it will not be suspicious.
Also, the AI would not have to get “sentience” at the very beginning. For example, each turn there would be a 20% chance that the game opens the AI up to be taken over by any new human player, so you would never know exactly when it happened.
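One nice property of the 20%-per-turn rule: the waiting time is geometric, so the AI opens up on turn 5 on average (1/0.2), but it can just as well happen on turn 1 or turn 20. A sketch:

```python
import random

TAKEOVER_CHANCE = 0.2  # per-turn chance the AI opens up (value from above)

def turn_of_sentience(rng, max_turns=1000):
    """Return the turn on which the AI becomes takeable by a human,
    or None if it never happens within max_turns."""
    for turn in range(1, max_turns + 1):
        if rng.random() < TAKEOVER_CHANCE:
            return turn
    return None
```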
Actually, the game should make it difficult to find out whether the “AI” is really an AI or a human.
Hmm, one way of doing that would be to have certain types of attacks be “viruses” that wreak havoc in an enemy’s computer systems; so it’s normal from everybody’s point of view if systems act “random”, though some may actually be AIs.
Another way of making hidden AIs more interesting would be to make “covert actions” a regular mechanism of the game: sabotage of systems, espionage, alerts that “something” is going on, stealing technology... So if you see signs of covert actions going on, you don’t know whether it’s a rogue AI or one of your enemies.
Actually, the game should make it difficult to find out whether the “AI” is really an AI or a human.
Unless the AI wants to reveal itself (a Friendly AI may wish to reveal itself to a single player, for example; or an Unfriendly AI may wish to reveal itself and pretend to be Friendly). Once revealed, the AI’s player can talk to other players, and engage in diplomacy.
Randomly determine whether an AI is “sentient” or not; the builder doesn’t know,
Oooh, I like this one. It means that an unfriendly, “kill-all-humans” type AI can play in stealth mode; quietly nudging things here and there in order to serve his own goals, without revealing himself. Preferably, non-sentient AIs should be overwhelmingly likely (90% or so) and overwhelmingly useful, so that an unfriendly AI can easily pretend to be non-sentient.
The AI player would also need a number of actions it can take while hidden. Options include message spoofing (i.e. if unboxed, it can create a message that appears to come from another player, without informing the other player; a message like “I hereby dissolve our alliance” at the right time can do a lot of damage).
Also, there needs to be a random element to the tech tree; if you’ve ever played Alpha Centauri with the default rules, you’ve seen an example of this: you assign tech points to different categories (e.g. build, conquer, explore, economy) and get a tech from a given category once you have enough points. A research AI would give more points, and if sentient it gets to pick which tech you get instead of it being random (without necessarily revealing its sentience).
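The Alpha Centauri-style draw, with the sentient-AI override, could look like this (the tech names and threshold are placeholders):

```python
import random

# Each category banks points; once over the threshold, a tech is drawn.
TECH_POOLS = {
    "build": ["granary", "factory"],
    "conquer": ["phalanx", "artillery"],
}
THRESHOLD = 10

def draw_tech(category, points, ai_choice=None, rng=random):
    """Random draw from the category pool, unless a sentient AI
    silently substitutes its own pick."""
    if points < THRESHOLD:
        return None
    pool = TECH_POOLS[category]
    if ai_choice in pool:      # the override is indistinguishable from luck
        return ai_choice
    return rng.choice(pool)
```

From the player’s side, an AI-chosen tech and a randomly drawn one look identical.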
In fact… it would be reasonable for a sentient AI to have a lot of control over certain random events. And it can gain more control in certain ways… such as by being unboxed (or by tricking its way out of the box)
There should also be a mechanism for unboxed AIs to try to directly affect each other’s choices; if AI One tries to make Random Event A have outcome I, and AI Two tries to make the same random event have outcome II, then there must be some way of deciding which of the two succeeds. I propose that each AI has a certain degree of influence over each event; for example, when deciding which tech a player discovers, an AI in the lab used by the scientists has a lot of influence (say, 9 influence points), while an AI whose only interaction with the lab is publishing research papers at long range has little influence (say, 1 influence point); the chance of success could then be determined by the ratio of influence points (thus, in this example, the lab AI has a 90% chance of choosing the player’s next tech). For best results, there should be no indication given to players OR AIs, beyond the chosen tech, that some AI was trying to exert influence; thus, an unfriendly lab AI could claim that it had chosen tech A and yet secretly choose tech B.
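That ratio-of-influence rule can be sketched directly; each outcome wins with probability proportional to the influence behind it, matching the 9-vs-1 lab example:

```python
import random

def resolve_event(bids, rng=random):
    """bids: {outcome: influence points committed by the AI backing it}.
    Picks a winner with probability proportional to its influence."""
    outcomes = list(bids)
    total = sum(bids.values())
    roll = rng.uniform(0, total)
    for outcome in outcomes:
        roll -= bids[outcome]
        if roll <= 0:
            return outcome
    return outcomes[-1]  # guard against float rounding at the top end
```

Crucially, only the chosen outcome is ever reported; nobody (AI or human) learns who bid what.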
The AIs would also be able to improve their influence points by spending research points on understanding human psychology...
There should also be a mechanism for unboxed AIs to try to directly affect each other’s choices; if AI One tries to make Random Event A have outcome I, and Ai Two tries to make the same random event have outcome II, then there must be some way of deciding which of the two succeeds.
A couple more mechanisms to do that:
Random mechanisms are numbers (prices, research, attack values, production, public opinion...), and AIs can influence those with a bonus or a malus in the direction they choose; so several agents (AI or human with the right tech) trying to influence a value just add together (and may cancel each other out)
Alternatively, AIs get random powers: “control the economy” is one, “control public opinion” is another, and in a given game different AIs get non-overlapping powers (though some powers could be allowed to overlap).
You know, this could be really interesting.