Social deduction games
with clear final objectives: Mafia, Tank Tactics, Neptune’s Pride. These games have clear winning conditions, and thus final objectives for the players. The meta objectives are open-ended, which gives players more latitude in how they play. These games have very few rules and mechanics constraining how the game is played.
with ambiguous final objectives: Petrov Day, Reddit’s The Button. These games have no clear winning conditions, so the final objectives themselves are open-ended. They resemble the previous category: few rules and open-ended play styles. The main difference is the absence of a final objective, which may change how players approach them, but the overall play style is more or less the same. These are usually called social experiments, for their lack of clearly defined final objectives.
rules can be directly changed: Nomic and its variants. These are essentially social deduction games that break the fourth wall. The open-endedness applies not just to gameplay but also to directly modifying the game itself, to varying extents. If the rules themselves can be changed arbitrarily, the game moves closer to simulating real life.
No rules, no objectives: Real life. This is Nomic where everything is arbitrary. The only actual limitations are the constraints of reality and the survival of the players themselves in the real world.
No rules, no objectives, nothing is real: Simulation theory, the Matrix. This is the “turtles all the way down” version of the above, nested to however many levels you wish to go.
Single-player games vs. multiplayer games. Single-player games are the most restrictive form of gaming: NPCs are limited mainly because they can’t play the meta, or anything beyond it. The breakdown above applies to single-player games too, but they are less interesting because they lack the multiplayer component that gives rise to the meta and beyond.
Alignment is always contextual with respect to the social norms of the time. We haven’t had AI for very long, so people assume the alignment problem is a solve-it-once-and-for-all affair rather than an ever-changing problem.
It’s very similar in nature to how new technologies are tested for mass adoption. Technologies have been massively adopted before their safety was thoroughly researched; you can only do so much before the demand for them and people’s impatience push for their ubiquity, as with asbestos and radiation. When we fail to find alternatives that meet new demands, a technology will be massively adopted regardless of its consequences.

AI can be thought of as just an extension of the computer, specialized to certain tasks. The underlying technology is fundamentally the same; what has mostly changed is how it is used, thanks to improved efficiency. The computer has already seen mass adoption, but today’s computers are no longer the same machines people were using 30 or even 20 years ago. Most new technologies aren’t even close to being as multipurpose as the computer, so we are dealing with an unprecedented kind of mass-adoption event in human history, one where the technology itself is closely tied to how it is used and to the ever-changing computations people at the time decide to use it for.