Announcing Encultured AI: Building a Video Game
Also available on the EA Forum.
Preceded By: Encultured AI Pre-planning, Part 2: Providing a Service
If you’ve read to the end of our last post, you may have guessed: we’re building a video game!
This is gonna be fun :)
Our homepage: https://encultured.ai/
Will Encultured save the world?
Is this business plan too good to be true? Can you actually save the world by making a video game?
Well, no. Encultured on its own will not be enough to make the whole world safe and happy forever, and we’d prefer not to be judged by that criterion. The amount of control over the world that’s needed to fully pivot humanity from an unsafe path onto a safe one is, simply put, more control than we’re aiming to have. And, that’s pretty core to our culture. From our homepage:
Still, we don’t believe our company or products alone will make the difference between a positive future for humanity versus a negative one, and we’re not aiming to have that kind of power over the world. Rather, we’re aiming to take part in a global ecosystem of companies using AI to benefit humanity, by making our products, services, and scientific platform available to other institutions and researchers.
Our goal is to play a part in what could be a prosperous civilization. And for us, that means building a successful video game that we can use in valuable ways to help the world in the future!
Fun is a pretty good target for us to optimize
You might ask: how are we going to optimize for making a fun game and helping the world at the same time? The short answer is that creating a game world in which lots of people are having fun in diverse and interesting ways in fact creates an amazing sandbox for play-testing AI alignment & cooperation. If an experimental new AI enters the game and ruins the fun for everyone — either by overtly wrecking in-game assets, by subtly affecting the game culture in ways people don’t like, or both — then we’re in a good position to say that it probably shouldn’t be deployed autonomously in the real world, either. In the long run, if we’re as successful as we hope as a game company, we can start posing safety challenges to top AI labs of the form “Tell your AI to play this game in a way that humans end up endorsing.”
Thus, we think the market incentive to grow our user base in ways they find fun is going to be highly aligned with our long-term goals. Along the way, we want our platform to enable humanity to learn as many valuable lessons as possible about human↔AI interaction, in a low-stakes game environment before having to learn those lessons the hard way in the real world.
Principles to exemplify
In preparation for growing as a game company, we’ve put a lot of thought into how to ensure our game has a positive rather than negative impact on the world, accounting for its scientific impact, its memetic impact, and the intrinsic moral value of the game as a positive experience for people.
Below are some guiding principles we’re planning to follow, not just for ourselves, but also to set an example for other game companies:
Pursue: Fun! We’re putting a lot of thought into not only how our game can be fun, but also ensuring that the process of working at Encultured and building the game is itself fun and enjoyable. We think fun and playfulness are key for generating outcomes we want, including low-stakes high-information settings for interacting with AI systems.
Maintain: opportunities to experiment. No matter how our product develops, we’re committed to maintaining its value as a platform for experiments, especially experiments that help humanity navigate the present and future development of AI technology.
Avoid: teaching bad lessons. On the margin, we expect our game to incentivize cooperation over conflict, relative to other games. If players demand some amount of in-game violence, we might enable it, but only along with other features that reward people/groups for finding ways to avoid violence (like in the real world). We hope that our creativity in this regard can set a positive example for other game companies.
Avoid: in-game suffering. Unlike other game developers, we are committed to ensuring that the entities in our game are not themselves susceptible to conscious suffering. Today’s narrow AI systems are not likely to be entities that suffer, but if that changes, we’ll be on the lookout to avoid it, and to promote industry-wide standards for minimizing the in-game suffering of algorithmic entities.
Avoid: uncontrolled intelligence explosions. This should go without saying given our founding team, but: we expect to be much more careful than other companies to ensure that recursively self-improving intelligent agents don’t form within our game and break out onto the internet! Again, with today’s AI technology – especially as used in our video game as-planned — this possibility is extremely unlikely; however, as AI progresses, we’re going to exercise and promote industry-wide caution around the potential for intelligence explosions.
Pursue: more fun :) We want our developers’ sense of creativity and our users’ sense of fun to drive our product development for the most part; otherwise, we’ll miss out on a huge number of connections with people who can teach us valuable lessons about how human↔AI interactions should work.
So, that’s it. Make a fun game, make sure it remains a healthy and tolerant place for experiments with AI safety and alignment, and be safe and ethical ourselves in the ways we want all game companies to be safe and ethical. We hope you’ll like it!
If we’re very lucky and the global development of AI technology moves in a really safe and positive direction — e.g., if we end up with a well-functioning Comprehensive AI Services economy — maybe our game will even stick around as a long-lasting source of healthy entertainment. While it’s beyond our ability to unilaterally prevent every disaster that could derail such a positive future, it’s definitely our intention to help steer things in that direction.