The Power of Letting Go Part I: Examples
There’s an approach/aesthetic of design/engineering/problem-solving which I want to talk about. I’m calling it “just let go” design; the general theme is that the designer/engineer/problem-solver gives up control over the details of the solution.
This post is mostly a list of examples to show the breadth of the concept, ranging from Minecraft to tvtropes to price controls to mutation-assisted breeding to jurisprudence to SQL, with a baby lamb in a bag somewhere in the middle. The next post will zoom out and discuss requirements and advantages of the approach.
Standard advice for a role-playing game master (GM): don’t railroad the players. There’s some wiggle room in how much the GM keeps players on an intended plot line, but the main rule is that a GM should never blatantly squash players’ creativity just because it wrecks the intended plot. Intelligent characters want to throw your plot out the window; squashing that creativity wrecks the fun.
Taken to an extreme, the opposite of railroading players is to create a sandbox game: think Minecraft or a Falling Sands game. Such games give players near-total freedom to build whatever they want; the game just supplies a canvas.
From a design perspective, one interesting property of sandbox games is that they take relatively little effort to create, yet appeal to a wide audience. Wikipedia lists Minecraft as the second best-selling video game ever; third place goes to Grand Theft Auto V. Exact line counts for the two games aren’t public, but GTA V likely has well over 10x as much code under any apples-to-apples comparison.
Back in my day, before Minecraft, we played with physical blocks called “legos”.
Lego made a foray into less sandboxy products back in the early ’00s: they tried contracts with popular media franchises (e.g. Harry Potter) and created their own media (e.g. Bionicle). Sales fell. Eventually, they reversed course entirely and created extra-sandboxy products. Those sold like hotcakes.
The rise of mobile has made app/website “flows” the main design structure: users go through a linear series of screens, one after the other. It’s railroading, but for apps and websites.
What would a more sandbox-ish alternative look like? One example is TvTropes. Art school folks may pooh-pooh the design, but the site is popular and notoriously addictive. People interact with TvTropes mainly by clicking links within each article, and then links within those articles, and so on, until their browser blows up in a giant tabsplosion. People naturally end up reading about tropes which resonate with their own identity; the experience feels like exploration.
Rather than pushing users along a railroad of pages, TvTropes simply offers a world and some links to help navigate it.
Engineering the Equilibrium
Changing gears, economics teaches us to pay attention to the effect of a change on the equilibrium, not just the immediate impact. Price controls are a classic example: you can cap prices in the short run, but in equilibrium that will mean supply shortages; nobody will produce a product which can’t be sold at a profitable price. Adding subsidies or removing production controls, on the other hand, will generally increase production and push down (user-facing) prices.
If we want to change something about the economy, then we can’t just declare it so without consequence; we need to understand root causes. We need to understand what features of the environment—technology and institutions—give rise to the current equilibrium, and then adjust the technology/institutions accordingly.
In game theory, the problem of designing “institutions” becomes the problem of designing the rules of the game; this is called “mechanism design”.
A simple example: suppose we have a bunch of agents, each with their own private information, and we want to incentivize them all to honestly reveal their information. How can we choose the rules of the game to make that happen? This can be tricky if, for instance, there are two agents trading a car, and we want to incentivize the seller to honestly reveal any mechanical problems. (Simply making the seller legally responsible for mechanical issues creates new problems—the buyer is no longer incentivized to take good care of the car.)
The point: the designer cannot simply tell agents to be honest; there’s no way to enforce that. We can’t directly control peoples’ choices like that. We need to design the system to incentivize the behavior instead. Much of our legal system can be understood in this light.
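To make the flavor of mechanism design concrete, here’s a sketch of the classic textbook mechanism for honest revelation: a second-price (Vickrey) auction, in which the highest bidder wins but pays the second-highest bid, so bidding one’s true value is a dominant strategy. (The auction setting and all the numbers below are my illustrative additions, not part of the car example above.)

```python
# Toy second-price (Vickrey) auction: the rules of the game are chosen
# so that honest bidding is each agent's best move. All values here are
# made up for illustration.

def second_price_auction(bids):
    """Highest bidder wins, but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

def payoff(true_value, my_bid, others):
    """Our agent's payoff: value minus price if they win, else zero."""
    bids = dict(others, me=my_bid)
    winner, price = second_price_auction(bids)
    return true_value - price if winner == "me" else 0.0

others = {"a": 30.0, "b": 45.0}
true_value = 50.0

# Bidding truthfully does at least as well as any over- or under-bid:
honest = payoff(true_value, true_value, others)
assert all(payoff(true_value, b, others) <= honest
           for b in [10.0, 40.0, 44.0, 46.0, 60.0, 100.0])
```

Note that nobody told the bidders to be honest; the payoff structure of the game simply makes dishonesty unprofitable.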
Technology and Equilibria
We rarely hear people talk about intentionally using technology to alter social/economic equilibria (except maybe in the clean energy context). Yet, technology has been the biggest driver of equilibrium shift over the last two centuries. For feminism, for instance, probably the most long-term-impactful development in recent years was not a march or a brief string of harassment scandals, but the baby lamb bag: an artificial womb. That’s a technology shift capable of permanently removing the mommy problem on a large scale. Of particular interest, it’s a technology which should enjoy spontaneous adoption: people will be economically incentivized to use it, without any need for an additional push. Just introduce the tech and let go. We don’t need to control peoples’ choices, we just need to change the technology underlying their incentives.
As a tool for change, technology makes far more sense than large-scale social movements. A small team of specialists focused on an under-researched problem is all you need; the resource requirements are comparatively tiny. Alas, it doesn’t make for very exciting journalism or signalling, so tech approaches tend to be ignored.
At the micro-level, another type of equilibrium design is incentive design. If you’ve ever worked in a sales-based business, you’ve probably noticed that salespeople tend to be very good at pursuing their incentives—and if their incentives don’t quite align with the company’s long-term interests, too bad.
It falls to managers to design incentive systems which reward the behavior they really want. This is hard, especially in large companies with deep pipelines; even just figuring out what you want your employees to do can be difficult. And you can’t micromanage all employees all the time, so you need some sort of incentive feedback system to keep people pointing in a useful direction.
Parents are even more familiar with this problem: children respond to incentives, but children will also try to game the incentives. Like managers, parents can’t always control their kids’ behavior directly. Finding the right incentives is tricky.
The same problem underlies much of the difficulty of friendly AI: even simple-looking incentives/utility functions can lead to problems if the AI optimizes really hard.
Incentive design’s next-door neighbor is selection: try lots of stuff, keep what works. This is the “design” mechanism behind evolution and the efficiency of capital markets.
As an engineering approach, selection leads to things like mutation-assisted breeding or genetic algorithms. The designer/engineer doesn’t specify a design, they just figure out what they want, find some way to measure it, and then let noise and selection find a solution. The result: surprising yet efficient radio antennas.
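Here’s a minimal sketch of that approach in Python. The “designer” supplies only a fitness measure (here: count of 1-bits in a bitstring, a standard toy task I’ve picked for illustration); mutation and selection do the actual designing.

```python
import random

random.seed(0)

# Toy genetic algorithm: we never specify a solution, only a way to
# measure one. The task and all parameters are illustrative choices.
BITS = 32

def fitness(bits):
    return sum(bits)  # "what we want": as many 1s as possible

def mutate(bits, rate=0.05):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < rate) for b in bits]

population = [[random.randint(0, 1) for _ in range(BITS)]
              for _ in range(50)]

for generation in range(200):
    # Selection: keep the fittest half, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]

best = max(population, key=fitness)
print(fitness(best))  # with elitism, this climbs toward the maximum, 32
```

The loop itself is dumb; all the “intelligence” lives in the fitness measure plus blind variation and culling.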
The beauty of selection is that no single step needs to be smart. The stock market, for instance, can be efficient even if every investor is individually an idiot. Investors with strategies which happen to be efficient at any given time make more money and have more impact on prices—they are positively selected. If the broader economy shifts, other investors’ strategies are positively selected. Even if each investor pulls their strategy at random from a hat, the system-level selection mechanism can still yield efficient markets.
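A toy simulation of that claim: every investor draws a fixed strategy “edge” from a hat and never updates it, yet wealth, and hence price impact, concentrates on whichever strategies happen to work. (All numbers below are made up for illustration.)

```python
import random

random.seed(1)

# 100 investors, each with a random fixed per-day edge; none ever learns.
investors = [{"edge": random.uniform(-0.02, 0.02), "wealth": 1.0}
             for _ in range(100)]

for day in range(2000):
    for inv in investors:
        # Daily return: the strategy's fixed edge plus noise.
        inv["wealth"] *= 1 + inv["edge"] + random.gauss(0, 0.01)
        inv["wealth"] = max(inv["wealth"], 0.0)

total = sum(inv["wealth"] for inv in investors)
# Price impact is proportional to wealth, so the "market view" is the
# wealth-weighted average edge, which selection drags upward.
weighted_edge = sum(inv["edge"] * inv["wealth"] for inv in investors) / total
plain_average = sum(inv["edge"] for inv in investors) / len(investors)
assert weighted_edge > plain_average
```

No investor in this model is smart, but compounding acts as a selection mechanism: the wealth-weighted edge ends up far above the unweighted average.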
Common Law Jurisprudence
Under civil law, most law is created by the statutes of politicians and bureaucrats. The law is whatever the statutes say.
But under common law (practiced in most former British territories), most law is set by the precedent of courts. Judges are generally expected to rule in line with precedent.
One commonly-cited advantage of common over civil law is predictability: without the legally-binding status of precedent, interpretation of statutes can be dicey. That makes the system unreliable and opens the door to abuse by savvy insiders. Common law allows less direct control by politicians, bureaucrats and even most judges, but is considered more stable and predictable in exchange.
Next, let’s switch from law and economics to programming. The programming analogue of a sandbox game is a platform: a system designed for other people to build things on.
Windows/iOS/Android, programming languages, web frameworks, game engines and cloud servers are all simple examples: each is a platform on which programmers write apps. From a platform design perspective, the key problem in each case is to build a toolset which allows programmers to build more interesting things more easily with the platform than without, in order to draw them in. Once that’s done, you copyright key parts of the platform, lock in the programmers, and you’ve got a business model.
On the other hand, if you fail to make the platform substantially easier or more powerful than working without it, then you become Windows Phone.
A key piece here is not to lock programmers into too many assumptions about what their app should do. Don’t railroad the programmers! A platform which can only be used to make one specific app is no platform at all.
Declarative vs Imperative Programming
Finally, a perfect computer science example of the just-let-go approach: declarative programming.
The usual sort of programming, imperative programming, requires the programmer to write detailed step-by-step directions for the computer. In declarative programming, on the other hand, the programmer specifies what they want, and how to get it is left to the system. SQL is a classic example: we don’t specify the exact loops, data structures, storage formats, etc. needed to evaluate a SQL query; we just specify what it needs to produce.
The result, in most cases, is much simpler, clearer, more compact code.
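To make the contrast concrete, here’s the same aggregation done both ways using Python’s built-in sqlite3 module (the table and numbers are toy data I’ve invented for illustration). The imperative version spells out the loop and the data structure; the declarative version just states the result wanted and leaves the how to the query planner.

```python
import sqlite3

# Toy in-memory table: total sales per region, computed two ways.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10.0), ("west", 5.0), ("east", 7.5)])

# Imperative: we choose the loop and the accumulator ourselves.
totals = {}
for region, amount in conn.execute("SELECT region, amount FROM sales"):
    totals[region] = totals.get(region, 0.0) + amount

# Declarative: we say what to produce; the engine picks the loops,
# indexes, and access patterns.
query = "SELECT region, SUM(amount) FROM sales GROUP BY region"
assert dict(conn.execute(query)) == totals
```

The query is one line, and it stays one line even if the table grows, gets indexed, or moves to disk; the imperative version is the one that would need rewriting.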
The theme throughout is the power of letting go: give up detailed control of a process in order to obtain a better design.
In some cases, we give control to the users, to play the game the way they want (sandboxes) or to build the products they want (platforms). We just provide the tools, the environment in which to work and play.
In other cases, we have no real control over the users from the start, so we’re forced to design for the economic equilibrium. We design rules (mechanism design) and incentives to get people to do what we want.
Perhaps our “user” is not a person at all, but a computer (declarative programming). We hand off control of the details to a compiler/interpreter, and just say what we want it to do.
These all share a certain aesthetic similarity, but can we abstract out any useful insights about just-let-go design in general? Why is it useful? The next few posts will address those questions.