Total horse takeover


I hear a lot of talk of ‘taking over the world’. What is it to take over the world? Have I done it if I am king of the world? Have I done it if I burn the world? Have humans or the printing press or Google or the idea of ‘currency’ done it?

Let’s start with something more tractable, and be clear on what it is to take over a horse.

A natural theory is that to take over a horse is to be the arbiter of everything about the horse—to be the one deciding the horse’s every motion.

But you probably don’t actually want to control the horse’s every motion, because the horse’s own ability to move itself is a large part of its value-add. Flaccid horse mass isn’t that helpful, not even if we throw in the horse’s physical strength to move itself according to your commands, and some sort of magical ability for you to communicate muscle-level commands to it. If you were in command of the horse’s every muscle, it would fall over. (If you directed its cellular processes too, it would die; if you controlled its atoms, you wouldn’t even have a dead horse.)

Information and computing capacity

The reason this isn’t so good is that balancing and maneuvering a thousand pounds of fast-moving horse flesh on flexible supports is probably hard for you, at least via an interface of individual muscles, at least without more practice being a horse. I think this is for two reasons:

  • Lack of information, e.g. about exactly where every part of the horse’s body is, and where its hooves are touching the ground and how hard

  • Lack of computing power to dedicate to calculating desired horse muscle motions from the above information and your desired high level horse behavior

(Even if you have these things, you don’t obviously know how to use them to direct the horse well, but you can probably figure this out in finite time, so it doesn’t seem like a really fundamental problem.)

Tentative claim: holding more levers is good for you only insofar as you have the information and computing capacity to work out in which directions you should want those levers pushed.

So, you seem to be getting a lot out of the horse and various horse subcomponents making their own decisions about steering and balance and breathing and snorting and mitosis and where electrons should go. That is, you seem to be getting a lot out of not being in control of the horse. In fact so far it seems like the more you are in control of the horse in this sense, the worse things go for you.

Is there a better concept of ‘taking over’—a horse, or the world—such that someone relatively non-omniscient might actually benefit from it? (Maybe not—maybe extreme control is just bad if you aren’t near-omniscient, which would be good to know.)

What riding a horse is like

Perhaps a good first question: is there any sort of power that won’t make things worse for you? Surely yes: training a horse to be ridden in the usual sense seems like having more control over the horse than you otherwise would, and seems good for you. So what is this kind of control like?

Well, maybe you want the horse to go to London with you on it, so you get on it and pull the reins to direct it to London. You don’t run into the problems above, because aside from directing its walking toward London, it sticks to its normal patterns of activity pretty closely (for instance, it continues breathing and keeping its body in an upright position and doing walking motions in roughly the direction its head is pointed).

So maybe in general: you want to command the horse by giving it a high level goal (‘take me to London’), and then you want it to do the backchaining and fill in all the details (move right leg forward, hop over this log, breathe…). That’s not quite right though, because the horse has no ability to chart a path from here to London, due to its ignorance of maps and maybe of London as a concept. So you are hoping to do the first step of the backchaining—figure out the route—and then to give the horse slightly lower level goals such as ‘turn left here’ and ‘go straight’, and for it to do the rest. Which still sounds like giving it a high level goal, then having it fill in the instrumental subgoals and do them.

But that isn’t quite right. You probably also want to steer some of the details along the way. You are moment-to-moment adjusting the horse’s motion to keep you on it, for instance. Or to avoid scaring some chickens. Or to keep to the side as another horse goes by. While still not steering it entirely, even at that level. You are relying on its own ability to avoid rocks and holes, to dodge if something flies toward it, and to put some effort into keeping you on it. How does this fit into our simple model?

Perhaps you want the horse to behave as it would—rather than suddenly leaving every decision to you—but for you to be able to adjust any aspect of it, and have it again work out how to support that change with lower level choices. You push it to the left and it finds new places to put its feet to make that work, and adjusts its breathing and heart rate to make the foot motions work. You pull it to a halt, and it changes its leg muscle tautnesses and heart rate and breathing to make that work.
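To make this model a bit more concrete, here is a minimal toy sketch in Python (all of the names and the ‘physiology’ are invented for illustration, not a claim about real horses): the rider adjusts a high-level aspect through a limited interface, and the horse recomputes the lower-level details needed to support the adjustment, while everything not adjusted carries on as it would by default.

```python
# Toy sketch of the "adjust and support" model of control (names and
# "physiology" hypothetical): the rider changes a high-level aspect through
# a limited interface, and the system recomputes lower-level details to
# support that change, while everything else carries on as it would.

class ToyHorse:
    def __init__(self):
        # Rider-adjustable aspects.
        self.heading = "north"
        self.gait = "walk"
        # Lower-level details the horse manages for itself.
        self.stride = "normal"
        self.breathing = "steady"

    def adjust(self, aspect, value):
        """Rider-facing lever: only some aspects have an affordance."""
        if aspect not in ("heading", "gait"):
            raise ValueError(f"no affordance for adjusting {aspect!r}")
        setattr(self, aspect, value)
        self._support()

    def _support(self):
        """Horse fills in lower-level choices to make the adjustment work."""
        if self.gait == "gallop":
            self.stride, self.breathing = "long", "heavy"
        elif self.gait == "halt":
            self.stride, self.breathing = "none", "slowing"
        else:
            self.stride, self.breathing = "normal", "steady"


horse = ToyHorse()
horse.adjust("gait", "gallop")      # horse re-plans stride and breathing itself
# horse.adjust("heart_rate", 80)    # would raise: no affordance for that lever
```

The point of the sketch is just that the rider’s control runs through whatever adjustment interface the horse exposes, with the horse doing the supporting work underneath.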

Levers

On this model, in practice your power is limited by what kinds of changes the horse can and will fill in new details for. If you point its head in a new direction, or ask it to sit down, it can probably recalculate its finer motions and support that. Whereas if you decide that it should have holes in its legs, it just doesn’t have an affordance for doing that. And if you do it, it will bleed a lot and run into trouble rather than changing its own blood flow. If you decide it should move via a giant horse-sized bicycle, it probably can’t support that, even if in principle its physiology might allow it. If you hold up one of its legs so its foot is high in the air, it will ‘support’ that change by moving its leg back down again, which is perhaps not what you were going for.

This suggests that taking over a thing is not zero sum: there is not a fixed amount of control to be had by intentional agents. Perhaps you have all the control that anyone has over a horse, in the sense that if the horse ever has a choice, it will try to support your commands. But it still just doesn’t know how to control its own heart rate consciously, or to ride a giant horse-sized bicycle. Then one day it learns these skills, and can let you adjust more of its actions. You had all the control the whole time, but ‘all’ became more.
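Continuing the toy sketch above (still purely illustrative), the non-zero-sum point can be put as: the set of adjustable aspects can grow over time, so someone who already holds every existing lever can come to hold more, without anyone else losing any.

```python
# Continuing the toy sketch: control isn't a fixed pie, because the set of
# levers can grow. The rider holds every lever throughout, yet holds more
# after the horse acquires a new affordance.

affordances = {"heading", "gait"}      # aspects the rider can adjust today

levers_before = len(affordances)
affordances.add("heart_rate")          # the horse learns a new skill
levers_after = len(affordances)

assert levers_after > levers_before    # same sole holder, but more to hold
```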

Consequences

One issue with this concept of taking over is that it isn’t clear what it means to ‘support’ a change. Each change has a number of consequences, and some of them are the point while others are undesirable side effects, such that averting them is an integral part of supporting the change. For instance, moving legs faster means using up blood oxygen and also traveling faster. If you gee up the horse, you want it to support this by replacing the missing blood oxygen, but not to jump on a treadmill to offset the faster travel.

For the horse to get this right in general, it seems that it needs to know about your higher level goals. In practice with horses, they are just built so that if they decide to run faster, their respiratory system supplies more oxygen and they aren’t struck by a compulsion to get on a treadmill; if that weren’t true, we would look for a different animal to ride. The fact that they always assume one kind of thing is the goal of our intervention is fine, because in practice we basically always want legs for motion and never for using up oxygen.

Maybe there is a systematic difference between desirable consequences and ones that should be offset. In the examples that come to mind, the desirable consequences seem more often to do with the system’s relationships to larger scale things, and the ones that need offsetting to do with internal things, but that isn’t always true (I might travel because I want to be healthier, but I want to stay in the same relationship with those who send me mail). If the system seems to turn inputs into outputs, then the outputs are often the point, though that is also not always true (e.g. a garbage burner seeks to get rid of garbage, not to create smoke). Both of these also seem maybe contingent on our world, whereas I’m interested in a general concept.

Total takeover

I’ll set that aside, and for now define a desirable model of controlling a system as something like: the system behaves as it would, but you can adjust aspects of it and have it support your adjustment, such that the adjustment furthers your goals.

There isn’t a clear notion of ‘all the control’, since at any point there will be things that you can’t adjust (e.g. currently the shape of the horse’s mitochondria, and for a long time yet the relationship between space and time in the horse system), either because you or the system lack a means of making the adjustment intentionally, or because the system can’t support the adjustment usefully. However ‘all of the control that anyone has’ seems more straightforward, at least if we define who counts as ‘anyone’. (If you can’t control a virus’s spread, is the virus a someone who has some of the universe’s control?)

I think whether having all of the control at a particular time gets at what I usually mean by having ‘taken over’ depends on what we expect to happen with new avenues of control that appear. If they automatically go to whoever already had control, then having all of the control at one time seems like having taken over. If they get distributed more randomly (e.g. the horse learns to ride a bicycle but keeps that power for itself, or a new agent is created with powers of its own), so that your fraction of control deteriorates over time, that seems less like having taken over. If that is how our world is, I think I want to say that one cannot take it over.

***

This was a lot of abstract reasoning. I especially welcome correction from someone who feels they have successfully controlled a horse to a non-negligible degree.