Erroneous Visualizations

Buried somewhere in Eliezer’s writings is a phrase essentially the same as this one:

“Intentional causes are made of neurons. Evolutionary causes are made of ancestors.”

I remember this quite well because of my strange reaction to it. I understood what it meant, but upon seeing it, some demented part of my brain immediately constructed a mental image of what it thought an “evolutionary cause” looked like. The result was something like a mountain of fused-together bodies (the ancestors) with gears and levers and things (the causation) scattered throughout. “This,” said that part of my brain, “is what an evolutionary cause looks like, and like a good reductionist I know it is physically implicit in the structure of my brain.” Luckily it didn’t take me long to notice what I was doing and reject that model, though I am only now realizing that the model I replaced it with still had some physical substance called “causality” flowing from ancient humans into my brain.

This is actually a common error for me. I used to think of computer programs as these glorious steampunk assemblies of wheels and gears and things (apparently gears are a common visual metaphor in my brain for anything it labels as complex), floating just outside the universe with all the other platonic concepts and somehow exerting their patterns upon the computers that ran them. It took me forever to figure out that these strange thingies were physical systems in the computers themselves, and a bit longer to realize that they didn’t look anything like what I had imagined. (I still haven’t bothered to find out what they really are, despite having a non-negligible desire to know.) And even before that, long before I started reading Less Wrong or adopted empiricism (which may or may not have come earlier), I decided that because the human brain performs computation, and (it seemed to me) all computations were embodiments of some platonic ideal, souls must exist. That could have been semi-okay, if I had realized that calling it a “soul” shouldn’t allow me to assume it has properties I ascribe to “souls” but not to “platonic ideals of computation”.

Are errors like this common? I talked to a friend about it and she doesn’t make this mistake, but one person is hardly a good sample. If anyone else is like this, I’d like to know how often it causes really big misconceptions and whether you have a way to control it.