Getting Nearer

Reply to: A Tale Of Two Tradeoffs

I’m not comfortable with compliments of the direct, personal sort, the “Oh, you’re such a nice person!” type stuff that nice people are able to say with a straight face. Even if it would make people like me more—even if it’s socially expected—I have trouble bringing myself to do it. So, when I say that I read Robin Hanson’s “Tale of Two Tradeoffs”, and then realized I would spend the rest of my mortal existence typing thought processes as “Near” or “Far”, I hope this statement is received as a due substitute for any gushing compliments that a normal person would give at this point.

Among other things, this clears up a major puzzle that’s been lingering in the back of my mind for a while now. Growing up as a rationalist, I was always telling myself to “Visualize!” or “Reason by simulation, not by analogy!” or “Use causal models, not similarity groups!” And those who ignored this principle seemed easy prey to blind enthusiasms, wherein one says that A is good because it is like B which is also good, and the like.

But later, I learned about the Outside View versus the Inside View, and that people asking “What rough class does this project fit into, and when did projects like this finish last time?” were much more accurate and much less optimistic than people who tried to visualize the when, where, and how of their projects. And this didn’t seem to fit very well with my injunction to “Visualize!”

So now I think I understand what this principle was actually doing—it was keeping me in Near-side mode and away from Far-side thinking. And it’s not that Near-side mode works so well in any absolute sense, but that Far-side mode is so much more pushed-on by ideology and wishful thinking, and so casual in accepting its conclusions (devoting less computing power before halting).

An example of this might be the balance between offensive and defensive nanotechnology, where I started out by—basically—just liking nanotechnology; until I got involved in a discussion about the particulars of nanowarfare, and noticed that people were postulating crazy things to make defense win. Which made me realize and say, “Look, the balance between offense and defense has been tilted toward offense ever since the invention of nuclear weapons, and military nanotech could use nuclear weapons, and I don’t see how you’re going to build a molecular barricade against that.”

Are the particulars of that discussion likely to be, well, correct? Maybe not. But so long as I wasn’t thinking of any particulars, my brain had free rein to just… import whatever affective valence the word “nanotechnology” had, and use that as a snap judgment of everything.

You can still be biased about particulars, of course. You can insist that nanotech couldn’t possibly be radiation-hardened enough to manipulate U-235, which someone actually tried as a response (for the record: this is extremely silly). But in my case, at least, something about thinking in particulars...

...just snapped me out of the trance, somehow.

When you’re thinking using very abstract categories—rough classes low on computing power—about things distant from you, then you’re also—if Robin’s hypothesis is correct—more subject to ideological bias. Together this implies you can cherry-pick those very loose categories to put X together with whatever “similar” Y is ideologically convenient, as in the old saw that “atheism is a religion” (and not playing tennis is a sport).

But the most frustrating part of all is the casualness of it—the way that ideologically convenient Far thinking is just thrown together out of whatever ingredients come to hand. The ten-second dismissal of cryonics, without any attempt to visualize how much information is preserved by vitrification and could be retrieved by a molecular-level scan. Cryonics just gets casually, perceptually classified as “not scientifically verified” and tossed out the window. Or “what if you wake up in Dystopia?” and tossed out the window. That casualness is the hardest thing about trying to argue with Far thinking.

This seems like an argument for writing fiction with lots of concrete details if you want people to take a subject seriously and think about it in a less biased way. That is not a conclusion I would have drawn from my previous view.

Maybe cryonics advocates really should focus on writing fiction stories that turn on the gory details of cryonics, or viscerally depict the regret of someone who didn’t persuade their mother to sign up. (Or offering prizes to professionals who do the same; writing fiction is hard, writing SF is harder.)

But I worry that, for whatever reason, reading concrete fiction may be a special case that fails to put people into Near-side thinking.

Or perhaps only some people are prompted into Near-side thinking by fiction, and only they can actually be helped by reading science fiction.

Maybe there are people who encounter big concrete detailed fictions and process them in a Near way—the sort of people who notice plot holes. And others who just “take it all in stride”, casually, so that however much concrete fictional “information” they encounter, they only process it using casual “Far” thinking. I wonder whether this difference owes more to upbringing or to genetics. Either way, it may lie at the core of the partial yet statistically outstanding correlation between careful futurists and science fiction fans.

I expect I shall be thinking about this for a while.