Seeing the Invisible (And How to Think About Machine Learning)

It’s been two months now since I quit working at the company that bought my startup. Two months into this phase of the void. I was looking forward to it; I also felt the fear. It’s a phase in between projects, in between lives. A transition.

When people asked me, “What will you do now?” I replied, “I just want to free-float for a year or two.” There’s a lot of catching up to do and a lot of infrastructure work after the intense years. So it’s busy in the transition, and yet there is enough space to begin feeling those large shapes in the dark. The ancient statues towering above one’s life, barely felt. Too large to be seen, too dark to be known. These are the contours of one’s mission, character, and substance. In my experience, it takes several months at least in the presence of these giants to reacquaint oneself, to find the bearings of the next journey.

Tim Ferriss writes about facing the void.

Rupert Sheldrake talks about the mathematical concept of attractors and how it applies to life. Imagine a large shape somewhere above one’s life that gently but persistently pulls in a certain direction. Its gravity won’t be obvious most of the time, but in retrospect, the direction will be clear.

What will my next project be? What will its content be and what form will it have? Another startup in the analytics space? The transformation of how humans work with information and complexity is hardly finished—why, it’s barely started. Or is it this newsletter? Is it the right vehicle? How about a podcast? A movie? A foundation? A research paper? A thesis-driven venture capital firm? These are all possible forms that an idea can take. But what is the central idea that’s nagging to be materialized?

I can’t say for sure yet, but there is one thread I keep returning to. It’s seeing the invisible.

I mean that in a broad sense. The invisible includes all of mathematics. One’s beliefs. Mental models and inner states. Probable realities, and the realm of probability. Human histories and futures. Data analytics. Machine learning models. Economic models. The physics of business. The webs of meaning of words and of lives. All of these are abstract, invisible concepts that you can actually see. Once you start seeing them, it’s possible to navigate them as if flying a spaceship and watching worlds unfold around you.

I saw the angel in the marble and carved until I set him free.

— Michelangelo

I believe the invisible worlds are as real and natural as the visible ones, with their own forces, weather, and histories, and that we inhabit these realms even more naturally than the physical one.

Does that sound wonky? Peter Thiel has a question he asks founders: “What important truth do very few people agree with you on?” Well, for me, it’s this one. A lot of my life philosophy follows from this core divergent belief. And because this belief is contrary to general science and to most people’s ideas, it’s one that I must keep carefully testing against reality. I cannot rely on the crowd here. If I choose to hold this strange, outlier belief, it’s my job to vigilantly assess whether it’s working or wonky.

So far, I’ve managed to survive with my belief in the invisible for at least 15 years. As I built and exited a successful startup, this belief helped me deeply understand data analytics and innovate in the space. Elsewhere, it has made me take relationships seriously because I approach their complexity and nuance as real and tangible substance, rather than just an imaginary void between actual objects. I see the energies and physics involved in interpersonal situations, and my real-life interactions are richer and better for it.

Why is this belief contrarian? That is, why do few people believe it? I think it’s because the invisible realms are, by definition, hard to see. To our minds, they are water. But once seen, they start unfolding everywhere.

Let me give one example. My favourite way to describe how to think about machine learning is as follows. Imagine sitting in a plane as it lands. It’s just a typical Boeing descending onto a Newark runway. The pilot hands the landing over to the autopilot. If you could look through the autopilot’s eyes, you would see not one, but millions of landings, all overlaid on the actual one. Past landings at Newark. Past landings of Boeings at other airports, at various times and in various weather conditions. These millions of landings would give context to the actual landing, informing the autopilot about the best way to approach the runway, what’s normal, what’s an anomaly, what’s a known problem with known solutions, and when to abort. This is what a machine learning algorithm “sees,” and how it learns with each new landing happening around the world. Do you see those millions of landings enveloping you as you sit in that plane?
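If you like seeing the idea in code, here is a minimal, purely illustrative sketch in Python. The feature names and numbers are invented, and a real autopilot is vastly more sophisticated; the point is only the shape of the idea: represent each past landing as a small vector, then ask which of those millions of landings most resemble the one happening right now, and how far the current approach sits from its nearest peers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features for one landing:
# [approach speed (kt), glide slope (deg), crosswind (kt), touchdown point (m past threshold)]
past_landings = rng.normal(
    loc=[140.0, 3.0, 8.0, 450.0],
    scale=[6.0, 0.2, 5.0, 80.0],
    size=(1_000_000, 4),  # stand-in for the millions of past landings
)

def context_for(current, history, k=50):
    """Return the k most similar past landings and a simple anomaly score."""
    # Standardize so each feature contributes comparably to the distance.
    mu, sigma = history.mean(axis=0), history.std(axis=0)
    dists = np.linalg.norm((history - mu) / sigma - (current - mu) / sigma, axis=1)
    nearest = np.argsort(dists)[:k]
    # Anomaly score: average distance to the closest past landings.
    # Small means "routine"; large means "unlike anything seen before".
    return history[nearest], dists[nearest].mean()

current_approach = np.array([152.0, 3.4, 18.0, 600.0])  # a fast, gusty approach
neighbours, anomaly = context_for(current_approach, past_landings)
print(f"anomaly score: {anomaly:.2f}")
```

A real system would learn far richer structure than nearest neighbours over four numbers, but the essence is the same: the current landing is judged against the enveloping cloud of past ones.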

Once you start seeing that invisible web, several things happen. You grasp the aviation context of your landing, which transforms your experience. It allows you to trust the autopilot and understand how safe you are. Each past landing is literally supporting you!

Further, as you begin to understand machine learning, you perhaps start seeing its uses elsewhere. That is a very valuable skill today.

You can do all of this mental work even without believing those other landings are somehow real. You can—it just brings drag. Denying their reality undermines their credibility and opens up more problems than it solves (e.g., where are they?).

So why does this all matter? Here’s my heretical belief: giving the invisible realm full citizenship in our worldview would revolutionize every single human domain. And to me, helping this one belief build credibility sounds like a worthy goal for my next phase. But I am only two months into the void. The shapes are vague and formless. Nearly invisible.
