Four kinds of problems

I think there are four natural kinds of problems, and learning to identify them helped me see clearly what’s bad with philosophy, good with start-ups, and many things in-between.

Consider these examples:

  1. Make it so that bank transfers to Africa do not take weeks and require visiting physical offices, in order to make it easier for immigrants to send money back home to their poor families.

  2. Prove that the external world exists and you’re not being fooled by an evil demon, in order to use that epistemic foundation to derive a theory of how the world works.

  3. Develop a synthetic biology safety protocol, in order to ensure your lab does not accidentally leak a dangerous pathogen.

  4. Build a spaceship that travels faster than the speed of light, in order to harvest resources from outside our light cone.

These examples are all problems encountered as part of work on larger projects. We can classify them by asking how we should respond when they arise, as follows:

1. is a problem to be solved. In this particular example, it turns out global remittances are several times larger than the combined foreign aid budgets of the Western world. Building a service that avoids the huge fees charged by e.g. Western Union is a very promising way of helping the global poor.

2. is a problem to be gotten over. You probably won’t find a solution of the kind philosophers usually demand. But, evidently, you don’t have to in order to make meaningful epistemic progress, such as deriving General Relativity or inventing vaccines.

3. is a crucial consideration—a problem so important that it might force you to drop the entire project that spawned it, in order to just focus on solving this particular problem. Upon discovering that there is a non-trivial risk of tens of millions of people dying in a natural or engineered pandemic within our lifetimes, and then realising how woefully underprepared our health care systems are for this, publishing yet another paper suddenly appears less important.

4. is a defeating problem. Solving it is impossible. If a solution forms a crucial part of a project, then the problem is going to bring that project with it into the grave. Whatever we want to spend our time doing, if it requires resources from outside our light cone, we should give it up.

With this categorisation in mind, we can understand some good and bad ways of thinking about problems.

For example, I found that learning the difference between a defeating problem and a problem-to-be-solved was what was required to adopt a “hacker mindset”. Consider the remittances problem above. If someone had posed it as something to do after they graduate, they might have expected replies like:

“Sending money? Surely that’s what banks do! You can’t just… build a bank?”

“What if you get hacked? Software infrastructure for sending money has to be crazy reliable!”

“Well, if you’re going to build a startup to help the global poor, you’d have to move to Senegal.”

Now of course, none of these things violate the laws of physics. They might violate a few social norms. They might be scary. They might seem like the kind of problem an ordinary person would not be allowed to try to solve. However, if you really wanted to, you could do these things. And some less conformist people who did just that have now become billionaires or, well, moved to Senegal (c.f. PayPal, Stripe, Monzo and Wave).

As Hannibal said when his generals cautioned him that it was impossible to cross the Alps by elephant: “I shall either find a way or make one.”

This is what’s good about startup thinking. Philosophy, however, has a big problem which goes the other way: mistaking problems-to-be-solved for defeating problems.

For example, a frequentist philosopher might object to Bayesianism by saying something like: “Probabilities can’t represent the degrees of belief of agents, because in order to prove all the important theorems you have to assume the agents are logically omniscient. But that’s an unreasonable constraint. For one thing, it requires you to have an infinite number of beliefs!” (This objection is made here, for example.) And this might convince people to drop the Bayesian framework.

However, the problem here is that it has not been formally proven that the important theorems of Bayesianism ineliminably require logical omniscience in order to work. Rather, that is often assumed, because people find it hard to do things formally otherwise.

As it turns out, though, the problem is solvable. Philosophers did not find this out, however, since they get paid to argue and so love making objections. The proper response to that might just be “shut up and do the impossible”. (A funny and anecdotal example of this is the student who solved an unsolved problem in maths because he thought it was an exam question.)
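As an aside on the “infinite number of beliefs” worry: applying Bayes’ theorem itself demands nothing of the sort. Here is a minimal sketch (my own illustration, not drawn from the objection’s source) of an agent maintaining explicit credences over just two hypotheses:

```python
# A deliberately tiny example: Bayesian updating over a finite hypothesis
# space. The agent only ever represents two explicit degrees of belief --
# no infinite belief set, and no logical omniscience beyond basic arithmetic.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities given priors and likelihoods P(evidence | h)."""
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two hypotheses about a coin: it is fair, or it is biased towards heads.
priors = {"fair": 0.5, "biased": 0.5}
likelihoods = {"fair": 0.5, "biased": 0.9}  # P(heads | hypothesis)

# After observing a single head, belief shifts towards the biased hypothesis.
posterior = bayes_update(priors, likelihoods)
```

Whether the full theorems that motivate Bayesianism (Dutch book arguments, convergence results, and so on) can be recovered for such bounded agents is exactly the open formal question the text gestures at; the sketch only shows that the day-to-day machinery needs finitely many beliefs.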

Finally, we can be more systematic in classifying several of these misconceptions. I’d be happy to take more suggestions in the comments.