Gödel’s Legacy: A game without end

Cross-posted at Brick. A fun ex­plo­ra­tion, hope­fully re­quiring only a medium level of tech­ni­cal fa­mil­iar­ity.

“Not with a bang but a whim­per”

It’s 1930 and Kurt Gödel has just de­liv­ered a talk at Königs­berg that car­pet bombed Hilbert’s Pro­gram.

...and no one no­ticed.

Godel’s an­nounce­ment, de­liv­ered dur­ing the sum­ma­riz­ing ses­sion on the third and last day of the con­fer­ence, was so un­der­stated and ca­sual—so thor­oughly un­dra­matic—that it hardly qual­ified as an an­nounce­ment, and no one pre­sent, with one ex­cep­tion, paid it any mind at all.
(source)

Well, al­most no one. John von Neu­mann came up to Gödel af­ter the talk, pre­sum­ably to dou­ble check if he was go­ing crazy or if Gödel had in fact just proved the ex­is­tence of un­prov­able math­e­mat­i­cal truths.

To un­der­stand what Gödel’s In­com­plete­ness The­o­rem says, it helps to see the meta-math­e­mat­i­cal opinions of some of his peers. Wittgen­stein opens his big fa­mous book (not that I’ve ac­tu­ally read it) with:

One could put the whole sense of the book per­haps in these words: What can be said at all, can be said clearly; and whereof one can­not speak, thereof one must be silent.

This was a bold move in its own way. Con­trast it with the fol­low­ing take, “Every­thing can be spo­ken of, you just need to get smarter” which is a deeply held ideal of Straw­man STEM-Lord. In­ter­est­ingly, both Wittgen­stein and STEM-Lord agree on some­thing, the ex­act some­thing that Gödel smashed up in Königs­berg. It’s a very tech­ni­cal claim, best com­mu­ni­cated with the fol­low­ing GIF:

Witt and STEM-Lord dis­agree on what bleeds, but they are in fer­vent agree­ment that what bleeds can be kil­led.

Un­rav­el­ing Self-reference

Take the classic liar’s paradox:

“This sentence is false.”
Oh shit, it’s true and false, or nei­ther, paaaaaaaaaaaaaaara­dox! The Wittgen­stein-es­que ap­proach might be to claim the liar’s sen­tence doesn’t ac­tu­ally mean any­thing. There is no re­al­ity to it to speak of, so you can’t speak of it. You are just get­ting your­self tied into a lin­guis­tic knot, trick­ing your­self into think­ing that just be­cause you can write this sen­tence it must mean some­thing. Nat­u­ral lan­guage is not a for­mal sys­tem and this sen­tence need not have a co­her­ent “log­i­cal” “truth value”. BOOM, check­mate hy­po­thet­i­cal op­po­nent!

Hid­den in the ex­pla­na­tion “Lan­guage isn’t meant to avoid para­dox and self-refer­ence” is the claim “but for­mal math does avoid them!” The first part is true, the sec­ond isn’t.

What Gödel did was to make a log­i­cal sen­tence in the syn­tax of Peano Arith­metic which had the mean­ing, “This state­ment is un­prov­able in Peano Arith­metic”. The two tricky com­po­nents of this were find­ing a way to math­e­mat­i­cally en­code self-refer­ence (“This state­ment”) and to talk about the sys­tem from within the sys­tem (“is un­prov­able in Peano Arith­metic”). Both are quite a challenge. If you mess around with the liar’s para­dox, you’ll quickly get into in­finite re­cur­sion when try­ing to un­ravel the self-refer­ence.

(source)
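To get a feel for how arithmetic can even begin to talk about statements, here’s a toy Gödel numbering in Python. This is a simplified illustration of the idea, not Gödel’s exact scheme: each symbol gets a small code, and a formula becomes the product of successive primes raised to those codes, so any formula can be packed into, and recovered from, a single number.

```python
# Toy Gödel numbering: a formula over a small alphabet becomes one
# natural number, and that number can be decoded back into the formula.
# (A simplified illustration of the idea, not Gödel's exact scheme.)

def primes(n):
    """First n primes, by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

SYMBOLS = {sym: i + 1 for i, sym in enumerate("0Sx=+*()")}

def godel_number(formula):
    """Encode: raise the i-th prime to the code of the i-th symbol."""
    n = 1
    for p, sym in zip(primes(len(formula)), formula):
        n *= p ** SYMBOLS[sym]
    return n

def decode(n):
    """Decode: read the symbols back off the prime exponents."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes(32):  # enough primes for short formulas
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n, e = n // p, e + 1
        out.append(inv[e])
    return "".join(out)
```

Because formulas are now just numbers, statements *about* formulas (like “is provable”) can in principle be rephrased as statements about numbers, which is the door self-reference walks through.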

People have known forever that self-reference can often lead to paradox. This is part of why it’s easy to have the intuition that full natural language expressions need not have truth values. So if you make your mathematical systems more precise, and more limited than full natural language, you can avoid self-reference, right?

Part of the sneak­i­ness of Gödel’s the­o­rem is that it shows how self-refer­ence sneaks in whether you want it or not. Just like a com­puter sys­tem is not safe merely be­cause you didn’t ex­plic­itly cre­ate a “HaCk ThIs SoFtWaRe” but­ton, a for­mal math­e­mat­i­cal sys­tem is not free of self-refer­ence merely be­cause it’s not ex­plic­itly part of the de­sign. Self-refer­ence isn’t just a part of Peano Arith­metic, it’s an in­evitable fea­ture of any sys­tem that has a ba­sic amount of com­plex­ity.
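A concrete taste of inevitable self-reference: nothing in Python is designed for programs to talk about themselves, yet a quine manages it anyway. The trick, a template applied to a quoted copy of itself, is the same diagonal move at the heart of Gödel’s sentence.

```python
# A quine: a program whose output is its own source code, built with no
# file reading and no special "self" facility. The template is applied
# to a quoted copy of itself, which is the diagonal trick in miniature.
template = 'template = %r\nprint(template %% template)'
print(template % template)
```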

If you’re hav­ing trou­ble wrap­ping your head around how some­thing as sim­ple as ar­ith­metic could em­bed self-refer­ence and the con­cept of prov­abil­ity, it’s helpful to look at pro­gram­ming lan­guages. Brain­fuck can ex­press any­thing that python can ex­press, be­cause both of them reach a “suffi­cient level of com­plex­ity” called be­ing Tur­ing Com­plete. Gw­ern main­tains a fun list of sur­pris­ingly Tur­ing Com­plete sys­tems (Poke­mon Yel­low and CSS are my fa­vorite).

Speak­ing of pro­gram­ming, this segues nicely into the next topic :)

Tur­ing and the Undecidable

Before there were computers, there were algorithms. Unambiguous, mechanistic, finite procedures which, when carried out, consistently lead to certain results. Algorithms go way back, but our understanding of how to clearly demarcate them is recent. The phrase “effective procedure” has been used to tag the intuitive and fuzzy grouping that sets apart the accessible from the inaccessible. There’s clearly a difference, so what’s the difference?

A storm of research in the first half of the 1900s built a conclusive answer. They created the abstract mathematical backing of not only computers, but of the idea of algorithms in general. Turing made Turing Machines, Church made lambda calculus, and Kleene made recursion theory. Though very different in taste and texture, all of these systems turned out to capture the exact same thing; you could use any one to fully represent and simulate the others. These systems form the bar that I mentioned earlier; Turing Completeness is being able to do everything a Turing Machine can do.
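For flavor, here’s a bare-bones Turing machine simulator, with an assumed toy rule table that increments a binary number. The point is how little machinery the formalism needs:

```python
# A bare-bones Turing machine: a tape, a head, a state, and a lookup
# table of (state, symbol) -> (write, move, next_state). The rule table
# below is a toy example that increments a binary number on the tape.
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

INCREMENT = {
    ("start", "0"): ("0", "R", "start"),  # scan right to the end
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # carry ripples left
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}
```

Running `run_tm(INCREMENT, "1011")` returns `"1100"`.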

Just like how self-reference and incompleteness sneak in when a formal system gets complex enough, a whole class of “uncomputable/undecidable” problems emerges when a computational system reaches Turing Completeness. These are well formed and clearly defined computational problems which no algorithm can always answer correctly.

The most famous one is the Halting Problem:

Given the source code to a pro­gram and its in­put, de­ter­mine if the pro­gram will ever ter­mi­nate, or if it will get caught in a loop for­ever.

Turns out, there’s no al­gorithm which an­swers this ques­tion cor­rectly for all pos­si­ble source code-in­put pairs. Each bold word is key to un­der­stand­ing what it means for a prob­lem to be un­de­cid­able.

Some­times peo­ple get con­fused by the halt­ing prob­lem. How can it be un­de­cid­able if I can clearly de­ter­mine that:

def code(x):
	return x

halts for all in­puts. Have I defeated com­put­ers? The an­swer is in the differ­ence be­tween a spe­cific in­stance of a prob­lem, and the prob­lem class as a whole. “Solve the prob­lem with this spe­cific in­put” is in­cred­ibly differ­ent from “Solve the prob­lem in full gen­er­al­ity for all pos­si­ble in­puts.”
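The proof that no fully general halting-checker can exist is the same diagonal trick Gödel used, and it fits in a few lines. Sketch: hand any claimed halting oracle a program built to do the opposite of whatever the oracle predicts about it.

```python
# Diagonalization against a claimed halting oracle. Whatever
# halts(prog) answers about the program below, it is wrong.
def make_trouble(halts):
    def trouble():
        if halts(trouble):   # "will I halt?"
            while True:      # then spite the oracle: loop forever
                pass
        # otherwise: halt immediately, spiting the oracle again
    return trouble

# An oracle that answers False is refuted on the spot:
t = make_trouble(lambda prog: False)  # claims trouble never halts...
t()                                   # ...but this call returns at once
```

An oracle that answers True fares no better: the program it judged would then loop forever. Since every possible oracle is wrong about its own trouble-maker, no correct one exists.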

Com­pare this with Gödel’s the­o­rems. It’s not that all state­ments are un­prov­able. It’s that there ex­ists a state­ment that’s mean­ingful and can’t be proven. This is not a sur­face level similar­ity. Se­cretly, the halt­ing prob­lem and Gödel’s in­com­plete­ness the­o­rems are the same thing. It’s out­side the already wildly ex­pand­ing scope of this post, but the Curry-Howard Iso­mor­phism is a crazy bridge that con­nects math­e­mat­i­cal proofs to the ex­e­cu­tion of pro­grams.

Suffice it to say, incompleteness and undecidability are two sides of the same coin, and I will casually flip flop between them as much as I please. They are the formal refutation of “If it bleeds, we can kill it”. They are pieces of mathematics that have been thoroughly and rigorously formalized, and yet provably defy being resolved.

But Wait, there’s more!

Turing and Gödel show us that there are monsters out there that bleed and can’t be killed, a shocking fact to certain philosophical frames. But mayhaps there’s only a few such creatures. Unfortunately, most things that bleed can’t be killed.

Rice’s the­o­rem proves that al­most all prop­er­ties that you might want to prove about pro­gram be­hav­ior are un­de­cid­able. Not only are there un­de­cid­able prob­lems, but there’s an in­finite hi­er­ar­chy of in­creas­ingly difficult com­pu­ta­tional prob­lems which can’t even be solved if you had a mag­i­cal or­a­cle to solve prob­lems one rung down.

These sorts of results are not confined to decidability. With regards to Kolmogorov complexity, most strings are incompressible. Most boolean circuits have exponential complexity. The No-Free-Lunch theorem tells us there is no universal learning algorithm that outperforms all others on all learning tasks. Weierstrass summoned monsters (continuous yet nowhere-differentiable functions) that still roam.
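The “most strings are incompressible” claim is just counting, and small enough to check by hand:

```python
# Counting argument: there are 2**n binary strings of length n, but
# only 2**n - 1 binary descriptions (programs) of length < n, so not
# every string can be compressed even by one bit. More strongly, the
# fraction compressible by at least c bits is below 2**(1 - c).
def descriptions_shorter_than(n):
    return sum(2**k for k in range(n))  # lengths 0 .. n-1

n, c = 20, 10
assert descriptions_shorter_than(n) == 2**n - 1
fraction_bound = descriptions_shorter_than(n - c + 1) / 2**n
assert fraction_bound < 2**(1 - c)  # under 0.2% can save 10 bits
```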

Re­sult af­ter re­sult in this vein can leave one feel­ing… small.

Gödel’s Impact

It took some time be­fore Gödel’s proof re­ally took hold in the math­e­mat­i­cal world. Gödel him­self was the sort of guy that the words ret­i­cent and reclu­sive were cre­ated to de­scribe, and didn’t push for recog­ni­tion. Luck­ily, John von Neu­mann took it upon him­self to spread the good word.

Within a few decades, after fully internalizing that their quest of submitting all of math to being provable was doomed, most mathematicians had moved on to other fields. Math departments began to thin. In 1963 UC Berkeley was the last university in the world to offer a mathematics major, which itself was discontinued in 19—

Ha, just kid­ding, math go brrrrrrrrrr

De­spite de­stroy­ing the con­tem­po­rary un­der­stand­ing of what math is, we still sent peo­ple to space, cre­ated the in­ter­net, and made things like GPT-3. Why does the fact that most prob­lems in math­e­mat­ics are un­solv­able not seem to mat­ter in the slight­est? How is that not a huge road­block to the de­vel­op­ment of hu­man knowl­edge?

Hilbert’s Pro­gram was part of a large scale drive to cre­ate a “se­cure and unas­sailable” foun­da­tion for math­e­mat­ics. This in­tent was in the wa­ter-sup­ply ever since Eu­clid’s The Ele­ments, the poster-boy for math­e­mat­i­cal rigor, had been tar­nished with the dis­cov­ery of non-eu­clidean ge­om­e­try. Math­e­mat­i­ci­ans felt burned by this be­trayal, and so be­gan the purg­ing of in­tu­ition from math in fa­vor of for­mal rigor.

But… why the in­tense fuss if it turns out “a se­cure and unas­sailable foun­da­tion” is not needed to ac­tu­ally do math? It feels like Gödel pul­led a magic trick, yank­ing the foun­da­tions of math out from un­der it­self, yet failing to dis­turb any­thing. It makes one a bit sus­pi­cious about what a foun­da­tion is sup­posed to be do­ing if it’s not sup­port­ing the house.

How to “solve” the halt­ing problem

In tHe ReAl WoRlD few people care about decidability. People “solve” the halting problem all day every day: wait a few seconds, and if the program hasn’t returned, conclude it never will. Even your grandma “solves” the halting problem when she eventually decides to reboot the computer after staring at a frozen Internet Explorer for a few minutes.
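Grandma’s algorithm, as code. A heuristic sketch, of course: a slow but halting program gets misjudged, which is exactly the trade being made.

```python
# "Solving" the halting problem by timeout: run the program, and if it
# hasn't finished after a few seconds, declare that it never will.
import subprocess
import sys

def probably_halts(source, timeout=2.0):
    try:
        subprocess.run([sys.executable, "-c", source],
                       capture_output=True, timeout=timeout)
        return True
    except subprocess.TimeoutExpired:
        return False
```

So `probably_halts("x = 1 + 1")` returns `True`, while `probably_halts("while True: pass")` returns `False` (well, probably).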

The bound­aries that en­g­ineers do care about are com­plex­ity bounds. “This prob­lem is not un­de­cid­able!” is still use­less for ap­pli­ca­tion if it turns out the most effi­cient al­gorithm won’t finish be­fore the heat death of the uni­verse.

So maybe that’s it. Fear not the specter of in­com­plete­ness, in­stead cower at the sight of the com­plex. Wail and gnash your teeth upon find­ing out the prob­lem you want to solve is NP-Hard.

Once upon a time, I was sit­ting in office hours ask­ing a pro­fes­sor about how code ver­ifi­ca­tion worked:

Me: How this work?

Prof: We’ve worked out how to rep­re­sent the code as an SMT and just pass it all to an SMT solver

Me: How do SMT solvers work?

Prof: Often they reduce it to a Boolean satisfiability problem and then just use a garden variety SAT solver.

Me: …...wut? SAT is sup­posed to be NP-hard?!

Prof: laughs oh yeah, it is. But in prac­tice we can han­dle all sorts of NP-hard prob­lems to­tally fine.

Me: mind melts out of ears

This com­pletely blew my mind. Sud­denly I lived in a new mag­i­cal world.
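What’s inside those garden variety SAT solvers? At their core, an idea simple enough to sketch: DPLL, which is unit propagation plus backtracking search. The worst case is still exponential, but on many real instances the propagation does almost all the work. This is a toy sketch; industrial solvers layer decades of heuristics like clause learning on top.

```python
# A toy DPLL SAT solver. Clauses are lists of nonzero ints; -x means
# "not x". Returns a satisfying set of literals, or None if unsat.
def dpll(clauses, assignment=None):
    assignment = set() if assignment is None else assignment
    # Unit propagation: a clause with one non-falsified literal left
    # forces that literal to be true.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None                      # clause falsified
            if any(l in assignment for l in clause):
                continue                         # clause satisfied
            if len(unassigned) == 1:
                assignment.add(unassigned[0])
                changed = True
    # Branch: pick any unassigned variable and try both truth values.
    for clause in clauses:
        for l in clause:
            if l not in assignment and -l not in assignment:
                for choice in (l, -l):
                    result = dpll(clauses, assignment | {choice})
                    if result is not None:
                        return result
                return None
    return assignment                            # everything satisfied
```

For instance, `dpll([[1, -2], [2, 3], [-1, -3]])` returns a satisfying set such as `{1, 2, -3}`, while `dpll([[1], [-1]])` returns `None`.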

How ex­actly does this work? Stack­ex­change has a nice an­swer. I want to home in on re­strict­ing the prob­lem. Prob­lem re­lax­ation is a tried and true tac­tic for solv­ing all kinds of challenges. If X is too hard, solve an eas­ier prob­lem. Some­times this helps you solve the origi­nal prob­lem, but some­times you re­al­ize you didn’t ac­tu­ally need to solve the harder prob­lem, and your al­gorithm for the re­laxed prob­lem works just fine on all the prob­lem in­stances you care about.

There’s a sneaky difficulty to know­ing what you ac­tu­ally want. Maybe you’re a food de­liv­ery com­pany and you think you need to solve the trav­el­ing sales­man prob­lem (NP-Hard), but upon fur­ther thought you re­al­ize this is Man­hat­tan and ev­ery­thing is on a grid! Now you only care about a spe­cific sub­set of trav­el­ing sales­man prob­lems, one’s that have a very spe­cific struc­ture. Re­joice! This sub­set is ac­tu­ally tractable! It may even be effi­cient!

But for a lot of real world scenarios, there is no formalized sub problem. You get that you’re not dealing with the fully general version, but you don’t understand your use cases enough to formalize them as a subset. Clearly, there is rich structure to the irl instances of the problems that you’re solving, otherwise you wouldn’t be making any progress. But until you can figure out exactly what that structure is and re-formalize a relaxation, you’re in the position of trying to solve a formally “intractable” problem by trying something that very smart people think is a decent idea, and having it work out most of the time.

That’s not to say that things are easy. There are plenty of prob­lems that we can’t cur­rently effi­ciently solve or prove, ones which would be a HUGE boon to hu­man ca­pa­bil­ities if re­solved. And Har­vey Fried­man seems to have de­voted his whole life to show­ing how in­com­plete­ness sneaks into oth­er­wise nat­u­ral and nor­mal math­e­mat­i­cal do­mains.

The Vast Math­e­mat­i­cal Wilderness

I’ve been shar­ing a lot of tech­ni­cal de­tails, but what I re­ally hope is to com­mu­ni­cate an aes­thetic sense. From Am­bram Dem­ski in a post on some­thing else:

To ex­plain Gödel’s first in­com­plete­ness the­o­rem, Dou­glas Hofs­tadter likened truth and false­hood to white and black trees which never touch, but which in­ter­lace so finely that you can’t pos­si­ble draw a bor­der be­tween them. [...] A gap­ing frac­tal hole ex­tend­ing self-similar ten­drils into any suffi­ciently com­plex area of mathematics

(Am­bram’s draw­ing)
(Hofs­tadter’s origi­nal draw­ing)

The se­cret to math’s gen­er­al­ity is that we aren’t ac­tu­ally deal­ing with vasts swaths of the for­mal world. We live in a strange and struc­tured bub­ble near the trunk of Gödel’s tree. Re­mem­ber ear­lier how I was de­scribing that most for­mal­iz­able prob­lems are un­de­cid­able, and most de­cid­able prob­lems are mired in in­tractable com­plex­ity? And de­spite that our al­gorithms work on a freaky amount of the prob­lems we ac­tu­ally in­ter­act with? It’s be­cause those pes­simistic re­sults aren’t facts about our lit­tle bub­ble; they’re facts about the jagged waste­lands that lay be­yond our bub­ble, out in the frac­tal folds of the wild.

This is both surprising and expected. Borges’ Library of Babel is a sprawling haunt that contains every work of literature ever produced, and it’s still endless enough to be mostly nonsensical gibberish. Of all the possible ways that letters can be combined, only a small fraction creates words. As it is with words, so with math.

Gödel and Tur­ing helped teach us that this wilder­ness ex­ists. Our ex­pe­rience is what taught us that we live on the edge of this wilder­ness. We don’t know the ex­act bound­ary of our lit­tle bub­ble. The bub­ble con­tains the prob­lems that we hap­pen to en­counter and hap­pen to care about. It’s a bound­ary that’s highly con­tex­tual and very hu­man. One way to see the ad­vance of knowl­edge is as the ex­plo­ra­tion of this bound­ary. It too will prob­a­bly have un­re­solv­able frac­tal na­ture.


I could leave it there. “Gödel illu­mi­nated a gi­ant wilder­ness, but luck­ily we al­most never have to deal with it.”

Ex­cept I don’t think that’s quite right. There are con­texts the force you out of the pre­dictable bub­ble, and into wild.

Jour­ney through the Frac­tal Folds

Self-refer­ence is at the heart of all un­de­cid­abil­ity and in­com­plete­ness. If you’re in a stan­dard STEM en­vi­ron­ment you might think that self-refer­ence is a weird cor­ner case, when re­ally it’s what peo­ple spend most of their time do­ing. Pre­dict­ing ad­ver­sar­ial in­tel­li­gent agents that run on similar hard­ware is the alt text on life. All agents are em­bed­ded slices of re­al­ity, but they aren’t like other slices of re­al­ity; they’re ex­actly the kind of strange-loops that en­gen­der self-refer­ence and in­com­plete­ness.

The only thing harder than hit­ting a mov­ing tar­get is hit­ting a mov­ing tar­get that is watch­ing you try­ing to hit it. The best move for you de­pends on what your op­po­nent thinks their best move is, which de­pends on what you think your best move is; it’s Si­cilian rea­son­ing all the way up. The clas­sic game the­ory pre­scrip­tion is to play ran­domly, called a mixed strat­egy. This is closely re­lated to min­i­max “Min­i­mize the max­i­mum dam­age that the other can do to you”.

Both mixed strategies and minimax black box your opponent. They ignore any information you may have about your opponent and make no effort to anticipate the other’s moves. They do so even when they could do better by peeking inside the black box.

You can beat most people who know the rules of chess with Scholar’s mate, even though such an opening would let a master destroy you. In rock paper scissors, a lot of people default to a predictable cyclical strategy that you can take advantage of, even though a sufficiently smart player could catch on to your predictable strategy and reliably take advantage of you. Traditional game theory sacrifices any plausible yet uncertain edge you might have for the guarantee that you will never be taken advantage of.
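Here’s that trade as a toy simulation: a player that peeks inside the black box by tracking which move tends to follow the opponent’s last move. Against a predictable cycler it wins nearly every round, where the “safe” uniform-random strategy would only break even; and, of course, a smarter opponent could turn this same predictability against it.

```python
# Exploiting a predictable rock-paper-scissors opponent by learning
# their move-to-move transitions (a toy first-order model).
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def exploit(history):
    """Counter whatever most often followed the opponent's last move."""
    if len(history) < 2:
        return random.choice(list(BEATS))
    last = history[-1]
    follows = Counter(history[i + 1] for i in range(len(history) - 1)
                      if history[i] == last)
    if not follows:
        return random.choice(list(BEATS))
    prediction = follows.most_common(1)[0][0]
    return BEATS[prediction]

def score(mine, theirs):
    return 0 if mine == theirs else (1 if BEATS[theirs] == mine else -1)

cycle = ["rock", "paper", "scissors"]
history, total = [], 0
for i in range(300):
    move = cycle[i % 3]
    total += score(exploit(history), move)
    history.append(move)
# After a few warm-up rounds the predictor wins every round, so total
# lands near 300; a uniform-random player's expected total is 0.
```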

In many hu­man af­fairs, min­i­max­ing and mixed strate­gies are not enough. Play­ing ran­domly might mean you can never be out­smarted, but your ex­pected win­nings might still be well be­low what it takes to sur­vive. Some­times try­ing to out­smart the com­pe­ti­tion may be your only choice.

Though a lot of sci­ence is in­duc­tive, com­pet­i­tive land­scapes are of­ten anti-in­duc­tive. There’s ex­plicit pres­sure to keep things un­pre­dictable and in­com­pre­hen­si­ble. Flirt­ing is anti-in­duc­tive:

(link)

Now, flirt­ing isn’t that crazy. The goal is not to make it so that NO ONE can in­ter­est you; it’s to move around enough that the only way they other per­son can suc­ceed is by ditch­ing their canned strat­egy and pay­ing at­ten­tion.

The anti-induction of markets is a lot more intense. Warfare even more so. All your predictions of the enemy are based off of their past behavior, behavior that may or may not have been done explicitly to fool you. There are no perfectly reliable signals, and you’re forced to get down in the weeds and try your best to outcompete others locally, without fully generalized tactics.

None of this is news to peo­ple who study strat­egy. If you want to read more about OODA here and here are great re­sources. Alter­na­tively, you can learn ev­ery­thing you need to know by watch­ing these robots:

(link to ac­tual video)

Look at 1:35 for a perfect fake out. Two bots are in a headlock, motors at full force. One stalls out. To not go off the edge, the other stalls too, and right away the first bot rams them at full speed, faster than they can reorient. Sumo robots is a game of PURE tempo. Every movement has a perfect counter. The only path to victory is to get inside the other’s control loop.

The Ghost of Gödel

Every com­ment I’ve made so far about strat­egy has been le­git, but it hasn’t needed Gödel and Tur­ing to ex­plain. If any­thing, us­ing the lens of com­pet­i­tive agents makes the halt­ing prob­lem more un­der­stand­able. No, to see where U&I comes into play, we need to go deeper.

Imagine amending the sumo bot rules: before each bout, both sumo bots are given the other’s source code. What changes? This level of apparent determinacy might tempt you into thinking that there could be some way to always win; after all, it’s just totally predictable code, right? Nope, still undecidable.

There are other games that don’t have the multi-polar quality of sumo bots and rock paper scissors, where every move has a perfect counter. Take computer security. This field feels much more like a rising tide than a balanced wheel. Eventually a new offense will be made that renders a lot of previous defense obsolete. Then a new defense will be made that renders a lot of old offense obsolete. Heuristic piles on top of heuristic and the tides rise.

Will this be a never ending game of spy vs. spy as well? Some aspects of security, like cryptography, seem to be a clear victory for defense. On offense, polymorphic viruses exist that alter their own code while retaining logical identity to avoid detection, virus detection in full generality is impossible, and reliable detection of bounded length viruses is NP-complete.

Re­call, with the bulk of en­g­ineer­ing we don’t fret when our prob­lem is un­de­cid­able or NP-hard. We sim­ply do our best and a silly amount of the time we are re­warded. We ap­prox­i­mate, ap­ply heuris­tics, and do what feels right. But that’s what we see hap­pen when we tackle sta­tion­ary or mov­ing tar­gets. Com­puter se­cu­rity is en­g­ineer­ing that’s try­ing to hit a mov­ing tar­get that fights back.

Virus detection being undecidable is exactly what ensures that detection and evasion will be a never ending progression of increasingly sophisticated heuristics. The fractal tendrils of incompleteness are what provide the inexhaustible landscape on which intelligent systems battle.

When it comes down to it, all games are open games. The rules, boundaries, and scoring all shift with time and intent. Games with rigid boundaries are only possible through the mutual agreement of all players. Soccer only works because everyone has agreed to abide by the rules. The more adversarial the context, the more the boundaries dissolve. Gödel’s legacy is to show us that there is no limit to how far the boundaries can dissolve. He guarantees the inexhaustibility of the dark wilderness, a wilderness that becomes the new playing field for life as adversarial actors butt heads.

In this world, there are no fully gen­eral an­swers. Good ap­prox­i­ma­tions don’t stay good ap­prox­i­ma­tions for long. Lack­ing the cer­tainty and guaran­tees we are used to, all you can do is ap­ply the full force of your be­ing. Every vic­tory is nec­es­sar­ily cir­cum­stan­tial.

What is called the spirit of the void is where there is noth­ing. It is not in­cluded in man’s knowl­edge. Of course the void is noth­ing­ness. By know­ing things that ex­ist, you can know that which does not ex­ist. That is the void.

The Book of Five Rings