Dependent Type Theory and Zero-Shot Reasoning


When I sit down to write code, I can safely throw everything I can think of at the problem and just keep trying to run the code until it works—and when it works, I can actually see it working. If my code runs, it means I actually had to understand enough of the moving parts to get it to run. This is especially true in languages like Haskell with strong type constraints preventing me from doing anything too stupid.

When I sit down to prove a theorem, I have to be much more careful. One misstep in my reasoning and my whole proof will be invalid, without me even knowing I made a mistake until I or someone else catches it later. If my proof goes through, it's possible I understood enough of the moving parts to make it work, but it's also possible I didn't.

The standard solution to the problem of not having something to test your proof against when doing math is to use a proof checker, which lets you turn an arbitrary math problem into a programming problem. Instead of having to just get the right answer without ever being able to actually check your answer, you get lots of chances to test possible solutions.

Proof checkers are great, but traditional proof checkers are also pretty limited. Concepts like equality require specialized machinery like paramodulation to properly resolve; introducing new data types requires writing out long lists of PA-like axioms; and in general, modern functional programming tools that make writing out math easier, like pattern matching, ADTs, and monads, are just missing. When I wrote my own theorem prover/proof checker a while ago, I ran into exactly these sorts of problems.

Dependent type theory is the modern way to solve all of these problems. Both equality and the natural numbers are defined just by giving the constructors, no axioms needed. For-alls and implications are just functions. Proving theorems really just feels like writing Haskell code.
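For example (a sketch in Lean syntax): under the propositions-as-types reading, an implication p → q is literally the type of functions from proofs of p to proofs of q, so modus ponens is just function application.

```lean
-- Sketch: an implication is a function type, so modus ponens
-- is nothing more than applying that function to an argument.
theorem modus_ponens {p q : Prop} (h : p → q) (hp : p) : q :=
h hp
```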

I'm not going to try to give a dependent type theory tutorial here, because I don't think I would be able to do better than the one I used, Theorem Proving in Lean, which I highly recommend taking a look at. That being said, I do want to give a couple of examples of what I mean before I move on. Don't worry if you don't understand this part; I just want to show you some actual code in a dependent type theory.

I said equality and the natural numbers were just data types, but to show you what I mean, here is the actual definition of the natural numbers in the Lean standard library:

inductive nat
| zero : nat
| succ (n : nat) : nat

where : means "has type of." Those few lines are all that are required to completely define the semantics of the natural numbers—the induction rule is automatically synthesized from the above definition.
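To illustrate what that synthesized recursor buys you (a sketch in Lean 3 syntax; `my_add` and `zero_my_add` are hypothetical names for illustration), you can define functions by structural recursion and prove facts by induction with no further axioms:

```lean
-- Sketch: addition by structural recursion on the second argument,
-- plus an induction proof using the auto-generated recursor.
def my_add : nat → nat → nat
| n nat.zero     := n
| n (nat.succ m) := nat.succ (my_add n m)

-- my_add nat.zero n = n is not definitional; it needs induction on n.
theorem zero_my_add (n : nat) : my_add nat.zero n = n :=
nat.rec_on n rfl (λ m ih, congr_arg nat.succ ih)
```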

Equality is slightly more complex, but only because you need a little bit more boilerplate. Here is the actual definition of equality in the Lean standard library:

inductive eq {α : Sort u} (a : α) : α → Prop
| refl : eq a

which is really just the Lean translation of the Haskell

data Equals a = Refl

where eq a represents a = a and refl stands for reflexivity, the rule that a = a always holds.
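As a taste of what this single constructor buys you (a sketch; `my_symm` is a hypothetical name), even symmetry of equality is not an extra axiom: it falls out of refl plus rewriting.

```lean
-- Sketch: symmetry derived from refl alone, with no extra axioms.
-- ▸ rewrites the goal along h; the only constructor involved is refl.
universe u

theorem my_symm {α : Sort u} {a b : α} (h : a = b) : b = a :=
h ▸ rfl
```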


Even despite all of the advances in automated proof checking with dependent type theory, there's still a big disadvantage to typing up all of your proofs in a dependent type theory like Lean: formality. When you write a proof by hand, you have the ability to elide all sorts of details that you have to make explicit when you type them up into a proof checker. Thus, the main reason not to use a proof checker is the very reason to use one in the first place.

I think there exists a middle ground between these two extremes, however. One thing which has most impressed me working with Nate Soares is his ability to construct massive mathematical proofs on paper without a proof checker. When I asked him how he did this, he told me he used the type checker in his head. I've been trying to do this as well recently, and I want to try to share what that looks like.

Lawvere's theorem is a general result in category theory that states that if there exists a surjective function f : A → (A → B) (technically, f need only be weakly point-surjective, but we'll elide that detail), then B has the property that any function g : B → B has some fixed point b : B such that g b = b. To give you a sense of how important this theorem is, it's the general result underlying both Gödel's incompleteness theorems and Löb's theorem. I claim that Lawvere's theorem is nearly trivial to prove if you just imagine you have a type checker in your head.

Our starting environment, where the ? represents the thing we're trying to prove, is

f : A → (A → B)
surf : ∀ (g' : A → B), ∃ (a : A), f a = g'
g : B → B
? : ∃ (b : B), g b = b

from which point the only possible way to proceed is to use surf, which, under the interpretation where for-alls are functions, requires us to pass it some g' : A → B. If you think about how to construct a g' with the right type out of what we currently have available, there really aren't very many options. In fact, I think there are basically just the two: λ a, f a a and λ a, g (f a a), but since we know we're trying to prove something about g, the definition with g in it seems like the safest bet. Thus, we get

g' : A → B := λ a, g (f a a)

which, passing to surf, gives us

surf g' : ∃ (a : A), f a = λ a', g (f a' a')

which we can (classically) unpack into

a : A
h : f a = λ a', g (f a' a')

and if two functions are equal, then they must be equal on all inputs, and the only possible input available is a, so we get

h a : f a a = g (f a a)

which, where b := f a a, completes the proof.
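For comparison, here is what that in-head argument looks like written out as actual Lean code (a sketch in Lean 3 syntax, with surjectivity phrased pointwise so that the proof needs no function extensionality):

```lean
-- Sketch: Lawvere's fixed-point argument, surjectivity phrased pointwise.
theorem lawvere {A B : Type} (f : A → (A → B))
    (surf : ∀ (g' : A → B), ∃ (a : A), ∀ (a' : A), f a a' = g' a')
    (g : B → B) : ∃ (b : B), g b = b :=
begin
  -- pass λ a, g (f a a) to surf and unpack the resulting existential
  cases surf (λ a, g (f a a)) with a h,
  -- h a : f a a = g (f a a), so b := f a a is the fixed point
  exact ⟨f a a, (h a).symm⟩
end
```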


I was talking with Scott Garrabrant late one night recently and he gave me the following problem: how do you get a fixed number of DFA-based robots to traverse an arbitrary maze (if the robots can locally communicate with each other)? My approach to this problem was to come up with and then try to falsify various possible solutions. I started with a hypothesis, threw it against counterexamples, fixed it to resolve the counterexamples, and iterated. If I could find a hypothesis which I could prove was unfalsifiable, then I'd be done.

When Scott noticed I was using this approach, he remarked on how different it was from what he was used to when doing math. Scott's approach, instead, was to just start proving all of the things he could about the system until he managed to prove that he had a solution. Thus, while I was working backwards by coming up with possible solutions, Scott was working forwards by expanding the scope of what he knew until he found the solution.

I think that the difference between my approach and Scott's approach elucidates an important distinction regarding strategies for solving problems in general. In Alex Zhu's post on zero-shot reasoning, he talked about the difference between zero-shot reasoning, where you solve the problem without ever testing your solution, and many-shot reasoning, where you solve the problem by testing your solution repeatedly. In many ways, Scott's approach to the robot problem was a zero-shot approach, whereas my approach was a many-shot approach.

One thing that's interesting in this analogy, however, is that neither Scott nor I was actually using any tools to solve the problem aside from our own brains. Thus, in some sense, both of our approaches were zero-shot. And yet, my approach certainly feels far more many-shot than Scott's. I'm going to call what I did in this situation many-shot reasoning for a zero-shot problem, and I think this idea is one that applies a lot more broadly than just in that conversation. One possible way to understand the distinction between MIRI's and Paul Christiano's strategies towards AI alignment, for example, is using this distinction. In this interpretation, MIRI is solving the zero-shot problem with zero-shot reasoning, whereas Paul is solving the zero-shot problem with many-shot reasoning.

In particular, I think this idea maps well onto why I find dependent type theory so useful for solving math problems. When I write up a math problem in Lean, I'm turning what is fundamentally a zero-shot problem into something more like a many-shot problem, since it gives me a way to constantly be testing my hypotheses. But something even more interesting is happening when I'm using the type checker in my head. In that case, I'm doing something more like what I did in the robot problem, where I'm using many-shot reasoning to solve a zero-shot problem. I'm never actually using anything to test my hypotheses—instead, I'm building a model in my head of the thing which would test my hypotheses, and then testing them against that.

I think this hints at what a solution to the zero-shot reasoning problem might look like: if you can build a model of the corresponding many-shot problem which is accurate enough, then you can use many-shot reasoning on that model to solve the original zero-shot problem.