Is epistemic logic useful for agent foundations?

The title isn’t a rhetorical question; I’m actually looking for answers. This summer, I’ll have the opportunity to attend a summer school on logic, language and information. Whether or not I go depends to a significant extent on whether what they’ll be teaching (particularly epistemic logic, along with some other topics in logic and language) will be useful for AI safety research. Here is a summary of epistemic logics, and here are the courses I’ll be able to take. I’ve already taken classes in first-order logic, but right now I’m uncertain about the value of doing these extra courses.

Reasons to think learning epistemic logic will be useful for agent foundations:

  • MIRI’s work relies heavily on high-level concepts in logic

  • Epistemic logic is particularly concerned with statements about knowledge and belief, which seem very relevant to reasoning about agents (see the sketch after this list)

  • Learning about epistemic logic is probably useful for thinking about other forms of logic
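
For concreteness, here is roughly what statements in epistemic logic look like (this is standard textbook material, not drawn from the course descriptions): the operator K_a reads “agent a knows that”, and the usual axioms describe how it behaves.

```latex
% Knowledge operator K_a: "agent a knows that ..."
% Distribution axiom (K): knowledge is closed under known implication
K_a(\varphi \rightarrow \psi) \rightarrow (K_a \varphi \rightarrow K_a \psi)
% Positive introspection (4): if a knows phi, then a knows that she knows phi
K_a \varphi \rightarrow K_a K_a \varphi
```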

Reasons to think it won’t be useful:

  • As far as I can tell, it doesn’t appear on MIRI’s research guide, nor in any of their papers, nor in the Sequences

  • Epistemic logic seems to be mostly non-probabilistic, and to concern a fundamentally different sort of knowledge than probabilistic Bayesian knowledge (see the sketch below); this increases my credence that it’s the sort of philosophy that isn’t going to be of much practical use
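
To illustrate the contrast I have in mind (my own gloss on standard definitions, not a claim about the course content): epistemic logic evaluates knowledge as a binary, world-relative operator over a Kripke model, whereas Bayesian reasoning assigns graded credences and updates them by conditioning.

```latex
% Epistemic logic: K_a \varphi holds at world w iff \varphi holds at every world
% that agent a considers possible from w (accessibility relation R_a)
M, w \models K_a \varphi \iff \forall v\, (w R_a v \rightarrow M, v \models \varphi)
% Bayesian account: graded credence in \varphi, updated on evidence e by Bayes' rule
P(\varphi \mid e) = \frac{P(e \mid \varphi)\, P(\varphi)}{P(e)}
```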