Is epistemic logic useful for agent foundations?

The title isn’t a rhetorical question; I’m actually looking for answers. This summer, I’ll have the opportunity to attend a summer school on logic, language and information. Whether or not I go depends to a significant extent on whether what they’ll be teaching (particularly epistemic logic, but also some other topics in logic and language) will be useful for AI safety research. Here is a summary of epistemic logics, and here are the courses I’ll be able to take. I’ve already taken classes in first-order logic, but I’m currently uncertain about the marginal value of these extra courses.

Reasons to think learning epistemic logic will be useful for agent foundations:

  • MIRI’s agent foundations work relies heavily on concepts from mathematical logic (provability logic and Löb’s theorem in particular)

  • Epistemic logic is particularly concerned with statements about knowledge and belief, which seem very relevant to reasoning about agents (see the sketch after this list)

  • Learning about epistemic logic is probably useful for thinking about other forms of logic
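
For concreteness, here is a minimal sketch of what standard epistemic logic looks like (the textbook Kripke-semantics presentation, not anything specific to the summer school’s syllabus). Knowledge is a modal operator $K_a$ (“agent $a$ knows”) evaluated over a Kripke model $M = (W, R_a, V)$ of possible worlds:

\[
M, w \models K_a \varphi \iff \forall v \, (w R_a v \Rightarrow M, v \models \varphi),
\]

that is, $a$ knows $\varphi$ at world $w$ iff $\varphi$ holds at every world $a$ considers possible from $w$. Characteristic axioms include distribution, $K_a(\varphi \to \psi) \to (K_a \varphi \to K_a \psi)$, and positive introspection, $K_a \varphi \to K_a K_a \varphi$.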

Reasons to think it won’t be useful:

  • As far as I can tell, it doesn’t appear in MIRI’s research guide, in any of their papers, or in the Sequences

  • Epistemic logic seems to be mostly non-probabilistic, and to be talking about a fundamentally different sort of knowledge from graded Bayesian credence (see the contrast sketched below); this increases my credence that it’s the sort of philosophy that isn’t going to be of much practical use
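
To make that contrast concrete (a standard observation, not a claim about any particular research agenda): in the Kripke picture, $K_a \varphi$ either holds at a world or it doesn’t, whereas a Bayesian agent assigns a graded credence and updates it by conditionalization:

\[
K_a \varphi \in \{\text{true}, \text{false}\} \qquad \text{vs.} \qquad P_a(\varphi \mid E) = \frac{P_a(\varphi \wedge E)}{P_a(E)} \in [0, 1].
\]

There are probabilistic extensions of epistemic logic (e.g. Fagin and Halpern’s work on reasoning about knowledge and probability), but the core systems are qualitative.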