Yes. I had a course on Logic and Knowledge Representation last semester (October to January). In parallel, I attended an Autumn School on AI in late October, which included two 2-hour courses on Epistemic Logic. The speaker went super fast, so those 4 hours were ultra-productive (here are my notes in French). However, I did not fully understand everything, so I was happy to solidify my knowledge with practical exercises, exams, homework, etc. during my Logic and Knowledge Representation course. The two approaches (autumn school and semester course) were complementary and gave me a good grasp of logic in general, and of epistemic logic in particular.
This semester (February to May), I had a course on Multi-Agent Systems, and knowing about epistemic logic, and modal logic more generally, was handy. When the environment of a robot changes, the robot needs to take this change into account and integrate it into its representation of the world. When two agents communicate, having a representation of the other agent's knowledge is essential to avoid sending redundant or, worse, contradictory information.
A large part of Multi-Agent Systems is about communication, or how to harmonize global knowledge, so knowing about epistemic logic is advantageous. In this article I talk about the gossip problem in the context of a project we had to do in a course on Multi-Agent Systems. The teacher who introduced me to the gossip problem was the same one who taught the Modal Logic/Epistemic Logic course. Epistemic logic is useful for getting the full picture of communication protocols and the lower bounds, in bits of information, needed to communicate secrets.
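To make the gossip problem concrete, here is a minimal sketch (my own illustration, not from the course project): n agents each hold one secret, a phone call lets two agents exchange everything they know, and the classic result is that 2n−4 calls suffice for n ≥ 4, using four "hub" agents who collect and redistribute the secrets. Agent indices, function names, and the particular call schedule below are my choices for the example.

```python
def call(knowledge, a, b):
    """Agents a and b exchange everything they know (a single phone call)."""
    merged = knowledge[a] | knowledge[b]
    knowledge[a] = knowledge[b] = merged

def gossip(n):
    """Classic 2n-4 call schedule for n >= 4 agents (0-indexed).

    Agents 0..3 act as hubs: outer agents phone in their secrets,
    the hubs share everything among themselves in 4 calls, then the
    outer agents are called back.
    """
    knowledge = {i: {i} for i in range(n)}  # each agent starts with its own secret
    calls = []
    # Phase 1: agents 4..n-1 each call hub 0 (n-4 calls).
    for i in range(4, n):
        calls.append((i, 0))
    # Phase 2: the four hubs pool all secrets (4 calls).
    calls += [(0, 1), (2, 3), (0, 2), (1, 3)]
    # Phase 3: hub 1 calls back agents 4..n-1 (n-4 calls).
    for i in range(4, n):
        calls.append((i, 1))
    for a, b in calls:
        call(knowledge, a, b)
    return knowledge, len(calls)

knowledge, num_calls = gossip(6)
assert all(k == set(range(6)) for k in knowledge.values())  # everyone knows all 6 secrets
assert num_calls == 2 * 6 - 4  # 8 calls
```

Epistemic logic enters when you ask not just "does everyone know every secret?" but "who knows that who knows what?", which is exactly the kind of higher-order question the dynamic epistemic formalization of gossip addresses.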
A little anecdote to end this comment: last week a mathematician/logician came to a Meetup I organized on AI Safety. At the beginning, it was only the two of us, and he explained how he went from logic to AI. "You know, all my life I have been studying very theoretical problems. To be very specific, group theory was the most applied math I have ever done. But now that I study AI, having studied intractable/undecidable problems for decades, I know almost instantaneously which theory will work for AGI and which won't." We ended up having a long discussion about logic and knowledge representation. We could not have had this chat if I had not taken some courses on epistemic logic.