In chapter 1 of his book Reasoning about Rational Agents, Michael Wooldridge identifies some of the reasons for trying to build rational AI agents in logic:
There are some in the AI research community who believe that logic is (to put it crudely) the work of the devil, and that the effort devoted to such problems as logical knowledge representation and theorem proving over the years has been, at best, a waste of time. At least a brief justification for the use of logic therefore seems necessary.
First, by fixing on a structured, well-defined artificial language (as opposed to unstructured, ill-defined natural language), it is possible to investigate the question of what can be expressed in a rigorous, mathematical way (see, for example, Emerson and Halpern [50], where the expressive power of a number of temporal logics are compared formally). Another major advantage is that any ambiguity can be removed (see, e.g., proofs of the unique readability of propositional logic and first-order predicate logic [52, pp.39-43]).
Transparency is another advantage: “By expressing the properties of agents, and multiagent systems as logical axioms and theorems in a language with clear semantics, the focal points of (the theory) are explicit. The theory is transparent; properties, interrelationships, and inferences are open to examination. This contrasts with the use of computer code, which requires implementational and control aspects within which the issues to be tested can often become confused.” [68, p.88]
Finally, by adopting a logic-based approach, one makes available all the results and techniques of what is arguably the oldest, richest, most fundamental, and best-established branch of mathematics.
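To make the transparency point concrete, here is a minimal sketch (not from Wooldridge's book; the facts and rule names are invented for illustration) of an agent whose knowledge is a set of explicit axioms, with inference by simple forward chaining over Horn rules, so every derived conclusion can be traced back to the axioms that support it:

```python
# An agent's knowledge base as explicit, inspectable axioms:
# facts, plus Horn rules of the form (premises -> conclusion).
facts = {"battery_ok", "path_clear"}
rules = [
    ({"battery_ok", "path_clear"}, "can_move"),
    ({"can_move", "goal_ahead"}, "should_advance"),
]

def forward_chain(facts, rules):
    """Derive every conclusion supported by the current axioms."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only when all its premises are derived.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```

Because the knowledge is data rather than control flow, one can examine exactly which properties hold and why; here, `can_move` is derivable but `should_advance` is not, since no axiom asserts `goal_ahead`.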
In An Introduction to MultiAgent Systems, he writes:

By moving away from strictly logical representation languages… one can build agents that enjoy respectable performance. But one also loses what is arguably the greatest advantage that the logical approach brings: a simple, elegant logical semantics.