Relating an agent's beliefs and desires to its actions has been one of the major challenges in the study of practical reasoning, i.e., the reasoning we use to decide what to do. Practical reasoning has long been a subject of philosophical study. Examples include the analysis of various forms of the so-called practical syllogism, whose classical pattern runs roughly as follows: an agent intends to bring about E; he considers that he cannot bring about E unless he does A; therefore he sets himself to do A.
The logical analysis of practical reasoning was pioneered by G. H. von Wright ([vW63], [vW72]). Since the 1980s, AI research has seen a revival of interest in theories of knowledge and action. The relationship between knowledge and action is being investigated intensively in the field of intelligent-agent research, and a number of sophisticated theories have been proposed to describe this relationship. In this section I will briefly examine the role that epistemic concepts play in some of the more influential agent theories. For an overview of recent agent theories consult [WJ95].
Modern theories of knowledge and action are built up from some basic mental, i.e., informational and motivational, attitudes (like knowledge, belief, goal, intention), together with some ``objective'' modalities (like time, possibility, chance). The latter concepts are far less controversial than the former. Formal theories of these ``objective'' modalities can be developed independently of any theory of mental concepts, but the converse is not necessarily true. For example, systems of modal or temporal logic do not presuppose any logic of mental notions, whereas a theory of intention is typically developed on the basis of some temporal logic.
As to the mental attitudes, there is no agreement on the choice of the set of primitive notions. However, the informational aspect seems so fundamental that it is generally agreed that knowledge (in the sense of know-that) cannot be defined in terms of the other notions and should be included among the basic concepts. Conversely, the concept of knowledge is essential to theories of other mental notions like intention, know-how, or even desire and goal. For example, know-how is normally defined in terms of knowledge: knowing how to achieve a goal includes the knowledge that after doing something, certain facts will obtain.
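For illustration, this reading of know-how can be rendered schematically in a dynamic-logic style notation; the operators $K_a$ (agent $a$ knows) and $[\alpha]$ (after performing action $\alpha$) are used here purely for exposition and do not belong to any of the theories discussed below:
\[
\mathit{KnowsHow}(a,\varphi) \;\approx\; \exists\alpha\,\bigl(K_a\,[\alpha]\varphi\bigr),
\]
i.e., the agent knows of some action such that, after performing it, $\varphi$ will hold.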
The first formalizations of knowledge and action in AI were carried out in the late 1970s and early 1980s. The primary interest was to study knowledge as a precondition for executing plans. Inspired by ideas of McCarthy and Hayes ([MH69]), Robert Moore developed a formal theory of knowledge which is essentially the modal logic S4, but expressed in a first-order metatheory ([Moo90]).
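For reference, the characteristic S4 schemata for a knowledge operator $K$ are the following (given here in standard modal notation rather than Moore's first-order rendering):
\[
K(\varphi\rightarrow\psi)\rightarrow(K\varphi\rightarrow K\psi), \qquad K\varphi\rightarrow\varphi, \qquad K\varphi\rightarrow KK\varphi,
\]
together with the necessitation rule: from $\vdash\varphi$ infer $\vdash K\varphi$.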
More recently, Cohen and Levesque's theory of intention ([CL90]) has been very influential. Following Bratman's analysis of intention and of the role that intentions play in human practical reasoning ([Bra87], [BIP88]), Cohen and Levesque identify the key properties that must be satisfied by a reasonable theory of intention. They develop a formal theory based on two primitive mental notions: belief and goal. The logic of belief is assumed to be the modal system KD45, and that of goal KD. With the addition of two temporal modalities, indicating that some event will happen next and that some event has just happened, they are able to define the concept of intention and to show that many of Bratman's requirements for a theory of intention are satisfied.
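To make the assumed logics explicit, the KD45 schemata for a belief operator $\mathit{BEL}$ amount to the following (this is the standard axiomatic presentation, not a reconstruction of Cohen and Levesque's full system):
\[
\begin{array}{ll}
(K) & \mathit{BEL}(\varphi\rightarrow\psi)\rightarrow(\mathit{BEL}\,\varphi\rightarrow\mathit{BEL}\,\psi)\\
(D) & \mathit{BEL}\,\varphi\rightarrow\neg\mathit{BEL}\,\neg\varphi\\
(4) & \mathit{BEL}\,\varphi\rightarrow\mathit{BEL}\,\mathit{BEL}\,\varphi\\
(5) & \neg\mathit{BEL}\,\varphi\rightarrow\mathit{BEL}\,\neg\mathit{BEL}\,\varphi
\end{array}
\]
The KD logic of goal retains only the schemata (K) and (D) for the operator $\mathit{GOAL}$: goals distribute over implication and are mutually consistent, but no introspective schemata are assumed.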
In another attempt to formalize Bratman's theory of intention, Rao and Georgeff ([RG91b], [RG91a]) have developed a logical framework for agent theory based on three primitives: belief, desire, and intention. Within this BDI (Belief-Desire-Intention) architecture, belief is treated as a basic modality satisfying the KD45 axioms, while desire and intention are assumed to be KD-modalities. The BDI architecture has subsequently been adopted and further developed by a number of researchers ([GR95], [Sin94], [Sin95], [Woo96]).
In related work on formalizing properties of intelligent agents, Meyer et al. have proposed the KARO (Knowledge-Abilities-Results-Opportunities) architecture ([vdHvLM94], [vLvdHM94]). In this architecture, KD45 is assumed as the logic of belief, and S5 is used to formalize knowledge.
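The difference between the two logics is the truth requirement: in place of the consistency schema (D) of KD45, S5 contains the veridicality schema
\[
(T)\quad K\varphi\rightarrow\varphi,
\]
together with the positive and negative introspection schemata (4) and (5) for the knowledge operator $K$; belief, by contrast, is required only to be consistent, not true.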
Although not strictly a logic-based theory of agency, the AOP (agent-oriented programming) paradigm ([Sho93]) also deals with the behavior of rational agents. Again, belief is taken as a basic mental concept and is formalized using the modal logic KD45. Moreover, belief is also used to characterize commitment (or obligation), another basic mental concept: besides the KD-axioms, the concept of commitment must satisfy some additional rationality postulates, which basically say that commitments are known.
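This last postulate can be written down using an obligation operator $\mathit{OBL}$ alongside $\mathit{BEL}$ (the notation is chosen here only to illustrate the postulate and is not Shoham's own):
\[
\mathit{OBL}\,\varphi \rightarrow \mathit{BEL}\,\mathit{OBL}\,\varphi,
\]
i.e., whenever the agent is committed to $\varphi$, it believes that it is so committed.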
To summarize, the most influential of the recent agent theories are developed on the basis of modal epistemic logic. I shall now argue that the modal approach is not suitable, because it does not yield specifications of cognitive states that can play a justificatory role for the agents' actions. The agent model provided by modal epistemic logic does not accord with generally agreed facts about the nature of intelligent agents, in particular with the fact that agents are limited in the amount and complexity of the information they can handle.