The proof systems defined for explicit knowledge (chapter 4) and algorithmic knowledge (chapter 5) provide a procedural semantics for these concepts. It remains an open problem to develop an intuitively acceptable declarative semantics for them. Although it is possible to develop a model theory along the lines of [Ho95] and to prove completeness of the specified systems with respect to those models, such a model theory is simply a reformulation of the procedural semantics and not very useful. It does not provide us with a tool to determine if all (semantically) valid epistemic statements have been captured by the proof system, and it does not allow us to analyze the concepts of knowledge and belief in terms of simpler, more fundamental concepts. But this is exactly what a well-motivated and intuitively acceptable semantics should do.
Semantic methods have also proved helpful for studying the complexity of modal epistemic logics. The complexity analysis of the specified logics of explicit and algorithmic knowledge remains an open issue. It is hoped that their complexities can be determined by embedding them into systems whose complexities are already known.
The investigated logics of explicit knowledge (DEK and its extensions) have been monotonic in two respects. First, their consequence operations are monotonic. Second, the knowledge of the agents always grows over time. A very interesting, still open problem is to develop logics of explicit knowledge based on non-monotonic logic, in which the agents can revise their knowledge when they find out that it is inconsistent. We may expect to find interesting connections with two other very active fields of AI research, viz. non-monotonic reasoning and the logic of belief revision. This seems to be a promising field of research and needs further investigation.
My study of algorithmic knowledge has concentrated on a single time point in the real time line. A formula like $K_a^t\varphi$, which states that agent $a$ can compute knowledge of $\varphi$ within $t$ time units, is primarily a statement about the agent's current ability to compute knowledge. It does not say anything about his actual knowledge $t$ time units from now -- except for the case $t = 0$. An interesting problem is to relate an agent's algorithmic knowledge at different time points to each other. Let me elaborate on that issue in some more detail.
Let the integers be the representation of the objective time structure (``real time line'', ``real world history''). For any integer $n$ we shall write $K_a^t\varphi\,[n]$ instead of $K_a^t\varphi$ to make clear that $n$ is the time point under consideration. That is, if at time $n$ the agent starts computing $\varphi$, then he needs at most $t$ time units to complete the task, so he will succeed at $n+t$ at the latest. The question is how the inference relations between an agent's algorithmic knowledge at two different time points can be formalized.
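To fix the intended reading (the bracket notation $[n]$ for the evaluation time is only a tentative device, and the identification below is a plausible reading rather than an established result), the boundary case mentioned above can be stated as the schema
\[
K_a^0\varphi\,[n] \;\leftrightarrow\; K_a\varphi\,[n],
\]
i.e., algorithmic knowledge with time bound $0$ coincides with explicit knowledge at the current moment, whereas for $t > 0$ the formula $K_a^t\varphi\,[n]$ expresses only a present ability whose successful exercise would be completed by time $n+t$.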
Consider the simplest case, explicit knowledge, first. Can we assume that an agent's explicit knowledge persists over time? For example, is the formula $K_a\varphi\,[n] \rightarrow K_a\varphi\,[n+1]$ valid? As I have argued in chapter 4, such a persistence axiom cannot be assumed for all formulae. So we may ask under which circumstances those persistence axioms are valid, or whether certain default inference rules can be used for reasoning about the evolution of explicit knowledge over time.
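A default counterpart of this persistence axiom might, for instance, take the following form (a sketch only, in the tentative notation introduced above, with Reiter-style defaults assumed as the underlying formalism):
\[
\frac{K_a\varphi\,[n] \;:\; K_a\varphi\,[n+1]}{K_a\varphi\,[n+1]}
\]
read as: if the agent explicitly knows $\varphi$ at time $n$, and it is consistent to assume that he still knows it one time unit later, then conclude that he knows $\varphi$ at time $n+1$.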
A related question is how an agent's reasoning capacities change over time. If at time $n$ he needs $t$ time units to compute $\varphi$, will the same still be true at time $n+1$? That is, is the formula $K_a^t\varphi\,[n] \rightarrow K_a^t\varphi\,[n+1]$ valid? This is apparently not true universally, since the truth value of $\varphi$ may not be the same at different states. Moreover, at time $n+1$ the agent may need more time to compute the same query. Nevertheless, it seems plausible to assume that an agent's reasoning capacities do not decrease over time. Under this assumption, the mentioned formula may be valid if $\varphi$ has a certain syntactic structure, e.g., if $\varphi$ is objective and does not contain any negation sign.
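Under that non-decreasing-capacity assumption, the restricted persistence principle suggested here would read (again in the tentative notation):
\[
K_a^t\varphi\,[n] \;\rightarrow\; K_a^t\varphi\,[n+1] \qquad \text{for objective, negation-free } \varphi,
\]
where ``objective'' is understood in the usual sense of containing no epistemic operators.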
Finally, the connection between $K_a^t\varphi\,[n]$ and $K_a\varphi\,[n+t]$ is worth examining. The former formula says that $\varphi$ is implicit knowledge of agent $a$ at time $n$, and that he has the capacity to make it explicit if he chooses to do so and if he has enough resources to carry out his computation, in this case $t$ time units. The latter formula says that $\varphi$ has actually become explicit knowledge of $a$ at time $n+t$. A framework for reasoning about knowledge and action should be able to formalize the agent's computation of $\varphi$ between $n$ and $n+t$.
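A first approximation of that connection might be a conditional of the form
\[
K_a^t\varphi\,[n] \;\wedge\; \mathit{does}_a(\varphi, n) \;\rightarrow\; K_a\varphi\,[n+t],
\]
where $\mathit{does}_a(\varphi, n)$ is a hypothetical action proposition, not part of the language defined in this work, stating that the agent actually starts the computation of $\varphi$ at time $n$ and carries it through. Making such action propositions precise is exactly the task of the envisaged framework for knowledge and action.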
To summarize, an agent's algorithmic knowledge at different moments can be related through suitable consequence relations. The concrete conditions under which such inferences may be drawn must be examined carefully. It can be expected that default logic and other mechanisms of non-monotonic reasoning will help to specify such consequence relations for algorithmic knowledge.
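As an illustration, the connection discussed above between ability and actual knowledge could be cast as a normal default (a sketch in the tentative notation, with the action proposition left implicit in the justification):
\[
\frac{K_a^t\varphi\,[n] \;:\; K_a\varphi\,[n+t]}{K_a\varphi\,[n+t]}
\]
i.e., if the agent can compute $\varphi$ within $t$ time units at time $n$, and it is consistent to assume that he has actually done so, then conclude that he explicitly knows $\varphi$ at time $n+t$.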