It is often important to know not only what an agent knows, but also what he does not know within a certain time limit. An agent's lack of knowledge may restrict his choices, so his actions can be predicted or explained accordingly. For instance, consider a rational, utility-maximizing agent that must complete a task under a certain time constraint. Suppose further that computing some plan for the task is relatively easy, but computing the optimal plan is known to be a very hard problem that cannot be solved within the given constraint. (Many optimization problems belong to this category.) It is then rational to find and execute another plan, and not to attempt to compute the optimal plan at all.
Another example may illustrate our considerations. When we use public-key cryptography to encrypt a message, we want to be sure that someone without the secret key will not be able to learn its content within a reasonable time, although he can in principle infer it from the public key. The expectation that our message cannot be quickly decrypted is based on the complexity of the reasoning required.
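This expectation can be made concrete with a back-of-the-envelope lower bound: if the only attack assumed available is exhaustive key search, the adversary's minimal decryption time is bounded below by the key-space size divided by his guessing rate. A minimal sketch, in which the key length, guessing rate, and time limit are all illustrative assumptions rather than claims about any concrete cryptosystem:

```python
# Illustrative lower bound on an adversary's decryption time,
# assuming exhaustive key search is the best available attack.

def brute_force_time_seconds(key_bits: int, guesses_per_second: float) -> float:
    """Worst-case time to try every key in a space of 2**key_bits keys."""
    return (2 ** key_bits) / guesses_per_second

# Illustrative figures: a 128-bit key against 10^12 guesses per second,
# measured against a ten-year "reasonable time" limit.
t = brute_force_time_seconds(128, 1e12)
limit = 10 * 365 * 24 * 3600  # ten years, in seconds

# The lower bound vastly exceeds the limit, so we may assume the
# adversary does not know the plaintext within that limit.
assert t > limit
```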
The absence of certain information can be deduced from the presence of other information, using some assumptions about the agents' consistency. There is, however, another method for reasoning about the lack of algorithmic knowledge. The expectation that something cannot be known within some time limit is based on the complexity of the reasoning required. We use lower complexity bounds to estimate the least amount of time that an agent would need to infer some sentence, and thus to infer what he cannot reliably know within a given time limit.
For reasoning about what agents cannot infer under some constraints, we define for each agent $a$ a partial function $f_a$ from formulas to natural numbers. Intuitively, $f_a(\varphi)$ is the minimal amount of time that $a$ needs to compute $\varphi$. Once such a function has been specified, any formula $\neg K_a^t \varphi$ with $t < f_a(\varphi)$ can be assumed, independently of the truth value of $\varphi$: even if $a$ will eventually succeed in inferring $\varphi$, the computation lasts longer than $t$.
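The rule just described can be sketched directly. Below, a dictionary plays the role of the partial function $f_a$, mapping formulas to lower bounds on agent $a$'s inference time; the function name `lacks_knowledge` and the example formulas and bounds are illustrative assumptions, not part of the formal system:

```python
# Sketch: reasoning about lack of algorithmic knowledge from lower bounds.
# f_a is modelled as a partial function, i.e. a dict from formulas
# (represented as strings) to the minimal time agent a needs to infer them.
from typing import Optional

f_a: dict = {
    "optimal_plan": 10**9,   # illustrative bound: very hard to compute
    "some_plan": 5,          # illustrative bound: easy to compute
}

def lacks_knowledge(formula: str, t: int) -> Optional[bool]:
    """Return True if we may assume agent a cannot infer `formula`
    within t time units (i.e. assume the formula "not K_a^t formula");
    return None if f_a is undefined on the formula, in which case
    nothing can be concluded."""
    bound = f_a.get(formula)
    if bound is None:
        return None          # partial function: no bound is known
    return t < bound         # the computation lasts longer than t

assert lacks_knowledge("optimal_plan", 1000) is True
assert lacks_knowledge("some_plan", 1000) is False
assert lacks_knowledge("unknown_formula", 1000) is None
```

Note that the conclusion is independent of whether the formula is in fact derivable: the lower bound alone licenses the assumption of ignorance within the time limit.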
With the ability to reason about algorithmic knowledge or the lack
thereof, agents can develop intelligent inference strategies to solve
problems under time constraints. The logics of algorithmic knowledge
can be implemented and executed directly. When an agent $a$ has to solve a problem $P$, he first checks whether $P$ belongs to a known problem class $\mathcal{C}$. If not, a ``universal problem solver'' (for any problem that can be described in the language) is activated, and $a$ can only hope to find the solution quickly. But if $P \in \mathcal{C}$, then $a$ may estimate its complexity and decide whether the optimal solution can be obtained in time or whether some heuristic is needed. Based on that information he can then choose a procedure to solve $P$. Other agents can also reason about $a$ and about the problems he has to solve, in order to explain or predict his actions accordingly.
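The strategy above, classifying the problem, estimating its complexity against the deadline, and then picking an exact or heuristic procedure, can be sketched as follows. The class registry, the complexity estimates, and the solver names are all illustrative assumptions:

```python
# Sketch of a deadline-aware solving strategy, assuming a registry of
# known problem classes with estimated exact-solution times.
from typing import Optional

def choose_procedure(problem_size: int, problem_class: Optional[str],
                     deadline: float) -> str:
    """Choose a solving procedure for a problem of the given size.

    known_classes maps a class name to an estimated time for computing
    the optimal solution, as a function of problem size (illustrative
    figures in abstract time units)."""
    known_classes = {
        "sorting": lambda n: n * n,   # e.g. a quadratic exact method
        "tsp": lambda n: 2.0 ** n,    # e.g. an exponential exact method
    }
    if problem_class not in known_classes:
        # Unknown class: activate a universal problem solver and
        # merely hope to find a solution quickly.
        return "universal_solver"
    estimated = known_classes[problem_class](problem_size)
    if estimated <= deadline:
        return "exact_solver"         # optimal solution obtainable in time
    return "heuristic_solver"         # otherwise settle for a heuristic

assert choose_procedure(100, None, 1e6) == "universal_solver"
assert choose_procedure(100, "sorting", 1e6) == "exact_solver"
assert choose_procedure(100, "tsp", 1e6) == "heuristic_solver"
```

An observer with access to the same registry and estimates can run the same computation to predict which procedure the agent will choose.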