What an intelligent agent chooses to do depends on his available
resources. These resources are typically measured in
quantitative, rather than qualitative, terms. A logic based on a
qualitative time structure can model qualitative constraints like
``agent $i$ always knows $\varphi$ before agent $j$ knows it''. It
fails, however, when the constraints placed on the resources are given
in quantitative terms. It is not possible to express, e.g., that an
agent can compute the solution to some problem within a certain time
period.
The language of dynamic-epistemic logic does not allow the operator
$\langle F_i\rangle$ (read ``after some reasoning of agent $i$'') to
occur within the scope of any knowledge operator. Consequently, the
capacity of dynamic-epistemic systems to model meta-reasoning is rather
restricted. For example, $K_i K_j \varphi$ (``agent $i$ knows that
agent $j$ knows $\varphi$'') and $\langle F_i\rangle K_i K_j \varphi$
(``after some reasoning agent $i$ will know that agent $j$ knows
$\varphi$'') are well-formed formulae, but
$K_i \langle F_j\rangle K_j \varphi$ (``agent $i$ knows that agent $j$
will know $\varphi$ after some reasoning'') and
$\langle F_i\rangle K_i \langle F_j\rangle K_j \varphi$ (``after some
reasoning agent $i$ will know that agent $j$ will know $\varphi$ after
some reasoning'') are not.
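One way to make this restriction precise (a reconstruction given here
for illustration only; the layering is our choice, not necessarily the
official definition of the language) is a two-layer grammar in which
$\langle F_i\rangle$ may be applied on top of purely epistemic
formulae but never underneath a knowledge operator:
\[
\varphi ::= p \mid \neg\varphi \mid \varphi\wedge\varphi \mid K_i\varphi,
\qquad
\Phi ::= \varphi \mid \neg\Phi \mid \Phi\wedge\Phi \mid \langle F_i\rangle\Phi .
\]
Here $K_i$ takes only $\varphi$-formulae as arguments, so
$\langle F_i\rangle K_i K_j p$ is generated while
$K_i\langle F_j\rangle K_j p$ is not.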
To be able to model resource boundedness within the language we
consider another notion of knowledge. The main intuition is the
following. An agent's action depends not only on what he
currently knows, but also on what he is able to infer within
some specific amount of time (intuitively, the time within which a
decision must be made -- a classical example being the time available
to make the next move in chess). Given that an agent needs to
accomplish a task within an hour but does not yet know what actions he
must perform, it cannot be inferred that he will not finish his job in
time: he may be able to calculate the plan and finish it before the
deadline. If an agent knows that another agent must act under some
time constraint, he may infer what the second agent can or cannot know
under this constraint and predict his action accordingly. Therefore,
we must consider what agents can reliably know within $t$ time units,
and not only what they currently know, i.e., what they know within $0$
units of time. Thus, we shall analyze sentences of the form ``if asked
about $\varphi$, agent $i$ can reliably derive the answer within $t$
time units'' instead of sentences of the form ``agent $i$ knows
$\varphi$ (now)''.
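For concreteness, such statements may be pictured as a time-indexed
knowledge operator (the notation below is introduced only for
illustration):
\[
K^t_i\varphi:\quad \text{``if asked about $\varphi$, agent $i$ can reliably derive the answer within $t$ time units''},
\]
with ordinary, current knowledge as the limiting case $K^0_i\varphi$.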
In the informal characterization of knowledge above, the qualification
``reliably'' is important. It distinguishes an agent's ability to
bring about certain states of affairs from the mere logical
possibility that such states of affairs may obtain. The difference can
be illustrated by an example. A shooter may accidentally hit a target
at a distance of 1 km, but it cannot be said that he has that ability:
it cannot be safely assumed that he will succeed if he decides to try.
He may hit the target once, but the success is not repeatable. Hence,
although it is possible that he performs a certain action, he does not
have the ability to do it. In the context of
reasoning actions, an agent may possess a large number of algorithms
which can be applied to compute knowledge. If he chooses to derive
$\varphi$ from his current knowledge, he may by chance succeed very
quickly if he applies the right algorithm. However, if he happens to
select another algorithm, it may take a very long time to compute the
same sentence; it can even be the case that the algorithm does not
terminate at all. But if it cannot be safely assumed that the agent
can compute $\varphi$ in time, then the mere possibility of knowing
$\varphi$ is generally not enough to justify his action. Reliability
implies that the agent is able to deterministically select a suitable
procedure for the input and compute the answer within finite time.
Our goal is to represent not only what agents know or can know, but also when they are expected to know what they can know. The first question is answered by specifying the logic used by agents in their reasoning, and the second by a complexity analysis. What an agent knows or can derive from his knowledge is determined by the logic he uses in his reasoning. An agent may not know a sentence now, but he may possess a procedure to infer that sentence and know it at some future point. The amount of time needed to compute that knowledge depends on several factors, of which the most important are the complexity of the sentence and the agent's reasoning power. If the complexity of a sentence and the computation speed of an agent are known, then the time he needs to infer the sentence can be estimated.
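As a rough, purely illustrative calculation (the symbols $c(\varphi)$
for the number of inference steps and $v$ for the agent's speed are
ours, not part of the formal framework), the estimate has the form
\[
t(\varphi) \approx \left\lceil \frac{c(\varphi)}{v} \right\rceil ,
\]
so that, for instance, a sentence requiring about $600$ inference
steps from an agent performing $200$ steps per time unit can be
expected to be known within $3$ time units; whether this suffices then
depends on the deadline under which the agent must act.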