Let us assume that an agent knows a formula within $t$ units of time, i.e., he needs at most $t$ time units to infer it. Then naturally he is able to do it when even more time is available. So we can take as an axiom any formula stating that whatever an agent can know within $t$ time units he can also know within $t'$ time units, where $t \le t'$. Note that this axiom does not say that knowledge is persistent in the sense that once established it will be available henceforth. The axiom does not entail that the agent will actually know the formula at the later time point. It does not even imply that the formula will eventually be known at all. In this respect the present approach makes a more realistic assumption than the persistence axiom of chapter 4.
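For the sake of illustration, let us write $K_i^t\varphi$ for the statement that agent $i$ can reliably come to know the formula $\varphi$ within $t$ time units; this notation is introduced here only to display the principles under discussion. The axiom just described then takes the form
\[
K_i^t\varphi \rightarrow K_i^{t'}\varphi \qquad \text{whenever } t \le t'.
\]
Read this way, the schema expresses monotonicity in the time bound rather than persistence of knowledge over time.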
We have remarked previously that the statement that an agent can compute a formula within $t$ time units contains more information than the statement that he can derive it at all. While the latter only says that the agent is able to derive the formula in finite time if he chooses to, the former also specifies the amount of time needed to complete that task. Thus, the implication from the former statement to the latter can be assumed as an axiom.
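Writing, again purely for illustration, $\Diamond_i\varphi$ for the statement that agent $i$ is able to establish $\varphi$ reliably in finite time if he chooses to, this axiom reads
\[
K_i^t\varphi \rightarrow \Diamond_i\varphi .
\]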
Consider a formula that is provable in the logic used by the agent. We have argued previously that it cannot be assumed that the agent knows this formula automatically (i.e., without any reasoning). However, he may know it after some course of reasoning. The interesting question is whether or not he is able to compute it reliably within finite time. That would be the case if the agent had a general strategy fulfilling the following two requirements. First, it selects for any formula of the language an algorithm to compute that formula. Second, if the formula is provable, then the selected algorithm terminates with the correct answer. I shall argue that under reasonably weak assumptions such a strategy exists and can be adopted by any intelligent agent, so that, whenever a formula is provable, the statement that the agent can compute it reliably in finite time may be safely postulated.
The set of axioms that any agent presupposes is decidable -- in the normal case even finite. Because the number of permissible inference rules is also limited, all proofs can be generated algorithmically. Hence, there is a general-purpose theorem prover that can validate any theorem in finitely many steps. If the agent's selection mechanism always returns that general algorithm for computing knowledge, then he is able to validate every theorem. That is, when presented with a theorem, he can select an algorithm which runs on that input and outputs the answer ``Yes'' after a finite number of steps. Although the described strategy (``always use the same algorithm'') satisfies the stated conditions, it may not be the best one: specific problems may be solved much more quickly by special algorithms than by a general-purpose theorem prover. Hence, the following would be a more reasonable strategy. First, the agent analyzes the query and tries to select one of his special algorithms to infer it. If no such algorithm can be found, then the general algorithm is selected. In this way, he can always find an algorithm to verify the query. (If the selection mechanism is not reliable, i.e., if it could return a wrong algorithm for some queries, then several algorithms can be selected and executed concurrently or in an interleaved fashion.)
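To make the strategy concrete, the following sketch restricts attention to classical propositional logic, for which an exhaustive truth-table check provides a (crude) general-purpose decision procedure. The sketch is purely illustrative: all names are hypothetical, and the single special algorithm merely recognizes theorems of the form ``$\varphi\rightarrow\varphi$''.
\begin{verbatim}
from itertools import product

# Illustrative sketch (hypothetical names): formulas of propositional logic
# are nested tuples, e.g. ("imp", ("var", "p"), ("var", "p")).

def eval_formula(f, valuation):
    op = f[0]
    if op == "var":
        return valuation[f[1]]
    if op == "not":
        return not eval_formula(f[1], valuation)
    if op == "and":
        return eval_formula(f[1], valuation) and eval_formula(f[2], valuation)
    if op == "or":
        return eval_formula(f[1], valuation) or eval_formula(f[2], valuation)
    if op == "imp":
        return (not eval_formula(f[1], valuation)) or eval_formula(f[2], valuation)
    raise ValueError("unknown connective: " + op)

def variables(f):
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def general_prover(f):
    # General-purpose algorithm: exhaustive truth-table check; it terminates
    # with the correct answer for every propositional formula.
    vs = sorted(variables(f))
    return all(eval_formula(f, dict(zip(vs, row)))
               for row in product([True, False], repeat=len(vs)))

def is_self_implication(f):
    return f[0] == "imp" and f[1] == f[2]

def special_self_implication(f):
    # Special algorithm: answers queries of the form  phi -> phi  immediately.
    return True

SPECIAL_ALGORITHMS = [(is_self_implication, special_self_implication)]

def select_algorithm(query):
    # First try to find a special algorithm that fits the query ...
    for matches, algorithm in SPECIAL_ALGORITHMS:
        if matches(query):
            return algorithm
    # ... otherwise fall back to the general-purpose theorem prover.
    return general_prover

def compute_knowledge(query):
    # Returns True ("Yes") after finitely many steps whenever `query` is a theorem.
    return select_algorithm(query)(query)
\end{verbatim}
A selection mechanism of this kind satisfies the two requirements stated above: it returns an algorithm for every query, and for provable queries the returned algorithm terminates with the correct answer.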
A strategy for successfully proving every provable formula can thus be acquired by rational agents. An intelligent agent may learn to use suitable algorithms to compute knowledge, and such algorithms (together with a suitable selection scheme) can be built into artificial agents. Hence, the rule allowing us to infer, from the provability of a formula, that the agent can come to know it in finite time can be assumed to be valid.
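In the illustrative notation introduced above, this rule can be displayed as
\[
\frac{\vdash \varphi}{\vdash \Diamond_i\varphi}
\]
i.e., whenever $\varphi$ is provable, the statement that agent $i$ can reliably establish $\varphi$ in finite time may be asserted as well.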
Such a statement still contains some uncertainty: it is not clear how long the agent will need to infer the theorem. Can a more definite statement be made? That is, can a natural number (which typically depends on the structure of the theorem) be determined such that knowledge of the theorem within that number of time units can be assumed as a postulate? The discussion of this question will be postponed until section 5.2.3.
Now suppose that the implication from a number of premises to a conclusion is provable, and that each of the premises can be computed reliably by an agent in finite time, i.e., the corresponding statements about the agent's abilities are regarded as true. Is it reasonable to infer that the agent can compute the conclusion reliably if he chooses to derive it? I shall argue that this inference can be justified.
When presented with a question, an agent naturally attempts to derive it from all that he knows. It is reasonable to assume that the consequence relation used by a rational agent has a certain transitivity property: if all the premises are derivable from some knowledge base and the conclusion can be inferred from the premises, then the conclusion can also be inferred from that knowledge base. Thus we can assume that the conclusion follows from all that the agent knows. Because agents are assumed to process only a limited amount of information, every consequence of their knowledge can be computed algorithmically. With a suitable selection strategy, e.g., one of the strategies outlined previously, the agent can find an algorithm to compute his knowledge successfully. Consequently, he is able to compute the conclusion after a finite number of steps. So, if the implication from the premises to the conclusion is a theorem, then the corresponding closure principle can be assumed to be valid.
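In the same illustrative notation, with $\varphi_1,\dots,\varphi_n$ standing for the premises and $\psi$ for the conclusion, the closure principle can be displayed as
\[
\frac{\vdash (\varphi_1\wedge\dots\wedge\varphi_n)\rightarrow\psi}
     {\vdash (\Diamond_i\varphi_1\wedge\dots\wedge\Diamond_i\varphi_n)\rightarrow\Diamond_i\psi}
\]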
As a special case we can assume the validity of the principle saying that the agent can use modus ponens in his reasoning: if he can derive both an implication and its antecedent, then he is also able to derive its consequent. Because explicit knowledge implies implicit knowledge, the analogous formula in which the premises are known explicitly is valid as well, provided that the underlying implication is a theorem. Thus, agents are able to compute all logical consequences of their explicit knowledge.
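For illustration, writing $K_i\varphi$ for ``agent $i$ explicitly knows $\varphi$'', these two special cases can be rendered as
\[
(\Diamond_i(\varphi\rightarrow\psi)\wedge\Diamond_i\varphi)\rightarrow\Diamond_i\psi
\qquad\text{and}\qquad
(K_i\varphi_1\wedge\dots\wedge K_i\varphi_n)\rightarrow\Diamond_i\psi ,
\]
where the second schema is assumed only for theorems $(\varphi_1\wedge\dots\wedge\varphi_n)\rightarrow\psi$.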
Recall that in chapter 4 the persistence axiom (``everything that has once been deduced will be available henceforth'') plays a prominent role in justifying the postulates stating that agents are able to use logical laws in their reasoning. In the present approach such an assumption is not necessary because we argue at a higher level of abstraction: only the final result is important, not the intermediate ones.
Since the formula in question is merely a definitional stipulation, it seems uncontroversial. If it is adopted as an axiom, the corresponding formula can be proved for all natural numbers. More interesting are variants of the consistency axiom. A weak consistency criterion is that agents do not believe apparent contradictions, i.e., their explicit knowledge is consistent. A stronger requirement is that an agent's implicit knowledge be consistent as well: the agent's explicit knowledge is free of contradictions and his inference procedures are sound, so that consistency is preserved. (The latter is indeed a stronger condition than the former because the consistency of explicit knowledge follows from the consistency of implicit knowledge.)
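In the illustrative notation, the two consistency requirements correspond to the schemas
\[
\neg(K_i\varphi\wedge K_i\neg\varphi)
\qquad\text{and}\qquad
\neg(\Diamond_i\varphi\wedge\Diamond_i\neg\varphi).
\]
Since explicit knowledge implies implicit knowledge, i.e., $K_i\varphi\rightarrow\Diamond_i\varphi$, the second schema entails the first.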
What about introspection? If an agent knows or does not know something explicitly, he only needs to reflect on himself to find that out. The cost of such reflection is usually low, so it can be assumed that self-reflection can be performed in constant time. Hence, positive and negative introspection formulae with a constant time bound seem plausible.
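With $c$ denoting a (small) constant number of time units assumed to suffice for an act of reflection, these two introspection principles can be rendered, again only for illustration, as
\[
K_i\varphi\rightarrow K_i^{c}K_i\varphi
\qquad\text{and}\qquad
\neg K_i\varphi\rightarrow K_i^{c}\neg K_i\varphi .
\]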
The situation is quite different when an agent tries to compute the truth value of a statement about his own time-bounded abilities, i.e., a statement saying that he can compute a certain formula reliably within a certain number of time units. Although he may actually compute the formula and reflect on the outcome, the result of that computation does not say much about his ability to infer the formula reliably within that time. He may succeed in computing the formula within the given number of time units, but there is still a chance that this success is only accidental and not reproducible. Conversely, even if the statement is false (i.e., the agent cannot compute the formula reliably within that time), he may still happen to infer it successfully in less than the allotted time. Consequently, it is not sound for him to conclude that he can, or that he cannot, compute the formula reliably within the given time. So, in general, neither the positive nor the negative introspection principle for time-bounded knowledge is valid; the same argument applies to the analogous formulae. Those principles can only be assumed for agents who know their own abilities well. They can, for example, be postulated for an agent who works deterministically: for such an agent, a small number of tests may suffice to determine whether he can perform a task under certain conditions.
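The point that a single success within the time bound need not be reproducible can be illustrated by a small simulation. The sketch below is purely illustrative and all names are hypothetical: it models an agent whose inference procedure involves a random choice, so that success within a given budget on one run says little about his reliable abilities.
\begin{verbatim}
import random

def attempt_inference(budget, rng):
    # One attempt to infer the answer within `budget` steps.  The procedure
    # examines candidate proofs in a random order (a hypothetical model),
    # so the number of steps it needs varies from run to run.
    steps_needed = rng.randint(1, 100)   # where the "proof" happens to sit
    return steps_needed <= budget        # success iff found within the budget

rng = random.Random(0)
budget = 30

single_run = attempt_inference(budget, rng)
success_rate = sum(attempt_inference(budget, rng) for _ in range(1000)) / 1000

print("single run succeeded:", single_run)     # may happen to be True ...
print("long-run success rate:", success_rate)  # ... yet reliability is only about 0.3
\end{verbatim}
For a deterministic agent, by contrast, the outcome of such a test would be reproducible, which is why the corresponding introspection principles may be postulated in that case.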