The lack of logical omniscience can stem from various sources. An agent may not be aware of a sentence and therefore not know it. He may be restricted in his logical capabilities and not know all the axioms and inference rules. Or he may be biased and refuse to use certain rules of inference. It is also possible that an agent does not care about the consequences of a sentence, so he does not even try to compute them. However, the most important source of non-omniscience is the agents' resource boundedness: they simply do not have enough computational resources (time, memory, etc.) to compute all the consequences of their knowledge, even when all inference rules are available. It is not difficult to supply an agent with a sound and complete deduction mechanism, especially in the context of artificial intelligent agents; such agents are not omniscient simply because they are resource bounded.
By exploiting the different sources of non-omniscience, one can model non-omniscient agents. For example, by demanding that knowledge include awareness, one can describe agents who are not logically omniscient because they are not aware of some formulae. By restricting the set of admissible inference rules, one can model agents who are unable, or refuse, to use certain inference rules. Another way to develop a model of knowledge and belief based on the resource-bounded inferential capabilities of agents is to stipulate that the agents can only compute formulae whose derivations require at most n inference steps, for some fixed value of n; a sketch of such a step-bounded reasoner is given below.
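To make the last idea concrete, here is a minimal Python sketch of a step-bounded reasoner, optionally combined with an awareness filter. It is an illustration under stated assumptions, not a construction from the literature: the class name, the encoding of rules as (premises, conclusion) pairs, and the restriction to modus ponens over Horn-style rules are all choices made for this example.

```python
# A minimal sketch, assuming a propositional agent whose only inference
# rule is modus ponens over Horn-style rules (premises => conclusion),
# and whose derivations are capped at `max_steps` inference steps.
# All names and the rule encoding are illustrative, not from the text.
from dataclasses import dataclass


@dataclass
class BoundedAgent:
    facts: set[str]                           # atomic sentences known outright
    rules: list[tuple[frozenset[str], str]]   # (premises, conclusion) pairs
    awareness: set[str] | None = None         # formulae the agent is aware of
                                              # (None means aware of everything)

    def knows(self, query: str, max_steps: int) -> bool:
        """True iff `query` is derivable in at most `max_steps` rule
        applications, never concluding a formula outside the awareness set."""
        known = set(self.facts)
        steps = 0
        changed = True
        while changed and steps < max_steps:
            changed = False
            for premises, conclusion in self.rules:
                if conclusion in known or not premises <= known:
                    continue
                # Awareness filter: an unaware agent never derives the formula.
                if self.awareness is not None and conclusion not in self.awareness:
                    continue
                known.add(conclusion)
                steps += 1
                changed = True
                if steps >= max_steps:
                    break
        return query in known


# With facts {p} and rules p -> q and q -> r, a one-step budget suffices
# for q but not for r, although r follows logically from the agent's knowledge.
agent = BoundedAgent(facts={"p"},
                     rules=[(frozenset({"p"}), "q"), (frozenset({"q"}), "r")])
assert agent.knows("q", max_steps=1)       # derivable in one step
assert not agent.knows("r", max_steps=1)   # needs two steps: budget exhausted
assert agent.knows("r", max_steps=2)
```

The same skeleton captures the other two sources of non-omniscience as well: passing an awareness set models agents who fail to know formulae they are not aware of, and pruning the rules list models agents who lack, or refuse to use, certain inference rules.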