A number of systems have been proposed which assume still more restricted reasoning capacities of the agents and in this way avoid all forms of logical omniscience. One framework that eliminates logical omniscience completely is the so-called impossible-worlds approach. Logical omniscience can be avoided if one allows ``impossible possible worlds'' in which the valuation of the sentences of the language is arbitrary. In other words, the logical laws do not hold in the ``impossible possible worlds'' ([Cre70], [Cre73], [Hin75], [Ste79], [Ran82], [Wan90]).
The intuition underlying the introduction of impossible worlds is that an agent may regard some models of the (real) world as possible, although they are logically impossible. For example, a logical contradiction cannot be true. However, an agent may not have enough resources to determine the truth value of that contradiction and may simply assume it to be true. The agent will then consider some worlds possible although, logically, they are impossible.
Because knowledge is evaluated with respect to all states and the laws of logic do not hold in some states, all forms of logical omniscience are avoided. For instance, the tautology $\varphi \lor \neg \varphi$ may be false in an impossible world, but an agent may consider that world possible, so $K(\varphi \lor \neg \varphi)$ does not hold universally. In other words, the necessitation rule is not valid. Similarly, axiom (K) (closure under material implication) fails to hold, because it is possible that in an impossible world both formulae $\varphi$ and $\varphi \rightarrow \psi$ are true while $\psi$ is false.
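The failure of closure under material implication can be made concrete in a small model. The sketch below is purely illustrative (the encoding of formulas, the world names, and all function names are assumptions, not any standard notation): one normal world evaluates formulas compositionally, while one impossible world carries an arbitrary valuation that makes both $p$ and $p \rightarrow q$ true but $q$ false. Since knowledge is truth at every world the agent considers possible, $Kp$ and $K(p \rightarrow q)$ hold while $Kq$ fails.

```python
# Hypothetical mini-model of impossible-worlds semantics.
# Atoms are strings; an implication a -> b is the tuple ("->", a, b).
P, Q = "p", "q"
IMPLIES = lambda a, b: ("->", a, b)

def eval_normal(formula, atoms_true):
    """Compositional truth at a normal (logically possible) world."""
    if isinstance(formula, str):
        return formula in atoms_true
    op, a, b = formula
    if op == "->":
        return (not eval_normal(a, atoms_true)) or eval_normal(b, atoms_true)
    raise ValueError(op)

# w0 is a normal world; w1 is an impossible world whose valuation is
# assigned formula by formula, so the logical laws need not hold there.
normal_worlds = {"w0": {"p", "q"}}
impossible_worlds = {
    "w1": {P: True, IMPLIES(P, Q): True, Q: False}  # arbitrary assignment
}
accessible = {"w0", "w1"}  # worlds the agent considers possible

def holds(formula, world):
    if world in impossible_worlds:
        return impossible_worlds[world].get(formula, False)
    return eval_normal(formula, normal_worlds[world])

def knows(formula):
    """K(formula): formula is true in every accessible world."""
    return all(holds(formula, w) for w in accessible)

print(knows(P))              # True
print(knows(IMPLIES(P, Q)))  # True
print(knows(Q))              # False: axiom (K) fails
```

The crucial design point is that truth at an impossible world is looked up, not computed, which is exactly what lets the valuation violate logical laws.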
The logic determined by the class of all impossible-worlds models is
rather uninteresting, because no genuine epistemic statement is
universally valid. Epistemic principles can be obtained by imposing
appropriate conditions on the models. For example, axiom (K) is valid if, for every impossible world, whenever the value true is assigned to both $\varphi$ and $\varphi \rightarrow \psi$, it must also be assigned to the formula $\psi$.
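The closure condition that validates axiom (K) can be checked mechanically. The sketch below (again a hypothetical encoding, with implications as `("->", a, b)` tuples) tests whether an impossible world's valuation is closed under modus ponens; the counterexample valuation from the discussion above violates the condition, and repairing it by making $q$ true satisfies it.

```python
# Hypothetical check of the closure condition on an impossible world.
# A valuation maps formulas to truth values; an implication a -> b
# is encoded as the tuple ("->", a, b).
IMPLIES = lambda a, b: ("->", a, b)

def is_closed(valuation):
    """True if, whenever a and a -> b are assigned true, so is b."""
    for formula, value in valuation.items():
        if value and isinstance(formula, tuple) and formula[0] == "->":
            _, a, b = formula
            if valuation.get(a, False) and not valuation.get(b, False):
                return False
    return True

# This valuation refutes axiom (K): p and p -> q true, q false...
bad = {"p": True, IMPLIES("p", "q"): True, "q": False}
# ...while this one satisfies the closure condition.
good = {"p": True, IMPLIES("p", "q"): True, "q": True}

print(is_closed(bad))   # False
print(is_closed(good))  # True
```

Restricting the class of models to those whose impossible worlds pass this check is one way of obtaining the validity of axiom (K) while still blocking the other forms of logical omniscience.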