Prerequisite: Wumpus World in Artificial Intelligence
In this article, we’ll use propositional logic to build wumpus-world agents. The first step is to enable the agent to deduce, as far as possible, the state of the world from its percept history. This requires writing a complete logical model of the effects of actions. We also show how the agent can keep track of the world without going back to the percept history for each inference. Finally, we show how the agent can use logical inference to construct plans that are guaranteed to achieve its goals.
Wumpus World’s Current State
A logical agent works by deducing what to do from a knowledge base of sentences about the world. The knowledge base is formed from axioms, which capture general knowledge about how the world works, combined with percept sentences gleaned from the agent’s experience in a particular world.
Understanding Axioms
We’ll start with the immutable aspects of the wumpus world and move on to the mutable aspects later. For the time being, we’ll need the following symbols for each square [x, y]:
- P_{x, y} is true if there is a pit in [x, y].
- W_{x, y} is true if there is a wumpus in [x, y], whether dead or alive.
- B_{x, y} is true if the agent perceives a breeze in [x, y].
- S_{x, y} is true if the agent perceives a stench in [x, y].
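The four symbol families above can be sketched in code. This is a minimal illustration, not part of the agent itself; the `symbols` helper and the string naming scheme are assumptions of this sketch.

```python
# Minimal sketch (illustrative only): one proposition symbol per square
# of the 4x4 wumpus world, for each of the four atemporal families.
def symbols(prefix):
    """Return {(x, y): name} for every square [x, y] of a 4x4 grid."""
    return {(x, y): f"{prefix}{x},{y}" for x in range(1, 5) for y in range(1, 5)}

P = symbols("P")  # P[x, y]: there is a pit in [x, y]
W = symbols("W")  # W[x, y]: there is a wumpus in [x, y], dead or alive
B = symbols("B")  # B[x, y]: the agent perceives a breeze in [x, y]
S = symbols("S")  # S[x, y]: the agent perceives a stench in [x, y]

print(P[(1, 2)])  # -> P1,2
```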
The sentences we write will be adequate to infer, for example, that there is no pit in [1, 2] (\neg P_{1,2}):
- There is no pit in [1, 1]: \neg P_{1,1}
- A square is breezy if and only if one of its neighbours has a pit. This must be stated for each square; for the time being, we will only add the relevant squares:
B_{1,1} \Leftrightarrow\left(P_{1,2} \vee P_{2,1}\right)
B_{2,1} \Leftrightarrow\left(P_{1,1} \vee P_{2,2} \vee P_{3,1}\right)
- The previous sentences are true in all wumpus worlds. Now we include the breeze percepts for the first two squares visited in the specific environment the agent is in:
\neg B_{1,1}
B_{2,1}
From \neg B_{1,1} and the breeze axiom for [1, 1], the agent can infer that there are no pits in the neighbouring squares: \neg P_{1,2} and \neg P_{2,1}.
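This inference can be checked mechanically by enumerating models, in the spirit of a truth-table entailment check. A small sketch follows; the `kb_holds` encoding, restricted to the four symbols involved, is an assumption of this example.

```python
from itertools import product

def kb_holds(p11, p12, p21, b11):
    """KB: no pit in [1,1]; B11 <=> (P12 or P21); no breeze in [1,1]."""
    return (not p11) and (b11 == (p12 or p21)) and (not b11)

# The KB entails "no pit in [1,2]" iff P12 is false in every model of the KB.
entails_no_pit_12 = all(
    not p12
    for p11, p12, p21, b11 in product([False, True], repeat=4)
    if kb_holds(p11, p12, p21, b11)
)
print(entails_no_pit_12)  # -> True
```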
The agent also knows that there is exactly one wumpus in the world. This is expressed in two parts. First, we must state that there is at least one wumpus:
W_{1,1} \vee W_{1,2} \vee \cdots \vee W_{4,4}
Then we must say that there is at most one wumpus. For each pair of locations, we add a sentence stating that at least one of them must be wumpus-free:
\neg W_{1,1} \vee \neg W_{1,2}
\neg W_{1,1} \vee \neg W_{1,3}
\cdots
\neg W_{4,3} \vee \neg W_{4,4}
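Generating these “exactly one wumpus” sentences is mechanical. A sketch, where the symbol-name strings are this example’s own convention:

```python
from itertools import combinations

squares = [(x, y) for x in range(1, 5) for y in range(1, 5)]
w = lambda x, y: f"W{x},{y}"

# At least one wumpus: a single 16-way disjunction.
at_least_one = " ∨ ".join(w(x, y) for x, y in squares)

# At most one wumpus: for each pair of squares, one of them is wumpus-free.
at_most_one = [f"¬{w(*a)} ∨ ¬{w(*b)}" for a, b in combinations(squares, 2)]

print(len(at_most_one))  # -> 120, i.e. C(16, 2) pairwise clauses
```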
So far, everything has gone well. Let’s look at the agent’s percepts now. If the agent perceives a stench at the current time step t, the knowledge base needs the time-stamped proposition \text { Stench }^{t} rather than the timeless Stench.
Associating propositions with time steps is an idea that can be applied to any aspect of the world that changes over time; such an aspect is called a fluent.
Through the location fluent L_{x, y}^{t}, we can directly link stench and breeze percepts to the properties of the squares where they are perceived.
For every time step t and every square [x, y], we assert:
L_{x, y}^{t} \Rightarrow\left(\text { Breeze }^{t} \Leftrightarrow B_{x, y}\right)
L_{x, y}^{t} \Rightarrow\left(\text { Stench }^{t} \Leftrightarrow S_{x, y}\right)
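These percept axioms must be asserted for every square at every time step. A sketch of the generation loop; the string rendering of the axioms is this example’s own convention:

```python
def percept_axioms(t, size=4):
    """Link time-stamped percepts to atemporal square properties,
    guarded by the location fluent L^t_{x,y}."""
    axioms = []
    for x in range(1, size + 1):
        for y in range(1, size + 1):
            axioms.append(f"L{t}_{x},{y} ⇒ (Breeze{t} ⇔ B{x},{y})")
            axioms.append(f"L{t}_{x},{y} ⇒ (Stench{t} ⇔ S{x},{y})")
    return axioms

print(len(percept_axioms(0)))  # -> 32 axioms for one time step
```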
Of course, axioms are also required to allow the agent to keep track of fluents such as L_{x, y}^{t}.
For starters, we’ll need proposition symbols for action occurrences. Like percepts, these symbols are indexed by time; for example, \text { Forward }^{0} indicates that the agent performs the Forward action at time 0. By convention, the percept for a given time step occurs first, followed by the action for that time step, and then a transition to the next time step.
To describe how the world changes, we can try writing effect axioms that specify the outcome of an action at the next time step. For example, if the agent is at [1, 1] facing east at time 0 and goes Forward, the result is that the agent is in [2, 1] and no longer in [1, 1]:
L_{1,1}^{0} \wedge \text { FacingEast }^{0} \wedge \text { Forward }^{0} \Rightarrow\left(L_{2,1}^{1} \wedge \neg L_{1,1}^{1}\right)
We would need one such sentence for each possible time step, each of the 16 squares, and each of the four orientations. We would also need similar sentences for the other actions: Grab, Shoot, Climb, TurnLeft, and TurnRight.
Assume the agent decides to move Forward at time 0 and records this fact in its knowledge base. Using the effect axiom above together with the initial assertions about the state at time 0, the agent can now derive that it is in [2, 1]: \operatorname{ASK}\left(K B, L_{2,1}^{1}\right)=\operatorname{true}. So far, everything has gone well. Unfortunately, the news elsewhere is not so good: if we \operatorname{ASK}\left(K B, \text { HaveArrow }^{1}\right), the answer is false; that is, the agent cannot prove that it still has the arrow, nor that it does not! The information has been lost because the effect axiom fails to state what remains unchanged as a result of an action. The need to do so gives rise to the frame problem. One possible answer to the frame problem is to add frame axioms explicitly asserting all the propositions that remain the same. For each time t, we would have
\text { Forward }^{t} \Rightarrow\left(\text { HaveArrow }^{t} \Leftrightarrow \text { HaveArrow }^{t+1}\right)
\text { Forward }^{t} \Rightarrow\left(\text { WumpusAlive }^{t} \Leftrightarrow \text { WumpusAlive }^{t+1}\right)
\cdots
Although the agent now knows that it still has the arrow after moving forward and that the wumpus has been neither killed nor resurrected, the proliferation of frame axioms is strikingly inefficient. In a world with m different actions and n fluents, the set of frame axioms has size O(mn). This particular manifestation of the frame problem is called the representational frame problem. The problem has historically been a significant one for AI researchers; we go over it in more detail in the chapter’s notes.
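The O(mn) blow-up can be made concrete by generating frame axioms for a toy action and fluent inventory. The effect table below (only Shoot touches these fluents) is an assumption of this sketch:

```python
actions = ["Forward", "TurnLeft", "TurnRight", "Grab", "Shoot", "Climb"]
fluents = ["HaveArrow", "WumpusAlive"]  # a real agent tracks many more

def affects(action, fluent):
    # Hypothetical effect table: only Shoot changes these two fluents.
    return action == "Shoot"

def frame_axioms(t):
    """One frame axiom per (action, fluent) pair the action leaves alone."""
    return [f"{a}{t} ⇒ ({f}{t} ⇔ {f}{t + 1})"
            for a in actions for f in fluents
            if not affects(a, f)]

print(len(frame_axioms(0)))  # 5 non-Shoot actions x 2 fluents = 10 axioms
```

With m actions and n fluents, the list grows multiplicatively, which is exactly the inefficiency the text describes.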
The representational frame problem is significant because, to put it mildly, the real world has very many fluents. Fortunately for us, each action typically changes only a small number k of those fluents: the world exhibits locality. Solving the representational frame problem requires defining the transition model with a set of axioms of size O(mk) rather than O(mn).
The difficulty can be solved by shifting one’s attention from writing axioms about actions to writing axioms about fluents. Then, for each fluent F, we will have an axiom that defines the truth value of F^{t+1} in terms of the fluents (including F itself) and the actions at time t:
F^{t+1} \Leftrightarrow \text { ActionCausesF }^{t} \vee\left(F^{t} \wedge \neg \text { ActionCausesNotF }^{t}\right)
The successor-state axiom for HaveArrow is one of the simplest. Because there is no action for reloading, the \text { ActionCausesF }^{t} part goes away and we are left with
\text { HaveArrow }^{t+1} \Leftrightarrow\left(\text { HaveArrow }^{t} \wedge \neg \text { Shoot }^{t}\right)
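This axiom can be read off directly as executable code; a minimal sketch:

```python
def have_arrow_next(have_arrow_t, shoot_t):
    """HaveArrow^{t+1} <=> HaveArrow^t and not Shoot^t (no reload action)."""
    return have_arrow_t and not shoot_t

assert have_arrow_next(True, False) is True    # keep the arrow
assert have_arrow_next(True, True) is False    # shooting uses it up
assert have_arrow_next(False, False) is False  # the arrow never comes back
```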