
Propositional Logic Hybrid Agent and Logical State

Last Updated : 28 Feb, 2022

Prerequisite: Wumpus World in Artificial Intelligence

To build a hybrid agent for the wumpus world, the ability to deduce various aspects of the state of the world can be combined quite simply with condition–action rules and problem-solving algorithms. The agent program maintains a knowledge base and a current plan. The initial knowledge base contains the atemporal axioms—those that do not depend on t, such as the axiom connecting the breeziness of a square to the presence of adjacent pits. At each time step, the new percept sentence is added, along with all the axioms that depend on t, such as the successor-state axioms. (The agent does not need axioms for future time steps, as explained in the next section.) The agent then uses logical inference, by ASKing questions of the knowledge base, to work out which squares are safe and which have yet to be visited.
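The per-step knowledge-base maintenance can be sketched as follows. This is only an illustrative outline, not a fixed API: `kb`, `make_percept_sentence`, and `successor_state_axioms` are assumed helpers standing in for whatever propositional KB machinery is actually used.

```python
# Minimal sketch of the per-step KB update described above; kb,
# make_percept_sentence, and successor_state_axioms are assumed helpers.

def update_knowledge(kb, percept, t, cave_squares):
    # Add the new percept sentence for time t.
    kb.tell(make_percept_sentence(percept, t))

    # Add the axioms that depend on t (e.g. successor-state axioms);
    # the atemporal axioms were asserted once when the KB was created.
    for axiom in successor_state_axioms(t):
        kb.tell(axiom)

    # Use logical inference to find the squares currently provably safe.
    safe = {sq for sq in cave_squares if kb.ask(f"OK_{sq[0]}_{sq[1]}^{t}")}
    return safe
```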

The main body of the agent program constructs a plan based on a decreasing priority of goals. First, if there is a glitter, the program constructs a plan to grab the gold, follow a route back to the initial location, and climb out of the cave. Otherwise, if there is no current plan, the program plans a route to the closest safe square that it has not yet visited, making sure the route goes only through safe squares. Route planning is done with A* search, not with ASK. If there are no safe squares to explore and the agent still has an arrow, the next option is to try to make a safe square by shooting at one of the possible wumpus locations. These are determined by asking where \operatorname{ASK}\left(KB, \neg W_{x, y}\right) is false—that is, where it is not known that there is no wumpus. The function PLAN-SHOT (not shown) uses PLAN-ROUTE to plan a sequence of actions that will line up this shot. If this fails, the program looks for a square to explore that is not provably unsafe—that is, a square for which \operatorname{ASK}\left(KB, \neg OK_{x, y}^{t}\right) returns false. If there is no such square, the mission is impossible, and the agent retreats to [1, 1] and climbs out of the cave.
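The priority ordering above can be written out as a condensed sketch. Again, this is an assumed structure rather than the exact program: `plan_route`, `plan_shot`, `kb.ask`, and the `agent`/`percept` fields are hypothetical names used only for illustration.

```python
# Condensed sketch of the plan-selection priorities; plan_route, plan_shot,
# kb.ask, and the agent/percept attributes are assumed helpers.

def choose_plan(kb, agent, t, percept):
    # Highest priority: glitter means the gold is here; grab it, go home, climb out.
    if percept.glitter:
        return (["Grab"]
                + plan_route(agent.pos, goals={(1, 1)}, allowed=agent.safe)
                + ["Climb"])

    # Next: head for the nearest safe, unvisited square, via safe squares only.
    unvisited_safe = agent.safe - agent.visited
    if unvisited_safe:
        return plan_route(agent.pos, goals=unvisited_safe, allowed=agent.safe)

    # Next: with an arrow left, try to create a safe square by shooting at a
    # possible wumpus location, i.e. a square where ASK(KB, ~W_xy) is false.
    if agent.has_arrow:
        possible_wumpus = {sq for sq in agent.squares
                           if not kb.ask(f"~W_{sq[0]}_{sq[1]}")}
        if possible_wumpus:
            return plan_shot(agent.pos, possible_wumpus, allowed=agent.safe)

    # Next: explore a square that is not provably unsafe, i.e. one for which
    # ASK(KB, ~OK^t_xy) returns false.
    not_unsafe = {sq for sq in agent.squares
                  if not kb.ask(f"~OK_{sq[0]}_{sq[1]}^{t}")}
    if not_unsafe - agent.visited:
        return plan_route(agent.pos, goals=not_unsafe - agent.visited,
                          allowed=not_unsafe)

    # Nothing left to try: give up, return to [1, 1], and climb out.
    return plan_route(agent.pos, goals={(1, 1)}, allowed=agent.safe) + ["Climb"]
```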

Logical State

The agent program works quite well, but it has one fundamental weakness: the computational cost of calls to ASK keeps growing over time. This happens mainly because the required inferences have to reach further and further back in time and involve more and more proposition symbols. Obviously, this is unsustainable—we cannot have an agent whose time to process each percept grows in proportion to the length of its life! What we really need is a constant update time—that is, one that is independent of t. The obvious answer is to save, or cache, the results of inference, so that the inference process at the next time step can build on the results of earlier steps rather than starting over from scratch. The past history of percepts and all their ramifications can be replaced by the belief state—that is, some representation of the set of all possible current states of the world.
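The caching idea amounts to the following loop, shown here only as a sketch: `initial_belief_state`, `percepts`, and `estimate_next_belief_state` (a hypothetical state-estimation step, outlined further below) are assumed names, not part of the text.

```python
# Sketch of caching via a belief state; all names are assumed helpers.
belief_state = initial_belief_state()   # atemporal axioms plus the initial facts

for t, percept in enumerate(percepts):
    # Build on the cached belief state instead of re-deriving everything from
    # the full percept history, so the update cost does not depend on t.
    belief_state = estimate_next_belief_state(belief_state, percept, t)
```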

State estimation is the process of updating the belief state as new percepts arrive. Instead of an explicit list of states as in Section 4.4, we can use a logical sentence involving the proposition symbols associated with the current time step, together with the atemporal symbols. For example, the logical sentence

\text{WumpusAlive}^{1} \wedge L_{2,1}^{1} \wedge B_{2,1} \wedge \left(P_{3,1} \vee P_{2,2}\right)

describes the set of all situations at time 1 in which the wumpus is alive, the agent is at [2, 1], that square is breezy, and there is a pit in [3, 1] or [2, 2], or both.
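To make "the set of all situations" concrete, here is a small, runnable enumeration of the truth assignments consistent with the example sentence. The symbol names (`P31`, `P22`) are abbreviations chosen for illustration; the symbols fixed to true by the sentence are left implicit.

```python
# Enumerate the assignments to the two pit symbols allowed by the example
# sentence; WumpusAlive^1, L^1_{2,1}, and B_{2,1} are all fixed to true.
from itertools import product

models = []
for p31, p22 in product([False, True], repeat=2):
    if p31 or p22:                      # the disjunct P_{3,1} v P_{2,2}
        models.append({"P31": p31, "P22": p22})

print(models)
# [{'P31': False, 'P22': True}, {'P31': True, 'P22': False}, {'P31': True, 'P22': True}]
```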

It turns out that maintaining an exact belief state as a logical formula is not straightforward. If there are n fluent symbols for time t, there are 2^{n} possible physical states—that is, assignments of truth values to those symbols. The set of belief states is the powerset (the set of all subsets) of the set of physical states, so there are 2^{2^{n}} belief states. Even if we used the most compact possible encoding of logical formulas, with each belief state represented by a unique binary number, we would need numbers with \log_{2}\left(2^{2^{n}}\right)=2^{n} bits to label the current belief state. In other words, exact state estimation may require logical formulas whose size is exponential in the number of symbols.
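Plugging in a small, illustrative value of n makes the counting argument concrete (n = 4 is an arbitrary choice for the example):

```python
# Worked numbers for the counting argument above, with an illustrative n.
n = 4                                   # fluent symbols at time t
physical_states = 2 ** n                # 2^n = 16 truth assignments
belief_states = 2 ** physical_states    # 2^(2^n) = 65,536 subsets of physical states
bits_needed = physical_states           # log2(2^(2^n)) = 2^n = 16 bits per belief state
print(physical_states, belief_states, bits_needed)   # 16 65536 16
```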

A very common and natural scheme for approximate state estimation is to represent belief states as conjunctions of literals, that is, 1-CNF formulas. Given the belief state at t - 1, the agent program simply tries to prove X^{t} and \neg X^{t} for each symbol X^{t} (as well as for each atemporal symbol whose truth value is not yet known). The conjunction of the provable literals becomes the new belief state, and the previous belief state at t - 1 is discarded.
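The 1-CNF update step can be sketched as below. This is a schematic outline under assumed helpers: `make_step_kb`, `kb.ask`, and the `symbols` argument are hypothetical names, and literals are represented as plain strings for simplicity.

```python
# Sketch of 1-CNF state estimation; make_step_kb and kb.ask are assumed helpers.

def estimate_next_belief_state(prev_belief_state, percept, t, symbols):
    # Assemble a temporary KB from the previous 1-CNF belief state, the new
    # percept sentence, and the axioms for time step t.
    kb = make_step_kb(prev_belief_state, percept, t)

    literals = []
    # symbols: fluent symbols for time t plus atemporal symbols still unknown
    for x in symbols:
        if kb.ask(x):            # X^t is provable: keep the positive literal
            literals.append(x)
        elif kb.ask(f"~{x}"):    # ~X^t is provable: keep the negative literal
            literals.append(f"~{x}")
        # otherwise neither is provable and x drops out of the belief state

    # The new belief state is the conjunction of these literals; the belief
    # state at t - 1 is discarded.
    return literals
```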

It is important to keep in mind that this scheme may lose some information as time goes along. For example, if the sentence above were the true belief state, then neither P_{3,1} nor P_{2,2} would be provable individually, and neither would appear in the 1-CNF belief state. On the other hand, because every literal in the 1-CNF belief state is proved from the previous belief state, and the initial belief state is a true assertion, we know that the whole 1-CNF belief state must be true. Thus the set of possible states represented by the 1-CNF belief state includes all states that are in fact possible given the full percept history. The 1-CNF belief state acts as a simple outer envelope, or conservative approximation, around the exact belief state. This idea of conservative approximations to complicated sets is a recurring theme in many areas of AI.
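Continuing the illustrative enumeration from earlier, the outer-envelope property can be checked directly. As before, `P31` and `P22` are shorthand symbol names introduced only for this example.

```python
from itertools import product

# The 1-CNF approximation keeps neither P_{3,1} nor ~P_{3,1} (likewise for
# P_{2,2}), so it says nothing about the pits and admits all four assignments.
one_cnf_models = [{"P31": p31, "P22": p22}
                  for p31, p22 in product([False, True], repeat=2)]

# The exact belief state also requires P_{3,1} v P_{2,2}: only three assignments.
exact_models = [m for m in one_cnf_models if m["P31"] or m["P22"]]

# Conservative approximation: every exactly-possible state is still admitted.
assert all(m in one_cnf_models for m in exact_models)
print(len(exact_models), len(one_cnf_models))   # 3 4
```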


