Proofs and Inferences in Proving Propositional Theorem

  • Last Updated : 28 Feb, 2022

This article discusses how inference rules can be used to construct a proof: a chain of conclusions that leads to the desired goal. The best-known rule is Modus Ponens (Latin for "mode that affirms"), written as

\frac{\alpha \Rightarrow \beta, \quad \alpha}{\beta}


The notation means that whenever sentences of the form \alpha \Rightarrow \beta    and \alpha    are given, the sentence \beta    can be inferred. For example, if \text{(WumpusAhead} \wedge \text{WumpusAlive)} \Rightarrow \text{Shoot}    and \text{(WumpusAhead} \wedge \text{WumpusAlive)}    are both given, then Shoot can be inferred.
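As an illustration (not part of the original article), Modus Ponens can be sketched in Python with sentences represented as opaque strings and implications as premise/conclusion pairs; this representation is an assumption chosen for brevity:

```python
def modus_ponens(implications, facts):
    """Apply Modus Ponens: from (premise => conclusion) and premise, infer conclusion."""
    return {conclusion for premise, conclusion in implications if premise in facts}

# (WumpusAhead ∧ WumpusAlive) ⇒ Shoot, with the premise known to hold:
implications = [("WumpusAhead ∧ WumpusAlive", "Shoot")]
facts = {"WumpusAhead ∧ WumpusAlive"}
print(modus_ponens(implications, facts))  # {'Shoot'}
```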

And-Elimination is another helpful inference rule, which states that any of the conjuncts can be inferred from conjunction:

\frac{\alpha \wedge \beta}{\alpha}

For example, WumpusAlive can be inferred from \text{(WumpusAhead} \wedge \text{WumpusAlive)}   . By considering the possible truth values of \alpha    and \beta   , one can easily show once and for all that Modus Ponens and And-Elimination are sound. These rules can then be applied in any case where they match, yielding sound inferences without the need to enumerate models.
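The truth-table argument for soundness can be checked mechanically. The sketch below (plain Python, an illustrative cross-check rather than anything from the article) enumerates all truth assignments and confirms the conclusion holds in every model where the premises hold:

```python
from itertools import product

def implies(a, b):
    """Material implication: a => b is false only when a is true and b is false."""
    return (not a) or b

# Modus Ponens is sound: whenever (a => b) and a are both true, b is true.
mp_sound = all(b for a, b in product([True, False], repeat=2)
               if implies(a, b) and a)

# And-Elimination is sound: whenever (a and b) is true, a is true.
ae_sound = all(a for a, b in product([True, False], repeat=2)
               if a and b)

print(mp_sound, ae_sound)  # True True
```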

\begin{aligned} (\alpha \wedge \beta) & \equiv(\beta \wedge \alpha) \quad \text { commutativity of } \wedge \\ (\alpha \vee \beta) & \equiv(\beta \vee \alpha) \quad \text { commutativity of } \vee \\ ((\alpha \wedge \beta) \wedge \gamma) & \equiv(\alpha \wedge(\beta \wedge \gamma)) \quad \text { associativity of } \wedge \\ ((\alpha \vee \beta) \vee \gamma) & \equiv(\alpha \vee(\beta \vee \gamma)) \quad \text { associativity of } \vee \\ \neg(\neg \alpha) & \equiv \alpha \quad \text { double-negation elimination } \\ (\alpha \Rightarrow \beta) & \equiv(\neg \beta \Rightarrow \neg \alpha) \quad \text { contraposition } \\ (\alpha \Rightarrow \beta) & \equiv(\neg \alpha \vee \beta) \quad \text { implication elimination } \\ (\alpha \Leftrightarrow \beta) & \equiv((\alpha \Rightarrow \beta) \wedge(\beta \Rightarrow \alpha)) \quad \text { biconditional elimination } \\ \neg(\alpha \wedge \beta) & \equiv(\neg \alpha \vee \neg \beta) \quad \text { De Morgan } \\ \neg(\alpha \vee \beta) & \equiv(\neg \alpha \wedge \neg \beta) \quad \text { De Morgan } \\ (\alpha \wedge(\beta \vee \gamma)) & \equiv((\alpha \wedge \beta) \vee(\alpha \wedge \gamma)) \quad \text { distributivity of } \wedge \text { over } \vee \\ (\alpha \vee(\beta \wedge \gamma)) & \equiv((\alpha \vee \beta) \wedge(\alpha \vee \gamma)) \quad \text { distributivity of } \vee \text { over } \wedge \end{aligned}

The equations above list standard logical equivalences, each of which can be used as an inference rule. For example, the equivalence for biconditional elimination yields the two inference rules
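Each equivalence can likewise be verified by enumerating models. Below is a minimal sketch, with formulas represented as Python lambdas (an illustrative encoding, not from the article):

```python
from itertools import product

def equivalent(f, g, nvars):
    """Two formulas are logically equivalent iff they agree in every model."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=nvars))

# De Morgan: ¬(α ∧ β) ≡ (¬α ∨ ¬β)
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))  # True

# Contraposition: (α ⇒ β) ≡ (¬β ⇒ ¬α), written via implication elimination
print(equivalent(lambda a, b: (not a) or b,
                 lambda a, b: (not (not b)) or (not a), 2))  # True
```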

\frac{\alpha \Leftrightarrow \beta}{(\alpha \Rightarrow \beta) \wedge(\beta \Rightarrow \alpha)} \quad \text { and } \quad \frac{(\alpha \Rightarrow \beta) \wedge(\beta \Rightarrow \alpha)}{\alpha \Leftrightarrow \beta}

Some inference rules work in only one direction. For example, we cannot run Modus Ponens in reverse to obtain \alpha \Rightarrow \beta    and \alpha    from \beta   .

Let’s look at how these equivalences and inference rules can be applied in the wumpus environment. We start with a knowledge base containing R1 through R5, where R_{2} \text { is } B_{1,1} \Leftrightarrow\left(P_{1,2} \vee P_{2,1}\right)    and R_{4} \text { is } \neg B_{1,1}   , and show how to prove \neg P_{1,2}   , i.e., that there is no pit in [1,2]. First, we apply biconditional elimination to R2 to obtain R6:

R_{6}: \quad\left(B_{1,1} \Rightarrow\left(P_{1,2} \vee P_{2,1}\right)\right) \wedge\left(\left(P_{1,2} \vee P_{2,1}\right) \Rightarrow B_{1,1}\right)

Next, we apply And-Elimination to R6 to obtain R_{7}: \quad\left(\left(P_{1,2} \vee P_{2,1}\right) \Rightarrow B_{1,1}\right)

Logical equivalence for contraposition then yields R_{8}: \quad\left(\neg B_{1,1} \Rightarrow \neg\left(P_{1,2} \vee P_{2,1}\right)\right)

With R8 and the percept R_{4} \text { (i.e., } \neg B_{1,1} \text { ) }  , we can now apply Modus Ponens to get R_{9}: \quad \neg\left(P_{1,2} \vee P_{2,1}\right)  .

Finally, we use De Morgan’s rule to arrive at the following conclusion: R_{10}: \quad \neg P_{1,2} \wedge \neg P_{2,1}

That is, neither [1,2] nor [2,1] contains a pit.
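As a cross-check of the derivation R6 through R10, we can brute-force the entailment directly: enumerate every model over the three symbols involved and confirm that R2 together with the percept R4 rules out a pit in either square. This Python sketch is an illustration added here, not part of the original proof:

```python
from itertools import product

# R2: B_{1,1} <=> (P_{1,2} v P_{2,1});  R4: ¬B_{1,1}
entailed = True
for b11, p12, p21 in product([True, False], repeat=3):
    r2 = (b11 == (p12 or p21))      # biconditional as Boolean equality
    r4 = not b11
    if r2 and r4 and (p12 or p21):  # a model of the KB where a pit remains?
        entailed = False

print(entailed)  # True: ¬P_{1,2} ∧ ¬P_{2,1} holds in every model of the KB
```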

We found this proof by hand, but any of the standard search algorithms can be used to find a sequence of steps that constitutes a proof. All we have to do is define a proof problem:

  • Initial State: the initial knowledge base.
  • Actions: the set of actions consists of all the inference rules applied to all the sentences that match the top half of the inference rule.
  • Result: the result of an action is to add the sentence in the bottom half of the inference rule to the knowledge base.
  • Goal: the goal is a state that contains the sentence we are trying to prove.

Thus, searching for proofs is a viable alternative to enumerating models.
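The proof problem above can be sketched as a small forward search. In this illustrative Python formulation (the state and rule representations are assumptions, not from the article), states are sets of sentences, actions apply an inference rule to matching sentences, and the goal test checks whether the target sentence has been derived:

```python
from collections import deque

def proof_search(initial, rules, goal):
    """Breadth-first search over knowledge-base states; each rule yields
    sentences derivable from the current state (the 'actions')."""
    frontier = deque([frozenset(initial)])
    seen = set()
    while frontier:
        kb = frontier.popleft()
        if goal in kb:                 # goal test
            return True
        if kb in seen:
            continue
        seen.add(kb)
        for rule in rules:
            for sentence in rule(kb):  # result: add the derived sentence
                frontier.append(kb | {sentence})
    return False

def mp_rule(kb):
    """Modus Ponens over implications stored as ('=>', premise, conclusion) tuples."""
    for s in kb:
        if isinstance(s, tuple) and s[0] == '=>' and s[1] in kb:
            yield s[2]

kb0 = {('=>', 'A', 'B'), ('=>', 'B', 'C'), 'A'}
print(proof_search(kb0, [mp_rule], 'C'))  # True
```

With more rules (And-Elimination, contraposition, and so on) the same search would reconstruct the R6-R10 derivation above.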
