
Resolution Completeness and clauses in Artificial Intelligence

Last Updated : 06 Feb, 2024


To wrap up our resolution topic, we will try to understand why PL-RESOLUTION is complete. To do so, we introduce the resolution closure RC(S) of a set of clauses S, which is the set of all clauses derivable by repeated application of the resolution rule to clauses in S or their derivatives. The resolution closure is the final value of the variable clauses computed by PL-RESOLUTION. Because only finitely many distinct clauses can be constructed from the symbols P_{1}, \ldots, P_{k} that appear in S, RC(S) must be finite. (Note that this would not be true without the factoring step, which removes multiple copies of a literal.) As a result, PL-RESOLUTION always terminates.
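To make the closure computation concrete, here is a minimal Python sketch (not part of the original article) that represents a clause as a frozenset of string literals such as "P" and "~P"; the names resolve and resolution_closure are purely illustrative.

```python
# A minimal sketch of computing the resolution closure RC(S) of a set of
# propositional clauses. A clause is a frozenset of literals; a literal is a
# string such as "P" or its negation "~P". Names here are illustrative.
from itertools import combinations


def negate(literal):
    """Return the complementary literal, e.g. 'P' <-> '~P'."""
    return literal[1:] if literal.startswith("~") else "~" + literal


def resolve(ci, cj):
    """Return every clause obtainable by resolving ci with cj on a pair of
    complementary literals. Building each resolvent as a set performs the
    factoring step: duplicate copies of a literal collapse automatically."""
    resolvents = set()
    for lit in ci:
        if negate(lit) in cj:
            resolvents.add(frozenset((ci - {lit}) | (cj - {negate(lit)})))
    return resolvents


def resolution_closure(clauses):
    """Repeatedly apply the resolution rule until no new clause appears.
    Only finitely many clauses can be built from the symbols in S, so this
    loop always terminates (the PL-RESOLUTION termination argument)."""
    closure = set(frozenset(c) for c in clauses)
    while True:
        new = set()
        for ci, cj in combinations(closure, 2):
            new |= resolve(ci, cj)
        if new <= closure:          # nothing new was derived
            return closure
        closure |= new
```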

The ground resolution theorem is the completeness theorem for resolution in propositional logic: if a set of clauses is unsatisfiable, then the resolution closure of those clauses contains the empty clause.
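As a quick check of the theorem using the sketch above, an unsatisfiable set such as \{P\}, \{\neg P\} produces the empty clause in its closure, while a satisfiable set does not:

```python
# {P, ~P} is unsatisfiable, so its closure contains the empty clause.
S = [{"P"}, {"~P"}]
print(frozenset() in resolution_closure(S))               # True

# A satisfiable set, e.g. {P v Q}, never produces the empty clause.
print(frozenset() in resolution_closure([{"P", "Q"}]))    # False
```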

We prove the contrapositive of this theorem: if the closure RC(S) does not contain the empty clause, then S is satisfiable. In fact, we can construct a model for S with suitable truth values for P_{1}, \ldots, P_{k}. The construction procedure is as follows: for i from 1 to k, assign false to P_{i} if some clause in RC(S) contains the literal \neg P_{i} and all of its other literals are false under the assignment chosen for P_{1}, \ldots, P_{i-1}; otherwise, assign true to P_{i}.

We claim that this assignment to P_{1}, \ldots, P_{k} is a model of S. To see why, assume the opposite: that at some stage i in the sequence, assigning symbol P_{i} causes some clause C to become false. For this to happen, all the other literals in C must already have been falsified by the assignments to P_{1}, \ldots, P_{i-1}. Thus C must now look like either (\text{false} \vee \text{false} \vee \cdots \vee \text{false} \vee P_{i}) or (\text{false} \vee \text{false} \vee \cdots \vee \text{false} \vee \neg P_{i}). If only one of these two clauses is in RC(S), the algorithm would have assigned the appropriate truth value to P_{i} to make C true, so C can be falsified only if both clauses are in RC(S). Now, because RC(S) is closed under resolution, it also contains the resolvent of these two clauses, and that resolvent has all of its literals already falsified by the assignments to P_{1}, \ldots, P_{i-1}. This contradicts our assumption that the first falsified clause appears at stage i. Hence the construction never falsifies a clause in RC(S); that is, it produces a model of RC(S), and therefore a model of S (since S is contained in RC(S)).
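The construction described above can also be sketched in code. The helper below (illustrative, using the same clause representation as before) assigns each symbol in order, setting it false only when some clause in the closure forces that choice:

```python
def build_model(closure, symbols):
    """Sketch of the model construction used in the completeness proof.
    'closure' is RC(S) (a set of frozensets of literals) containing no empty
    clause; 'symbols' lists P_1, ..., P_k in the chosen order."""

    def value(literal, assignment):
        # Truth value of a literal under a (possibly partial) assignment;
        # returns None if its symbol has not been assigned yet.
        sym = literal.lstrip("~")
        if sym not in assignment:
            return None
        val = assignment[sym]
        return (not val) if literal.startswith("~") else val

    assignment = {}
    for p in symbols:
        # Assign false to p only if some clause contains ~p and all of its
        # other literals are already false under the partial assignment.
        force_false = any(
            ("~" + p) in clause
            and all(value(l, assignment) is False for l in clause if l != "~" + p)
            for clause in closure
        )
        assignment[p] = not force_false
    return assignment
```

For example, build_model(resolution_closure([{"P", "Q"}, {"~P", "Q"}]), ["P", "Q"]) returns an assignment that makes both clauses true.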

Definite and Horn Clauses

The completeness of resolution makes it a highly important inference method. In many practical situations, however, the full power of resolution is not needed. Some real-world knowledge bases satisfy certain restrictions on the form of the sentences they contain, which allows them to use a more restricted and efficient inference algorithm.

One such restricted form is the definite clause, a disjunction of literals of which exactly one is positive. For example, \left(\neg L_{1,1} \vee \neg \text { Breeze } \vee B_{1,1}\right) is a definite clause, but \left(\neg B_{1,1} \vee P_{1,2} \vee P_{2,1}\right) is not.

Slightly more general is the Horn clause, a disjunction of literals of which at most one is positive. All definite clauses are Horn clauses, as are clauses with no positive literals, which are called goal clauses. Horn clauses are closed under resolution: if you resolve two Horn clauses, you get back a Horn clause.
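Using the same string-literal representation as before, the distinction between definite, goal, and Horn clauses reduces to counting positive literals; the following checks are a small illustrative sketch:

```python
def positives(clause):
    """Positive (non-negated) literals of a clause given as a set of strings."""
    return [lit for lit in clause if not lit.startswith("~")]


def is_definite(clause):
    """Definite clause: exactly one positive literal."""
    return len(positives(clause)) == 1


def is_goal(clause):
    """Goal clause: no positive literals."""
    return len(positives(clause)) == 0


def is_horn(clause):
    """Horn clause: at most one positive literal."""
    return len(positives(clause)) <= 1


# Examples from the text (B11 = "breezy in [1,1]", L11 = "agent in [1,1]"):
print(is_definite({"~L11", "~Breeze", "B11"}))   # True  - definite clause
print(is_definite({"~B11", "P12", "P21"}))       # False - two positive literals
print(is_horn({"~B11", "~P12"}))                 # True  - goal clause, hence Horn
```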

\begin{aligned} \text { CNFSentence } & \rightarrow \text { Clause }_{1} \wedge \cdots \wedge \text { Clause }_{n} \\ \text { Clause } & \rightarrow \text { Literal }_{1} \vee \cdots \vee \text { Literal }_{m} \\ \text { Literal } & \rightarrow \text { Symbol } \mid \neg \text { Symbol } \\ \text { Symbol } & \rightarrow P|Q| R \mid \cdots \\ \text { HornClauseForm } & \rightarrow \text { DefiniteClauseForm } \mid \text { GoalClauseForm } \\ \text { DefiniteClauseForm } & \rightarrow\left(\text { Symbol }_{1} \wedge \cdots \wedge \text { Symbol }_{l}\right) \Rightarrow \text { Symbol } \\ \text { GoalClauseForm } & \rightarrow\left(\text { Symbol }_{1} \wedge \cdots \wedge \text { Symbol }_{l}\right) \Rightarrow \text { False } \end{aligned}

This grammar covers conjunctive normal form, Horn clauses, and definite clauses. A clause such as A \wedge B \Rightarrow C is still a definite clause when it is written as \neg A \vee \neg B \vee C, but only the former is regarded as the canonical form for definite clauses. Another class is the k \text {-CNF } sentence, which is a CNF sentence with at most k literals per clause.
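To make the correspondence between the two forms concrete, this small sketch (illustrative names, same clause representation as above) rewrites a definite clause given as a disjunction into its implication form:

```python
def to_implication(clause):
    """Rewrite a definite clause (exactly one positive literal) in the
    canonical implication form: (body symbols, head symbol)."""
    head = [lit for lit in clause if not lit.startswith("~")]
    assert len(head) == 1, "not a definite clause"
    body = [lit[1:] for lit in clause if lit.startswith("~")]
    return body, head[0]


body, head = to_implication({"~L11", "~Breeze", "B11"})
print(" ^ ".join(sorted(body)), "=>", head)   # Breeze ^ L11 => B11
```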

Knowledge bases containing only definite clauses are interesting for three reasons:

  1. Every definite clause can be written as an implication whose premise is a conjunction of positive literals and whose conclusion is a single positive literal. For example, the definite clause \left(\neg L_{1,1} \vee \neg \text { Breeze } \vee B_{1,1}\right) can be written as the implication \left(L_{1,1} \wedge \text { Breeze }\right) \Rightarrow B_{1,1}. The sentence is easier to understand in implication form: if the agent is in [1,1] and there is a breeze, then [1,1] is breezy. In Horn form, the premise is called the body and the conclusion is called the head. A sentence consisting of a single positive literal, such as L_{1,1}, is called a fact. It too can be written in implication form as \text { True } \Rightarrow L_{1,1}, but it is simpler to write just L_{1,1}.
  2. Inference with Horn clauses can be done through the forward-chaining and backward-chaining algorithms. Both algorithms are natural, in the sense that the inference steps are obvious and easy for humans to follow. This kind of inference is the basis for logic programming.
  3. Deciding entailment with Horn clauses can be done in time linear in the size of the knowledge base, a pleasant surprise (a forward-chaining sketch follows this list).
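The following is a compact, illustrative sketch of forward chaining over definite clauses written in implication form (the function name pl_fc_entails and the knowledge-base encoding are assumptions, not the article's own code); the premise counters are what make the running time linear in the size of the knowledge base:

```python
from collections import deque


def pl_fc_entails(kb, query):
    """Forward chaining over definite clauses.
    kb: list of (premises, conclusion) pairs; facts have an empty premise list.
    Runs in time linear in the total size of the knowledge base because each
    clause's premise counter is decremented at most once per premise literal."""
    count = {i: len(premises) for i, (premises, _) in enumerate(kb)}
    inferred = set()
    # Index clauses by the symbols appearing in their premises.
    uses = {}
    for i, (premises, _) in enumerate(kb):
        for p in premises:
            uses.setdefault(p, []).append(i)
    # The agenda starts with the known facts (clauses with no premises).
    agenda = deque(conclusion for premises, conclusion in kb if not premises)

    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i in uses.get(p, []):
            count[i] -= 1
            if count[i] == 0:
                agenda.append(kb[i][1])
    return False


# Example: facts A, B and rules (A ^ B) => L, (L) => Q entail Q.
kb = [([], "A"), ([], "B"), (["A", "B"], "L"), (["L"], "Q")]
print(pl_fc_entails(kb, "Q"))   # True
print(pl_fc_entails(kb, "P"))   # False
```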

