
Rule-Based Classifier – Machine Learning

Rule-based classifiers are classifiers that make the class decision using a set of "if...else" rules. Because these rules are easily interpretable, such classifiers are commonly used to build descriptive models. The condition in the "if" part of a rule is called the antecedent, and the class the rule predicts is called the consequent.
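For instance, here is a minimal Python sketch of a classifier built from such rules; the attributes, values and rules below are made up purely for illustration:

def classify(record):
    # Each "if" condition is an antecedent; the class it returns is the consequent.
    if record["odour"] == "pungent":
        return "poisonous"
    elif record["bruises"] == "yes" and record["odour"] == "almond":
        return "edible"
    else:
        # Default rule with an empty antecedent.
        return "edible"

print(classify({"odour": "pungent", "bruises": "yes"}))   # prints: poisonous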

Properties of rule-based classifiers:

- The rules need not be mutually exclusive, i.e. more than one rule can cover the same record.
- The rules need not be exhaustive, i.e. some records may not be covered by any rule (such records are usually handled by a default rule).

An obvious question, once we know that the rules are not mutually exclusive, is how the class should be decided when several rules with different consequents cover the same record.

There are two solutions to the above problem:

- Ordered rules: the rules are arranged in decreasing order of priority, and the record is assigned the class of the first (highest-priority) rule that covers it.
- Unordered rules with voting: every rule that covers the record casts a vote, possibly weighted, for its class, and the class that collects the most votes is assigned.

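Here is a small Python sketch of both strategies, assuming each rule is stored as a (condition, class, weight) triple; the rules and weights are invented for illustration:

rules = [
    (lambda r: r["odour"] == "pungent", "poisonous", 2.0),
    (lambda r: r["bruises"] == "yes",   "edible",    1.0),
]

def classify_ordered(record, default="edible"):
    # Ordered rules: the first rule that covers the record decides the class.
    for covers, label, _ in rules:
        if covers(record):
            return label
    return default

def classify_voting(record, default="edible"):
    # Unordered rules: every covering rule casts a weighted vote for its class.
    votes = {}
    for covers, label, weight in rules:
        if covers(record):
            votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get) if votes else default

record = {"odour": "pungent", "bruises": "yes"}    # covered by both rules
print(classify_ordered(record))                    # poisonous (first match wins)
print(classify_voting(record))                     # poisonous (vote 2.0 beats 1.0)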
Example:
Below is a dataset used to classify mushrooms as edible or poisonous:

Class | Cap Shape | Cap Surface | Bruises | Odour | Stalk Shape | Population | Habitat
edible | flat | scaly | yes | anise | tapering | scattered | grasses
poisonous | convex | scaly | yes | pungent | enlargening | several | grasses
edible | convex | smooth | yes | almond | enlargening | numerous | grasses
edible | convex | scaly | yes | almond | tapering | scattered | meadows
edible | flat | fibrous | yes | anise | enlargening | several | woods
edible | flat | fibrous | no | none | enlargening | several | urban
poisonous | conical | scaly | yes | pungent | enlargening | scattered | urban
edible | flat | smooth | yes | anise | enlargening | numerous | meadows
poisonous | convex | smooth | yes | pungent | enlargening | several | urban
Rules: 
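As an illustration (a sketch only: these rules are consistent with the dataset above, but not necessarily the exact rules of the original example):

(Odour = pungent) -> poisonous
(Odour = almond) -> edible
(Odour = anise) -> edible
(Population = numerous) -> edible
(Bruises = no) -> edible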
The given rules are not mutually exclusive, i.e. the same record can be covered by more than one rule.

How to generate a rule:

Sequential Rule Generation.

Rules can be generated using either a general-to-specific approach or a specific-to-general approach. In the general-to-specific approach, we start with a rule that has an empty antecedent and keep adding conditions to it as long as the chosen evaluation metric improves. In the specific-to-general approach, we start with a rule that covers a very specific case (for example, a single record) and keep removing conditions to generalise it. The evaluation metric can be accuracy, information gain, the likelihood ratio, etc.
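Below is a rough Python sketch of the general-to-specific approach, growing a single rule greedily with rule accuracy as the evaluation metric; the function names and record layout (dictionaries with a "class" key) are assumptions made for illustration:

def rule_accuracy(conditions, records, target_class):
    # Fraction of the records covered by the antecedent that belong to the target class.
    covered = [r for r in records if all(r[a] == v for a, v in conditions.items())]
    if not covered:
        return 0.0
    return sum(r["class"] == target_class for r in covered) / len(covered)

def grow_rule(records, attributes, target_class):
    conditions = {}        # start with an empty antecedent (covers every record)
    best_score = rule_accuracy(conditions, records, target_class)
    while True:
        best_candidate = None
        for attr in attributes:
            if attr in conditions:
                continue
            for value in {r[attr] for r in records}:
                candidate = dict(conditions, **{attr: value})
                score = rule_accuracy(candidate, records, target_class)
                if score > best_score:
                    best_score, best_candidate = score, candidate
        if best_candidate is None:
            return conditions          # no condition improves the metric: stop
        conditions = best_candidate    # keep the best new condition and continue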

Algorithm for generating the model incrementally:

The algorithm given below generates a model with unordered rules and ordered classes, i.e. we can decide which class to give priority to while generating the rules.


A  <- set of attributes
T  <- set of training records
Y  <- set of classes
Y' <- Y ordered according to relevance
R  <- set of generated rules, initialized to an empty list

for each class y in Y'
    while the majority of class y records are not covered
        generate a new rule for class y, using the methods given above
        add this rule to R
        remove the records covered by this rule from T
    end while
end for
add the rule {} -> y', where y' is the default class, to R
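A compact Python sketch of this sequential covering loop is given below; a very simple single-condition rule generator stands in for the rule-growing methods described above, and the records are assumed to be dictionaries with a "class" key (an illustrative layout):

def best_single_condition_rule(records, attributes, y):
    # Pick the single attribute = value test with the highest accuracy for class y.
    best, best_acc = None, -1.0
    for attr in attributes:
        for value in {r[attr] for r in records}:
            covered = [r for r in records if r[attr] == value]
            acc = sum(r["class"] == y for r in covered) / len(covered)
            if acc > best_acc:
                best, best_acc = (attr, value, y), acc
    return best

def sequential_covering(records, attributes, ordered_classes, default_class):
    T = list(records)                     # working copy of the training set
    R = []                                # generated rules, initially empty
    for y in ordered_classes:
        total_y = sum(r["class"] == y for r in records)
        # Loop while the majority of class y records are still uncovered.
        while sum(r["class"] == y for r in T) > total_y / 2:
            attr, value, label = best_single_condition_rule(T, attributes, y)
            R.append((attr, value, label))
            T = [r for r in T if r[attr] != value]   # drop records covered by the new rule
    R.append((None, None, default_class))            # default rule {} -> default class
    return R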

Classifying a record:
The classification algorithm described below assumes that the rules are unordered and the classes are weighted.


R <- set of rules generated from the training set
T <- test record
W <- mapping from class name to weight, predefined, given as input
F <- mapping from class name to vote, computed for each test record

for each rule r in R
    if r covers T then
        add the weight W of r's predicted class to the vote F of that class
end for
output the class with the highest vote in F
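A Python sketch of this weighted-vote classification, assuming each rule is stored as a (condition dictionary, class) pair and W maps class names to weights (all names here are illustrative):

def classify(R, W, test_record, default_class):
    F = {}                                             # class name -> accumulated vote
    for conditions, label in R:
        covers = all(test_record.get(a) == v for a, v in conditions.items())
        if covers:
            F[label] = F.get(label, 0.0) + W[label]    # add the class weight to its vote
    if not F:
        return default_class                           # no rule covers the record
    return max(F, key=F.get)                           # class with the highest vote

rules = [({"odour": "pungent"}, "poisonous"), ({"bruises": "yes"}, "edible")]
weights = {"poisonous": 2.0, "edible": 1.0}
print(classify(rules, weights, {"odour": "pungent", "bruises": "yes"}, "edible"))   # poisonous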

Note: The rule set can also be created indirectly by pruning (simplifying) other already generated models, such as a decision tree.
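As a small illustration of this indirect approach, every root-to-leaf path of a decision tree can be read off as one rule; the tiny tree below is made up for illustration, and the extracted rules could then be pruned or simplified further:

tree = ("odour", {                 # internal node: test on the "odour" attribute
    "pungent": "poisonous",        # leaves carry the predicted class
    "almond":  "edible",
    "anise":   ("bruises", {"yes": "edible", "no": "edible"}),
})

def tree_to_rules(node, conditions=()):
    if isinstance(node, str):                       # leaf: emit one rule
        return [(dict(conditions), node)]
    attr, branches = node
    rules = []
    for value, child in branches.items():
        rules += tree_to_rules(child, conditions + ((attr, value),))
    return rules

for antecedent, consequent in tree_to_rules(tree):
    print(antecedent, "->", consequent)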
