
How to Calculate Precision in R Programming?

Last Updated : 26 Oct, 2022

In this article, we are going to learn how to calculate precision from a confusion matrix in the R programming language.

Precision

In a classification problem, precision measures how many of the observations the model labelled as positive are actually positive. In other words, of all the predicted positives, what fraction was correct? For example, if a model flags 10 observations as positive and 8 of them truly are positive, its precision is 8/10 = 0.8. Precision is computed directly from the counts in the confusion matrix.

Confusion matrix

A confusion matrix is a performance-measurement table that summarises how well a classification algorithm performed. It cross-tabulates the actual classes against the predicted classes, so every cell counts how many observations of a given actual class received a given prediction. Besides the total number of correct and incorrect predictions, it shows which types of errors the classifier makes, which is often more informative than the raw error count alone.
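To make the layout concrete, here is a minimal sketch (not part of the original article) that labels where the True Negatives (TN), False Positives (FP), False Negatives (FN), and True Positives (TP) sit when the rows are the actual classes and the columns are the predicted classes, matching the orientation of the table built in the steps below.

R

# Labelled layout of a binary confusion matrix:
# rows = actual class, columns = predicted class
layout <- matrix(c("TN", "FP",
                   "FN", "TP"),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(ACTUAL    = c("0", "1"),
                                 PREDICTED = c("0", "1")))
print(layout)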

How to Calculate Precision in R Programming?

Confusion Matrix

Step-Wise Procedure to Calculate Precision in the R Programming Language

Step 1: Define two vectors: one holding the actual values and the other holding the values predicted by a model. These two vectors will then be used to calculate the precision of the predictions.

R
# Creating data sets
actual_v <-  c(1,0,1,0,1,0,1,
               0,1,0,1,0,1,0,1,0,1) 
predict_v <- c(1,0,1,0,1,1,0,
               0,1,1,0,0,1,1,1,0,0)
  
# printing data sets
print(actual_v)
print(predict_v)


Output:

> print(actual_v)
 [1] 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
> print(predict_v)
 [1] 1 0 1 0 1 1 0 0 1 1 0 0 1 1 1 0 0

Step 2: Create the confusion matrix, which is a 2×2 table. From this matrix we can read off the True Positive and False Positive counts, which are then substituted into the formula to calculate precision.

R
# Creating Data sets
actual_v <-  c(1,0,1,0,1,0,1,
               0,1,0,1,0,1,0,1,0,1) 
predict_v <- c(1,0,1,0,1,1,0,
               0,1,1,0,0,1,1,1,0,0)
  
# Assuming a threshold of 0.5 for the positive class
table(ACTUAL=actual_v,
      PREDICTED=predict_v > 0.5)


Output:

      PREDICTED
ACTUAL FALSE TRUE
     0     5    3
     1     3    6

As the output shows, the cell [ACTUAL = 0, PREDICTED = FALSE] is 5 because five observations that are 0 in the actual data were also predicted as 0. The other cells can be read the same way; in particular, [ACTUAL = 1, PREDICTED = TRUE] holds the True Positives and [ACTUAL = 0, PREDICTED = TRUE] holds the False Positives.
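Rather than copying the counts off the printed table by hand, the same values can be read out of the table object directly. Below is a small sketch, assuming the actual_v and predict_v vectors from the earlier step are still in the workspace (the variable names cm, TP, and FP are my own, not from the original code).

R

# Store the confusion matrix instead of only printing it
cm <- table(ACTUAL = actual_v, PREDICTED = predict_v > 0.5)

# True Positives: actual 1 and predicted TRUE
TP <- cm["1", "TRUE"]
# False Positives: actual 0 but predicted TRUE
FP <- cm["0", "TRUE"]

print(TP)   # expected: 6
print(FP)   # expected: 3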

Step 3: This is the final step, where we calculate the precision by substituting the True Positive and False Positive counts from the confusion matrix into the formula.

The formula for precision:

Precision = True Positives / (True Positives + False Positives)

From the above-calculated confusion matrix:

TP (True Positive) = 6
FP (False Positive) = 3
Precision = 6 / (6 + 3) = 0.6666667

R
# Calculating precision: TP / (TP + FP) = 6 / (6 + 3)
precision <- 6 / (3 + 6)

# Printing the precision
print("The Precision score is:")
print(precision)


Output:

[1] "The Precision score is:"
> print(precision)
[1] 0.6666667
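As a cross-check, the same metric can be computed with an existing package. The sketch below assumes the caret package is installed; its precision() helper expects factors and needs to be told which factor level is the positive ("relevant") class.

R

# install.packages("caret")   # uncomment if caret is not installed
library(caret)

# Convert the numeric vectors to factors with matching levels
actual_f  <- factor(actual_v,  levels = c(0, 1))
predict_f <- factor(predict_v, levels = c(0, 1))

# 'relevant' marks 1 as the positive class
precision(data = predict_f, reference = actual_f, relevant = "1")
# expected to match the manually computed value: 0.6666667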

