CAP, short for ‘Cumulative Accuracy Profile’, is used to evaluate the performance of a classification model. It helps us understand and draw conclusions about the robustness of the classification model. To visualize this, three distinct curves are plotted:
- A random plot
- A plot obtained using an SVM classifier or a random forest classifier
- A perfect plot (an ideal line)
We will work with the following data to understand the concept.
Data Head :

     User ID  Gender  Age  EstimatedSalary  Purchased
0   15624510    Male   19            19000          0
1   15810944    Male   35            20000          0
2   15668575  Female   26            43000          0
3   15603246  Female   27            57000          0
4   15804002    Male   19            76000          0
Code : Data Input Output.
Input :

    Age  EstimatedSalary
0    19            19000
1    35            20000
2    26            43000
3    27            57000
4    19            76000
5    27            58000
6    27            84000
7    32           150000
8    25            33000
9    35            65000
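The input/output separation above can be sketched as follows. The DataFrame here is built from the five sample rows shown in the data head, standing in for the full dataset (which would normally be loaded with something like pd.read_csv):

```python
import pandas as pd

# The five sample rows from the data head above; the full dataset
# (loaded from a CSV file) is assumed, not shown
data = pd.DataFrame({
    'User ID': [15624510, 15810944, 15668575, 15603246, 15804002],
    'Gender': ['Male', 'Male', 'Female', 'Female', 'Male'],
    'Age': [19, 35, 26, 27, 19],
    'EstimatedSalary': [19000, 20000, 43000, 57000, 76000],
    'Purchased': [0, 0, 0, 0, 0],
})

# Input: Age and EstimatedSalary columns; Output: Purchased column
X = data.iloc[:, 2:4].values
y = data.iloc[:, 4].values
print(X)
```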
Code : Splitting dataset for training and testing.
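A sketch of the split, assuming a 70/30 train/test split (an accuracy of 91.66…% corresponds to 110 of 120 test points classified correctly, consistent with 120 test points from 400 rows); the synthetic arrays here stand in for the real features and labels:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Age/EstimatedSalary features and
# Purchased labels (the real values come from the dataset above)
rng = np.random.default_rng(0)
X = rng.integers(18, 60, size=(400, 2))
y = rng.integers(0, 2, size=400)

# 70/30 split; a fixed random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
print(X_train.shape, X_test.shape)
```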
Code : Random Forest Classifier
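A minimal sketch of fitting the random forest; the synthetic dataset from make_classification stands in for the real features, and the hyperparameters (n_estimators, random_state) are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic two-feature, two-class data standing in for the dataset above
X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit the random forest and predict on the held-out test set
classifier = RandomForestClassifier(n_estimators=10, random_state=0)
classifier.fit(X_train, y_train)
pred = classifier.predict(X_test)
print(pred[:10])
```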
Code : Finding the classifier accuracy.
Accuracy : 91.66666666666666
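The accuracy above is simply the fraction of test points predicted correctly. A sketch using scikit-learn's accuracy_score, with small stand-in arrays chosen so that 11 of 12 predictions are correct, reproducing the same 91.66…% figure:

```python
from sklearn.metrics import accuracy_score

# Stand-in true labels and predictions (hypothetical values);
# they differ in exactly one position, so 11 of 12 are correct
y_test = [0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1]
pred   = [0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1]

accuracy = accuracy_score(y_test, pred) * 100
print(accuracy)  # 11/12 correct -> 91.66666666666666
```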
The random plot is made under the assumption that the x-axis counts the data points contacted, ranging from 0 to the total number of points in the dataset. The y-axis counts how many of those points have a dependent-variable outcome of 1. The random plot is therefore a linearly increasing line. An example is a model that predicts whether a product is bought (positive outcome) by each individual in a group of people (classifying parameter) based on factors such as gender, age, income, etc. If group members are contacted at random, the cumulative number of products sold rises linearly toward a maximum corresponding to the total number of buyers within the group. This distribution is called the “random” CAP.
Code : Random Model
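The random line can be sketched as a straight line from the origin to (total, class_1_count); the test-set size of 120 is an assumption here, and the 41 positives come from the article:

```python
import matplotlib
matplotlib.use('Agg')          # non-interactive backend for this sketch
import matplotlib.pyplot as plt

total = 120          # number of test points (assumed test-set size)
class_1_count = 41   # positive outcomes in the test set (from the article)

# Random model: positives accumulate linearly with the number of contacts
plt.plot([0, total], [0, class_1_count], c='b',
         linestyle='--', label='Random Model')
plt.legend()
```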
Random Forest Classifier Line
Code : The random forest classification algorithm is applied to the dataset to plot the classifier's CAP line.
Explanation: pred holds the predictions made by the random forest classifier. We zip the predictions and test values and sort them in reverse order so that higher values come first. We extract only the y_test values into an array and store it in lm. np.cumsum() creates an array in which each element is the sum of all preceding values plus the current one. The x-values range from 0 to the total; we pass total + 1 to arange() because arange() excludes its stop value, and we want the x-axis to reach the total.
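The steps just described can be sketched like this; the stand-in pred and y_test arrays below replace the real train/test split and fitted classifier:

```python
import numpy as np

# Stand-in predictions and true test labels; in the article these come
# from the fitted random forest and the test split
rng = np.random.default_rng(0)
y_test = rng.integers(0, 2, size=120)
pred = y_test.copy()
pred[rng.random(120) < 0.1] ^= 1   # flip ~10% to mimic an imperfect model

total = len(y_test)

# Pair each prediction with its true label and sort highest-first
lm = [y for _, y in sorted(zip(pred, y_test), reverse=True)]

# Running count of positives found; prepend 0 so the curve starts at (0, 0)
y_values = np.append([0], np.cumsum(lm))

# arange() excludes its stop value, hence total + 1 to reach `total`
x_values = np.arange(0, total + 1)
```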
We then plot the perfect plot (or the ideal line). A perfect prediction determines exactly which group members will buy the product, so that the maximum number of products sold is reached with the minimum number of calls. This produces a steep line on the CAP curve that stays flat once the maximum is reached (contacting the remaining group members will not lead to more products sold); this is the “perfect” CAP.
Explanation: A perfect model finds all positive outcomes in exactly as many tries as there are positive outcomes. Our dataset has a total of 41 positive outcomes, so the maximum is reached at exactly 41.
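A sketch of the perfect line, again assuming 120 test points and the 41 positives from the article:

```python
import matplotlib
matplotlib.use('Agg')          # non-interactive backend for this sketch
import matplotlib.pyplot as plt

total = 120          # assumed test-set size
class_1_count = 41   # positive outcomes, from the article

# Perfect model: one positive found per contact until all 41 are found,
# after which the curve stays flat at the maximum
plt.plot([0, class_1_count, total],
         [0, class_1_count, class_1_count],
         c='grey', linewidth=2, label='Perfect Model')
plt.legend()
```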
In any case, our classifier's line should never fall below the random line; a model that does is considered very poor. Since the plotted classifier line is close to the ideal line, we can say our model is a good fit. Take the area under the perfect plot and call it aP. Take the area under the prediction model and call it aR. The ratio aR/aP is called the Accuracy Rate: the closer its value is to 1, the better the model. This is one way to analyse it.
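The area-ratio analysis can be sketched with scikit-learn's trapezoidal auc helper; the model curve below is a hypothetical closed-form stand-in for the plotted classifier line, not the article's actual curve:

```python
import numpy as np
from sklearn.metrics import auc

total, positives = 120, 41          # assumed test size; positives from article
x = np.arange(0, total + 1)

# Perfect curve: rises one-for-one until all positives are found
y_perfect = np.minimum(x, positives)

# Hypothetical model curve for illustration: positives found early,
# but more slowly than by the perfect model
y_model = positives * (1 - (1 - x / total) ** 3)

aP = auc(x, y_perfect)   # area under the perfect plot
aR = auc(x, y_model)     # area under the prediction model
print(round(aR / aP, 3)) # the closer to 1, the better the model
```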
Another way to analyse it is to draw a vertical line at the 50% point of the x-axis up to the prediction model's curve, and then project that intersection onto the y-axis. Let us say that we obtain the projected value X%.
- X < 60% : a really bad model
- 60% < X < 70% : still a poor model, though better than the first case
- 70% < X < 80% : a good model
- 80% < X < 90% : a very good model
- 90% < X < 100% : suspiciously good, and possibly a case of overfitting
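The 50% projection can be sketched as follows, reusing the same hypothetical model curve as before (all sizes and the curve shape are illustrative assumptions):

```python
import numpy as np

total, positives = 120, 41
x = np.arange(0, total + 1)
y_model = positives * (1 - (1 - x / total) ** 3)   # hypothetical model curve

# Read the model curve at 50% of the x-axis, as a percentage of all positives
half = total // 2
X_percent = y_model[half] / positives * 100
print(X_percent)  # 87.5 -> falls in the "very good model" bracket
```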
So according to this analysis, we can determine how accurate our model is.
Improved By : Akanksha_Rai