Coverage error measures how many of the top-scored predicted labels we must include so that no ground-truth label is missed. It is useful when we want to know, on average, how far down the ranked list of predictions we need to go to cover every true label of a sample.
Given a binary indicator matrix of ground-truth labels $y \in \{0, 1\}^{n_\text{samples} \times n_\text{labels}}$, the score associated with each label is denoted by $\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}$.

The coverage error is defined as:

$$\text{coverage}(y, \hat{f}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \max_{j : y_{ij} = 1} \text{rank}_{ij}$$

where rank is defined as:

$$\text{rank}_{ij} = \left|\left\{ k : \hat{f}_{ik} \geq \hat{f}_{ij} \right\}\right|$$
Code: computing the coverage error for a set of prediction scores and true labels using scikit-learn.
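Below is a minimal sketch using scikit-learn's coverage_error. The first row of labels and scores comes from the worked example that follows; the other two rows are made-up values chosen so that the metric evaluates to 2.0.

```python
import numpy as np
from sklearn.metrics import coverage_error

# Ground-truth labels: row 1 matches the worked example below;
# rows 2 and 3 are illustrative values (assumed, not from the source).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [0, 1, 0]])

# Prediction scores for each label.
y_score = np.array([[0.75, 0.5, 1.0],
                    [0.20, 0.9, 0.4],
                    [0.60, 0.1, 0.3]])

print(coverage_error(y_true, y_score))
```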
Output:

2.0
Let's calculate the coverage error of the above example manually.
Our first sample has a ground-truth value of [1, 0, 1]. To cover both true labels, we sort its predictions (here [0.75, 0.5, 1]) in descending order: the true label with score 1 is ranked first and the true label with score 0.75 is ranked second, so we need the top-2 predicted labels for this sample. Similarly, the second and third samples need the top-1 and top-3 predicted labels. Averaging these counts over the three samples gives (2 + 1 + 3) / 3 = 2.0.
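To make the manual calculation concrete, here is a from-scratch sketch of the same computation, following the rank definition above (the helper name coverage_error_manual is ours, not part of any library):

```python
import numpy as np

def coverage_error_manual(y_true, y_score):
    # For each sample, find the worst (largest) rank among its true labels,
    # where rank of label j = number of labels scored at least as high as j.
    per_sample = []
    for truth, scores in zip(y_true, y_score):
        ranks = [(scores >= scores[j]).sum()
                 for j in range(len(scores)) if truth[j] == 1]
        per_sample.append(max(ranks))
    # Average the per-sample worst ranks.
    return float(np.mean(per_sample))

y_true = np.array([[1, 0, 1], [0, 1, 0], [0, 1, 0]])
y_score = np.array([[0.75, 0.5, 1.0], [0.2, 0.9, 0.4], [0.6, 0.1, 0.3]])
print(coverage_error_manual(y_true, y_score))  # 2.0, matching scikit-learn
```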
The best possible value of the coverage error is the average number of true labels per sample; it is achieved when every true label is scored above every false one.
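As a quick illustration (with made-up scores), a perfect ranking yields exactly that lower bound:

```python
import numpy as np
from sklearn.metrics import coverage_error

# Hypothetical perfect ranking: every true label outscores every false one.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_perfect = np.array([[0.9, 0.1, 0.8],
                      [0.2, 0.9, 0.1]])

# (2 + 1) / 2 = 1.5, the average number of true labels per sample.
print(coverage_error(y_true, y_perfect))  # 1.5
```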