Multilabel Ranking Metrics: Label Ranking Average Precision | ML

Label ranking average precision (LRAP) is related to average precision, but it uses label ranking instead of the precision-recall curve. The metric evaluates the label ranking of each sample: its value is always greater than 0, and the best possible value is 1.
LRAP essentially asks: for each true label of each sample, what fraction of the labels ranked at or above it are also true labels?
Given a binary indicator matrix of ground-truth labels

 y \in \left\{ 0, 1 \right\}^{n_{samples} \times n_{labels}}

The score associated with each label is denoted by \hat{f}, where

 \hat{f} \in \mathbb{R}^{n_{samples} \times n_{labels}}

Then LRAP can be calculated using the following formula:

 LRAP(y, \hat{f}) = \dfrac{1}{n_{samples}} \sum_{i=0}^{n_{samples}-1} \dfrac{1}{\left\| y_i \right\|_0} \sum_{j:y_{ij}=1} \dfrac{\left| L_{ij} \right|}{rank_{ij}}

where,

 L_{ij} = \left\{ k : y_{ik} = 1,\ \hat{f}_{ik} \geq \hat{f}_{ij} \right\}

and

 rank_{ij} = \left| \left\{ k : \hat{f}_{ik} \geq \hat{f}_{ij} \right\} \right|
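
To make these definitions concrete, here is a minimal from-scratch sketch of the formula in NumPy (the function name lrap_manual is ours, not from any library; samples with no true labels are not handled):

import numpy as np

def lrap_manual(y_true, y_score):
    # mean over samples of the average of |L_ij| / rank_ij over true labels
    n_samples = y_true.shape[0]
    total = 0.0
    for i in range(n_samples):
        true_idx = np.flatnonzero(y_true[i])   # indices j with y_ij = 1
        sample_score = 0.0
        for j in true_idx:
            at_or_above = y_score[i] >= y_score[i, j]
            rank = np.sum(at_or_above)                  # rank_ij
            L = np.sum(at_or_above & (y_true[i] == 1))  # |L_ij|
            sample_score += L / rank
        total += sample_score / len(true_idx)  # divide by ||y_i||_0
    return total / n_samples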

Code: Python code to compute LRAP using scikit-learn


# import numpy and scikit-learn libraries
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

# ground-truth labels (binary indicator matrix) and predicted scores
y_true = np.array([[1, 0, 0],
                   [1, 0, 1],
                   [1, 1, 0]])
y_score = np.array([[0.75, 0.5, 1],
                    [1, 0.2, 0.1],
                    [0.9, 0.7, 0.6]])

# compute and print the LRAP score
print(label_ranking_average_precision_score(y_true, y_score))



Output:

0.777

To understand the above example, consider three categories: human (represented by [1, 0, 0]), cat (represented by [0, 1, 0]) and dog (represented by [0, 0, 1]). We were given three samples: [1, 0, 0], [1, 0, 1] and [1, 1, 0], so there are 5 ground-truth labels in total (3 human, 1 cat and 1 dog). In the first sample, the only true label (human) is ranked 2nd among the predicted scores, so rank = 2. Next we count how many true labels appear at or above that rank; only human itself does, so the numerator is 1 and the fraction is 1/2 = 0.5.
Therefore, the LRAP value of the 1st sample is:


 LRAP_{1st}=\dfrac{\left ( 0.5 \right )}{1} = 0.5
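
As a quick check, rank and |L| for this first sample can be computed directly with NumPy (the variable names below are illustrative only):

import numpy as np

y_true_1 = np.array([1, 0, 0])        # only "human" is a true label
y_score_1 = np.array([0.75, 0.5, 1])  # "dog" outscores "human"

j = 0  # index of the true label (human)
at_or_above = y_score_1 >= y_score_1[j]       # labels scored >= 0.75
rank = np.sum(at_or_above)                    # rank = 2
L = np.sum(at_or_above & (y_true_1 == 1))     # |L| = 1
print(L / rank)                               # 0.5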

In the second sample, human is ranked first, followed by cat and dog. The fraction for human is 1/1 = 1, and the fraction for dog is 2/3 ≈ 0.67 (the number of true labels ranked at or above dog, divided by dog's rank in the predicted scores).
The LRAP value of the 2nd sample is:

 LRAP_{2nd} = \dfrac{1 + \frac{2}{3}}{2} \approx 0.83

Similarly, for the third sample, the fractions are 1/1 = 1 for the human class and 2/2 = 1 for the cat class. The LRAP value of the 3rd sample is:


 LRAP_{3rd} = \dfrac{1 + 1}{2} = 1

Therefore, the total LRAP is the sum of the per-sample LRAP values divided by the number of samples:

 LRAP = \dfrac{1}{3}\left( 0.5 + \dfrac{5}{6} + 1 \right) = \dfrac{7}{9} \approx 0.777
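
The same arithmetic can be checked in a few lines, reusing the per-sample values derived above (this is only a sanity check against scikit-learn's implementation):

import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

# per-sample LRAP values from the walkthrough
per_sample = np.array([1 / 2, (1 + 2 / 3) / 2, (1 + 1) / 2])
print(per_sample.mean())  # 0.777...

# matches the library result on the same data
y_true = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0]])
y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1], [0.9, 0.7, 0.6]])
print(label_ranking_average_precision_score(y_true, y_score))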



