Normalized Discounted Cumulative Gain – Multilabel Ranking Metrics | ML

Discounted Cumulative Gain
Discounted Cumulative Gain (DCG) is a metric for measuring ranking quality. It is most often used in information retrieval problems, such as measuring the effectiveness of a search engine's ranking algorithm by scoring the documents it returns according to their relevance to the search query.

Let's consider a search engine that outputs 5 documents, named D1, D2, D3, D4, D5, in that order. We first need to define a relevance scale (0-3), where:

  • 0 : not relevant
  • 1-2 : somewhat relevant
  • 3 : completely relevant

Suppose these documents have relevance scores:

  • D1 : 3
  • D2 : 2
  • D3 : 0
  • D4 : 0
  • D5 : 1


The Cumulative Gain (CG) is the sum of these relevance scores and can be calculated as:

 CG = \sum_{i=1}^{5} rel_{i} = 3 + 2 + 0 + 0 + 1 = 6
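Since the cumulative gain is just a sum, it can be checked in a line of plain Python:

```python
# relevance scores of D1..D5 in the order the engine returned them
relevances = [3, 2, 0, 0, 1]
print(sum(relevances))  # 6
```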

The discounted cumulative gain can be calculated by the formula:

 DCG = \sum_{i=1}^{5} \dfrac{rel_i}{\log_2 (i+1)}

Therefore, the discounted cumulative gain of the above example is:

 DCG_5 = \dfrac{3}{\log_2 (2)} + \dfrac{2}{\log_2 (3)} + \dfrac{0}{\log_2 (4)} + \dfrac{0}{\log_2 (5)} + \dfrac{1}{\log_2 (6)} \\  DCG_5 = 3 + \dfrac{2}{1.585} + 0 + 0 + \dfrac{1}{2.585} \\  DCG_5 = 3 + 1.262 + 0.387 \\  DCG_5 \simeq 4.65
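This hand calculation can be checked with a minimal sketch of the DCG formula in plain Python (the dcg helper below is illustrative, not the scikit-learn function used later):

```python
import math

def dcg(relevances):
    # DCG = sum over positions i (starting at 1) of rel_i / log2(i + 1)
    return sum(rel / math.log2(i + 1)
               for i, rel in enumerate(relevances, start=1))

# relevance scores of D1..D5 in the order the engine returned them
print(round(dcg([3, 2, 0, 0, 1]), 4))  # 4.6487
```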

Next, we sort the documents in descending order of relevance and calculate the DCG of that ideal ordering to obtain the Ideal Discounted Cumulative Gain (IDCG).



 IDCG_5 = \dfrac{3}{\log_2 (2)} + \dfrac{2}{\log_2 (3)} + \dfrac{1}{\log_2 (4)} + \dfrac{0}{\log_2 (5)} + \dfrac{0}{\log_2 (6)} \\  IDCG_5 = 3 + \dfrac{2}{1.585} + \dfrac{1}{2} + 0 + 0 \\  IDCG_5 = 3 + 1.262 + 0.5 \\  IDCG_5 \simeq 4.76

Now, we calculate the Normalized DCG using the following formula:

 nDCG = \dfrac{DCG_5}{IDCG_5} \\  nDCG = \dfrac{4.65}{4.76} \\  nDCG \simeq 0.98
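Putting the steps together, nDCG can be sketched end-to-end in plain Python (the dcg and ndcg helpers are illustrative, not library functions):

```python
import math

def dcg(relevances):
    # DCG = sum over positions i (starting at 1) of rel_i / log2(i + 1)
    return sum(rel / math.log2(i + 1)
               for i, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    # IDCG is the DCG of the same scores sorted in descending order
    ideal = sorted(relevances, reverse=True)
    return dcg(relevances) / dcg(ideal)

print(round(ndcg([3, 2, 0, 0, 1]), 4))  # 0.9762
```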

Code: Python program for Normalized Discounted Cumulative Gain

# import the required packages
from sklearn.metrics import ndcg_score, dcg_score
import numpy as np

# relevance scores in ideal order
true_relevance = np.asarray([[3, 2, 1, 0, 0]])

# relevance scores in output order
relevance_score = np.asarray([[3, 2, 0, 0, 1]])

# DCG score
dcg = dcg_score(true_relevance, relevance_score)
print("DCG score : ", dcg)

# IDCG score
idcg = dcg_score(true_relevance, true_relevance)
print("IDCG score : ", idcg)

# Normalized DCG score
ndcg = dcg / idcg
print("nDCG score : ", ndcg)

# or use scikit-learn's ndcg_score function directly
print("nDCG score (from function) : ", ndcg_score(
    true_relevance, relevance_score))


Output:

DCG score :  4.670624189796882
IDCG score :  4.761859507142915
nDCG score :  0.980840401274087
nDCG score (from function) :  0.980840401274087

Note that the DCG printed here (about 4.6706) is slightly higher than the hand-computed value of about 4.65. The relevance_score array contains tied values (the two 0s), and by default scikit-learn's dcg_score averages the gains over the possible orderings of tied documents rather than taking them in the given order.

Limitations of Normalized Discounted Cumulative Gain (NDCG):

  • NDCG does not penalize irrelevant documents in the output. For example, the outputs [3] and [3, 0, 0] have the same NDCG, even though the second output contains two irrelevant documents.
  • Because no standard is defined for the number of output documents, NDCG also does not penalize missing relevant documents. For example, the outputs [3, 3, 3] and [3, 3, 3, 3] for the same query are considered equally good: DCG_3 is calculated for the first and DCG_4 for the second, and both normalize to the same score.
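Both limitations can be demonstrated with a small illustrative nDCG helper (plain Python, not a library function):

```python
import math

def ndcg(relevances):
    # nDCG = DCG / IDCG, both using the log2(i + 1) position discount
    dcg = lambda rels: sum(r / math.log2(i + 1)
                           for i, r in enumerate(rels, start=1))
    return dcg(relevances) / dcg(sorted(relevances, reverse=True))

# trailing irrelevant documents are not penalized:
print(ndcg([3]), ndcg([3, 0, 0]))           # 1.0 1.0

# a missing relevant document is not penalized either:
print(ndcg([3, 3, 3]), ndcg([3, 3, 3, 3]))  # 1.0 1.0
```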



