A perfectly complete clustering is one in which all members of a given class are assigned to the same cluster. Completeness (completeness_score) measures how close a clustering result comes to this ideal.
This metric is independent of the absolute values of the labels: a permutation of the cluster label values won’t change the score value in any way.
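As a quick sketch (using made-up labels), permuting the cluster label values leaves the score untouched:
Python3
from sklearn.metrics import completeness_score

true_labels = [0, 0, 1, 1]
pred_a = [0, 0, 1, 1]   # one labeling of the clusters
pred_b = [1, 1, 0, 0]   # same partition, cluster labels permuted

# Both calls print 1.0 because the underlying grouping is identical
print(completeness_score(true_labels, pred_a))
print(completeness_score(true_labels, pred_b))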
sklearn.metrics.completeness_score()
Syntax: sklearn.metrics.completeness_score(labels_true, labels_pred)
Parameters:
- labels_true:<int array, shape = [n_samples]>: It accepts the ground truth class labels to be used as a reference.
- labels_pred: <array-like of shape (n_samples,)>: It accepts the cluster labels to evaluate.
Returns: completeness score between 0.0 and 1.0, where 1.0 stands for a perfectly complete labeling.
Switching labels_true with labels_pred will return the homogeneity_score.
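As a rough check (with arbitrary example labels), swapping the two arguments of completeness_score reproduces the value of homogeneity_score:
Python3
from sklearn.metrics import completeness_score, homogeneity_score

true_labels = [0, 0, 1, 1]
pred_labels = [0, 0, 1, 2]

# completeness_score(a, b) equals homogeneity_score(b, a)
print(completeness_score(true_labels, pred_labels))
print(homogeneity_score(pred_labels, true_labels))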
Example 1:
Python3
from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn.metrics import completeness_score

# Load the handwritten digits dataset
digits = datasets.load_digits()
Y = digits.target
X = digits.data

# Cluster the samples with KMeans and evaluate the completeness
# of the resulting cluster labels against the true digit classes
kmeans = KMeans(n_clusters=2)
kmeans.fit(X)
labels = kmeans.predict(X)

print(completeness_score(Y, labels))
Output:
0.8471148027985769
Example 2: Perfectly complete labeling:
Python3
from sklearn.metrics.cluster import completeness_score

Cscore = completeness_score([0, 1, 0, 1],
                            [1, 0, 1, 0])
print(Cscore)
Output:
1.0
Example 3: Non-perfect labelings that assign all members of each class to the same cluster are still complete:
Python3
from sklearn.metrics.cluster import completeness_score

Cscore = completeness_score([0, 1, 2, 3],
                            [0, 0, 1, 1])
print(Cscore)
Output:
0.9999999999999999
Example 4: If members of a class are split across different clusters, the assignment cannot be complete:
Python3
from sklearn.metrics.cluster import completeness_score

Cscore = completeness_score([0, 0, 0, 0],
                            [0, 1, 2, 3])
print(Cscore)
Output:
0.0