# ML | Hierarchical clustering (Agglomerative and Divisive clustering)

In data mining and statistics, hierarchical clustering is a method of cluster analysis that seeks to build a hierarchy of clusters, i.e. a tree-like structure of nested clusters.

**There are two main types of hierarchical clustering strategies –**

**Agglomerative Clustering:** Also known as the bottom-up approach or hierarchical agglomerative clustering (HAC). It produces a structure that is more informative than the unstructured set of clusters returned by flat clustering, and it does not require us to prespecify the number of clusters. Bottom-up algorithms treat each data point as a singleton cluster at the outset and then successively merge pairs of clusters until all points have been merged into a single cluster that contains all the data.

**Algorithm :**

```
given a dataset (d1, d2, d3, ...., dN) of size N
# compute the distance matrix
for i = 1 to N:
    # the distance matrix is symmetric about the primary
    # diagonal, so we compute only its lower part
    for j = 1 to i:
        dis_mat[i][j] = distance(di, dj)
make each data point a singleton cluster
repeat
    merge the two clusters having the minimum distance
    update the distance matrix
until only a single cluster remains
```
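The pseudocode above can be sketched directly in plain Python. This is a naive single-linkage version for illustration only; the function name and return format are choices made here, not part of any library.

```python
import math

def naive_agglomerative(points):
    # start with every point in its own singleton cluster
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        # exhaustively scan all cluster pairs for the minimum
        # single-linkage distance (closest pair of points)
        best = None
        for i in range(len(clusters)):
            for j in range(i):
                d = min(math.dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((clusters[j], clusters[i], d))
        # merge cluster i into cluster j and drop i
        clusters[j] = clusters[j] + clusters[i]
        del clusters[i]
    return merges

# same six points as the sklearn example below
merges = naive_agglomerative([(1, 2), (1, 4), (1, 0),
                              (4, 2), (4, 4), (4, 0)])
```

A dataset of N points always produces N − 1 merges, which is why the exhaustive scan makes this version cubic overall.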

Python implementation of the above algorithm using the scikit-learn library:

```python
from sklearn.cluster import AgglomerativeClustering
import numpy as np

# randomly chosen dataset
X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]])

# here we need to mention the number of clusters
# otherwise the result will be a single cluster
# containing all the data
clustering = AgglomerativeClustering(n_clusters=2).fit(X)

# print the class labels
print(clustering.labels_)
```

**Output :**

```
[1 1 1 0 0 0]
```

**Divisive clustering :** Also known as the top-down approach. This algorithm also does not require us to prespecify the number of clusters. Top-down clustering requires a method for splitting a cluster: it starts with the whole dataset in one cluster and proceeds by splitting clusters recursively until every data point ends up in its own singleton cluster.

**Algorithm :**

```
given a dataset (d1, d2, d3, ...., dN) of size N
at the top we have all the data in one cluster
split the cluster using a flat clustering method, e.g. K-Means
repeat
    choose the best cluster among all current clusters to split
    split that cluster with the flat clustering algorithm
until each data point is in its own singleton cluster
```
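A rough sketch of this top-down procedure is shown below, using K-Means (k = 2) as the flat-clustering subroutine. Note that the `divisive` function, the bisecting split, and the "split the largest cluster" selection rule are illustrative assumptions, not a scikit-learn API.

```python
import numpy as np
from sklearn.cluster import KMeans

def divisive(X, n_clusters):
    # start: all data in one cluster (stored as index arrays into X)
    clusters = [np.arange(len(X))]
    while len(clusters) < n_clusters:
        # choose the "best" cluster to split -- here, simply the largest
        idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        members = clusters.pop(idx)
        # split it with a flat clustering method (K-Means with k = 2)
        labels = KMeans(n_clusters=2, n_init=10,
                        random_state=0).fit_predict(X[members])
        clusters.append(members[labels == 0])
        clusters.append(members[labels == 1])
    return clusters

X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]])
parts = divisive(X, 2)
```

Running the loop until every cluster is a singleton would reproduce the full hierarchy described in the pseudocode; stopping early at a target number of clusters is what makes the top-down approach cheap in practice.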

**Hierarchical Agglomerative vs Divisive clustering –**

- Divisive clustering is more *complex* than agglomerative clustering, because it needs a flat clustering method as a "subroutine" to split each cluster until every data point is in its own singleton cluster.
- Divisive clustering is more *efficient* if we do not generate a complete hierarchy all the way down to individual data leaves. The time complexity of naive agglomerative clustering is **O(n³)**, because we exhaustively scan the N × N matrix dis_mat for the lowest distance in each of the N − 1 iterations. Using a priority queue data structure we can reduce this complexity to **O(n² log n)**, and with further optimizations it can be brought down to **O(n²)**. For divisive clustering, by contrast, given a fixed number of top levels and an efficient flat algorithm like K-Means, the running time is linear in the number of patterns and clusters.
- The divisive algorithm is also more *accurate*. Agglomerative clustering makes decisions by considering local patterns or neighboring points without initially taking the global distribution of the data into account, and these early decisions cannot be undone. Divisive clustering, on the other hand, takes the global distribution of the data into consideration when making top-level partitioning decisions.
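The merge hierarchy discussed above can also be inspected with SciPy's `linkage` function (an assumption here, since SciPy is not used elsewhere in this article). Each row of the returned matrix records one merge: the two cluster indices, the distance at which they merged, and the size of the new cluster.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# same six points as the sklearn example above
X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]])

# single-linkage agglomerative clustering; Z has N - 1 = 5 merge rows
Z = linkage(X, method='single')

# cutting the dendrogram at 2 clusters gives flat labels,
# analogous to AgglomerativeClustering(n_clusters=2)
labels = fcluster(Z, t=2, criterion='maxclust')
```

This makes the "N − 1 iterations" in the complexity argument concrete: the linkage matrix has exactly one row per merge.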

