Davies–Bouldin index
The Davies–Bouldin index (DBI), introduced by David L. Davies and Donald W. Bouldin in 1979, is a metric for evaluating clustering algorithms. It is an internal evaluation scheme: the quality of the clustering is judged using quantities and features inherent to the dataset itself.
Preliminaries
Given a set of n-dimensional points, let C_i be a cluster of data points, and let X_j be an n-dimensional feature vector assigned to cluster C_i. Define the within-cluster scatter
S_i = ( (1/T_i) Σ_{j=1}^{T_i} |X_j − A_i|_p^q )^{1/q}

Here A_i is the centroid of C_i and T_i is the size of cluster i. S_i is the qth root of the qth moment of the points in cluster i about the mean. If q = 1, then S_i is the average distance between the feature vectors in cluster i and the centroid of the cluster. Usually the value of p is 2, which makes the distance a Euclidean distance function.
- M_{i,j} = ||A_i − A_j||_p = ( Σ_{k=1}^{n} |a_{k,i} − a_{k,j}|^p )^{1/p} is a measure of separation between cluster C_i and cluster C_j.
- a_{k,i} is the kth element of A_i, and there are n such elements in A_i, for it is an n-dimensional centroid.
Here k indexes the features of the data, and this is essentially the Euclidean distance between the centers of clusters i and j when p equals 2.
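As a concrete illustration of the preliminaries above, the following sketch computes S_i (with p = 2, q = 1) and M_{i,j} for two small, hypothetical clusters of 2-D points; the data values are made up for the example:

```python
import numpy as np

# Hypothetical sample data: two small clusters of 2-D points.
cluster_i = np.array([[1.0, 2.0], [2.0, 3.0], [1.5, 2.5]])
cluster_j = np.array([[8.0, 8.0], [9.0, 9.0]])

# Centroids A_i and A_j.
A_i = cluster_i.mean(axis=0)
A_j = cluster_j.mean(axis=0)

# Scatter S_i with p = 2 (Euclidean norm) and q = 1:
# the average distance from each point in the cluster to its centroid.
S_i = np.mean(np.linalg.norm(cluster_i - A_i, axis=1))

# Separation M_ij: the Euclidean distance between the two centroids.
M_ij = np.linalg.norm(A_i - A_j)
```

A tight cluster yields a small S_i, and well-separated centroids yield a large M_{i,j}, which is the desirable combination.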
Definition
Let R_{i,j} be a measure of how good the clustering scheme is. By definition, this measure has to account for M_{i,j}, the separation between the ith and the jth cluster, which ideally should be as large as possible, and S_i, the within-cluster scatter for cluster i, which should be as low as possible. Hence the Davies–Bouldin index is defined as a function of S_i and M_{i,j} such that these properties are conserved:
- R_{i,j} ≥ 0.
- R_{i,j} = R_{j,i}.
- When S_j ≥ S_k and M_{i,j} = M_{i,k}, then R_{i,j} > R_{i,k}.
- When S_j = S_k and M_{i,j} ≤ M_{i,k}, then R_{i,j} > R_{i,k}.
With this formulation, the lower the value, the better the separation of the clusters and the 'tightness' inside the clusters.
A solution that satisfies these properties is:

R_{i,j} = (S_i + S_j) / M_{i,j}
This is used to define D_i:

D_i ≡ max_{j ≠ i} R_{i,j}
If N is the number of clusters:

DB ≡ (1/N) Σ_{i=1}^{N} D_i
DB is called the Davies–Bouldin index. It depends both on the data and on the algorithm. D_i chooses the worst-case scenario: its value equals R_{i,j} for the cluster most similar to cluster i. Many variations of this formulation are possible, such as taking the average cluster similarity, a weighted average, and so on.
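The definition above can be sketched directly in code. This is a minimal from-scratch implementation assuming p = 2 and q = 1 (Euclidean distances and average scatter), not an optimized or canonical one:

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies–Bouldin index, assuming p = 2, q = 1 (a minimal sketch)."""
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # S_i: average distance of each point in cluster i to its centroid.
    scatters = np.array([
        np.mean(np.linalg.norm(X[labels == c] - centroids[k], axis=1))
        for k, c in enumerate(clusters)
    ])
    N = len(clusters)
    D = np.zeros(N)
    for i in range(N):
        # R_ij = (S_i + S_j) / M_ij; D_i takes the worst case over j != i.
        R = [
            (scatters[i] + scatters[j])
            / np.linalg.norm(centroids[i] - centroids[j])
            for j in range(N) if j != i
        ]
        D[i] = max(R)
    # DB is the mean of the worst-case similarities D_i.
    return D.mean()
```

On well-separated, compact clusters this returns a value close to 0; overlapping or diffuse clusters drive it up.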
Explanation
Lower index values indicate a better clustering result. The index is improved (lowered) by increased separation between clusters and decreased variation within clusters.
These conditions constrain the index so defined to be symmetric and non-negative. Because it is defined as a function of the ratio of within-cluster scatter to between-cluster separation, a lower value means the clustering is better. The index is the similarity between each cluster and the cluster most similar to it, averaged over all clusters, where the similarity measure is R_{i,j} defined above. This affirms the idea that no cluster should be similar to another, and hence the best clustering scheme minimizes the Davies–Bouldin index. Since the index is an average over all N clusters, a good way of deciding how many clusters actually exist in the data is to plot it against the number of clusters over which it is calculated. The number N for which this value is lowest is a good estimate of the number of clusters into which the data could ideally be classified. This has applications in deciding the value of k in the k-means algorithm.
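This model-selection use can be sketched as follows, here using scikit-learn's KMeans and davies_bouldin_score on a synthetic, illustrative dataset (the centers, sample counts, and candidate range are hypothetical choices for the example):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

# Synthetic data with three well-separated blobs (illustrative only).
X, _ = make_blobs(n_samples=300,
                  centers=[[0, 0], [10, 10], [20, 0]],
                  cluster_std=1.0, random_state=0)

# Compute the Davies–Bouldin index over a range of candidate k values.
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)

# The k with the lowest index is a good estimate of the cluster count.
best_k = min(scores, key=scores.get)
```

Plotting `scores` against k makes the minimum visually apparent; on this dataset the minimum falls at the true number of blobs.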
Soft version of the Davies–Bouldin index
Recently, the Davies–Bouldin index has been extended to the domain of soft clustering.
Implementations
The scikit-learn Python open source library provides an implementation of this metric in the sklearn.metrics module.[3]
R provides a similar implementation in its clusterSim package.[4]
A Java implementation is found in ELKI, where it can be compared with many other clustering quality indices.
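As a minimal usage sketch of the scikit-learn implementation mentioned above (the dataset here is synthetic and purely illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

# Synthetic data: four Gaussian blobs (illustrative only).
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)
labels = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)

# Lower values indicate better-separated, tighter clusters.
print(davies_bouldin_score(X, labels))
```

Note that `davies_bouldin_score` takes the raw data and the label assignment, not a fitted model, so it works with the output of any clustering algorithm.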
See also
- Silhouette score
- Dunn index
- Cluster analysis
- Calinski-Harabasz index
- Determining the number of clusters in a data set
Notes and references
- Davies, David L.; Bouldin, Donald W. (1979). "A Cluster Separation Measure". IEEE Transactions on Pattern Analysis and Machine Intelligence. PAMI-1 (2): 224–227. doi:10.1109/TPAMI.1979.4766909. S2CID 13254783.
- ^ "sklearn.metrics.davies_bouldin_score". scikit-learn. Retrieved 2023-11-22.
- ^ "R: Davies-Bouldin index". search.r-project.org. Retrieved 2023-11-22.