K-means clustering explained
k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. k-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances. For instance, better Euclidean solutions can be found using k-medians and k-medoids.
The problem is computationally difficult (NP-hard); however, efficient heuristic algorithms converge quickly to a local optimum. These are usually similar to the expectation–maximization algorithm for mixtures of Gaussian distributions, in that both employ an iterative refinement approach. Both use cluster centers to model the data; however, k-means clustering tends to find clusters of comparable spatial extent, while the Gaussian mixture model allows clusters to have different shapes.
The unsupervised k-means algorithm has a loose relationship to the k-nearest neighbor classifier, a popular supervised machine learning technique for classification that is often confused with k-means due to the name. Applying the 1-nearest neighbor classifier to the cluster centers obtained by k-means classifies new data into the existing clusters. This is known as nearest centroid classifier or Rocchio algorithm.
Description
Given a set of observations $(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)$, where each observation is a $d$-dimensional real vector, k-means clustering aims to partition the $n$ observations into $k$ ($\le n$) sets $S = \{S_1, S_2, \ldots, S_k\}$ so as to minimize the within-cluster sum of squares (WCSS) (i.e. variance). Formally, the objective is to find:

$$\underset{S}{\arg\min} \sum_{i=1}^{k} \sum_{\mathbf{x} \in S_i} \left\| \mathbf{x} - \boldsymbol{\mu}_i \right\|^2 = \underset{S}{\arg\min} \sum_{i=1}^{k} |S_i| \operatorname{Var}(S_i),$$

where $\boldsymbol{\mu}_i$ is the mean (also called centroid) of points in $S_i$, i.e.

$$\boldsymbol{\mu}_i = \frac{1}{|S_i|} \sum_{\mathbf{x} \in S_i} \mathbf{x},$$

$|S_i|$ is the size of $S_i$, and $\|\cdot\|$ is the usual $L^2$ norm. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:

$$\underset{S}{\arg\min} \sum_{i=1}^{k} \frac{1}{|S_i|} \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2 .$$

The equivalence can be deduced from the identity $|S_i| \sum_{\mathbf{x} \in S_i} \left\| \mathbf{x} - \boldsymbol{\mu}_i \right\|^2 = \frac{1}{2} \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2$. Since the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (between-cluster sum of squares, BCSS).[1] This deterministic relationship is also related to the law of total variance in probability theory.
History
The term "k-means" was first used by James MacQueen in 1967,[2] though the idea goes back to Hugo Steinhaus in 1956.[3] The standard algorithm was first proposed by Stuart Lloyd of Bell Labs in 1957 as a technique for pulse-code modulation, although it was not published as a journal article until 1982.[4] In 1965, Edward W. Forgy published essentially the same method, which is why it is sometimes referred to as the Lloyd–Forgy algorithm.[5]
Algorithms
Standard algorithm (naive k-means)
The most common algorithm uses an iterative refinement technique. Due to its ubiquity, it is often called "the k-means algorithm"; it is also referred to as Lloyd's algorithm, particularly in the computer science community. It is sometimes also referred to as "naïve k-means", because there exist much faster alternatives.[6]
Given an initial set of $k$ means $m_1^{(1)}, \ldots, m_k^{(1)}$ (see below), the algorithm proceeds by alternating between two steps:[7]
- Assignment step: Assign each observation to the cluster with the nearest mean: that with the least squared Euclidean distance.[8] (Mathematically, this means partitioning the observations according to the Voronoi diagram generated by the means.)

$$S_i^{(t)} = \left\{ x_p : \left\| x_p - m_i^{(t)} \right\|^2 \le \left\| x_p - m_j^{(t)} \right\|^2 \ \forall j,\ 1 \le j \le k \right\},$$

where each $x_p$ is assigned to exactly one $S^{(t)}$, even if it could be assigned to two or more of them.
- Update step: Recalculate means (centroids) for observations assigned to each cluster:

$$m_i^{(t+1)} = \frac{1}{\left| S_i^{(t)} \right|} \sum_{x_j \in S_i^{(t)}} x_j .$$
The objective function in k-means is the WCSS (within-cluster sum of squares). After each iteration the WCSS does not increase, and so we obtain a nonnegative, monotonically nonincreasing sequence. This guarantees that k-means always converges, but not necessarily to the global optimum.
The algorithm has converged when the assignments no longer change or equivalently, when the WCSS has become stable. The algorithm is not guaranteed to find the optimum.[9]
The algorithm is often presented as assigning objects to the nearest cluster by distance. Using a distance function other than (squared) Euclidean distance may prevent the algorithm from converging. Various modifications of k-means such as spherical k-means and k-medoids have been proposed to allow using other distance measures.
- Pseudocode
The below pseudocode outlines the implementation of the standard k-means clustering algorithm. Initialization of centroids, the distance metric between points and centroids, and the calculation of new centroids are design choices and will vary between implementations. In this example pseudocode, argmin is used to find the index of the minimum value.

def k_means_cluster(k, points):
    # Initialization: choose k centroids (Forgy, Random Partition, etc.)
    centroids = [c1, c2, ..., ck]

    # Initialize clusters list
    clusters = [[] for _ in range(k)]

    # Loop until convergence
    converged = false
    while not converged:
        # Clear previous clusters
        clusters = [[] for _ in range(k)]

        # Assign each point to the "closest" centroid
        for point in points:
            distances_to_each_centroid = [distance(point, centroid) for centroid in centroids]
            cluster_assignment = argmin(distances_to_each_centroid)
            clusters[cluster_assignment].append(point)

        # Calculate new centroids
        # (the standard implementation uses the mean of all points in a
        #  cluster to determine the new centroid)
        new_centroids = [calculate_centroid(cluster) for cluster in clusters]

        converged = (new_centroids == centroids)
        centroids = new_centroids

        if converged:
            return clusters
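For readers who prefer working code over pseudocode, the following is a minimal vectorized sketch of Lloyd's algorithm in Python with NumPy. The function name lloyd_kmeans, the Forgy-style initialization, and the convergence test are illustrative choices, not a reference implementation.

import numpy as np

def lloyd_kmeans(points, k, max_iter=100, seed=0):
    # Minimal Lloyd's algorithm: Forgy initialization, squared-Euclidean assignment.
    rng = np.random.default_rng(seed)
    # Forgy initialization: pick k distinct observations as the initial means
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iter):
        # Assignment step: index of the nearest centroid for every point
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update step: mean of the points assigned to each cluster
        # (an empty cluster keeps its previous centroid)
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):  # means stable, so assignments are stable
            break
        centroids = new_centroids
    return labels, centroids

# Example: two well-separated Gaussian blobs
X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5.0])
labels, centers = lloyd_kmeans(X, k=2)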
Initialization methods
Commonly used initialization methods are Forgy and Random Partition.[10] The Forgy method randomly chooses k observations from the dataset and uses these as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points. The Forgy method tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. According to Hamerly et al., the Random Partition method is generally preferable for algorithms such as the k-harmonic means and fuzzy k-means. For expectation maximization and standard k-means algorithms, the Forgy method of initialization is preferable. A comprehensive study by Celebi et al.,[11] however, found that popular initialization methods such as Forgy, Random Partition, and Maximin often perform poorly, whereas Bradley and Fayyad's approach[12] performs "consistently" in "the best group" and k-means++ performs "generally well".
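As an illustration of the two initialization schemes just described, here is a small sketch assuming NumPy and a numpy.random.Generator instance; the function names are arbitrary.

import numpy as np

def forgy_init(points, k, rng):
    # Forgy: use k randomly chosen observations as the initial means
    return points[rng.choice(len(points), size=k, replace=False)]

def random_partition_init(points, k, rng):
    # Random Partition: randomly assign every observation to a cluster, then
    # use each cluster's centroid as the initial mean
    # (assumes every cluster receives at least one point)
    labels = rng.integers(0, k, size=len(points))
    return np.array([points[labels == j].mean(axis=0) for j in range(k)])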
The algorithm does not guarantee convergence to the global optimum. The result may depend on the initial clusters. As the algorithm is usually fast, it is common to run it multiple times with different starting conditions. However, worst-case performance can be slow: in particular, certain point sets, even in two dimensions, converge in exponential time, that is $2^{\Omega(n)}$.[13] These point sets do not seem to arise in practice: this is corroborated by the fact that the smoothed running time of k-means is polynomial.[14]
The "assignment" step is referred to as the "expectation step", while the "update step" is a maximization step, making this algorithm a variant of the generalized expectation–maximization algorithm.
Complexity
Finding the optimal solution to the k-means clustering problem for observations in d dimensions is:
- NP-hard in general Euclidean space (of d dimensions) even for two clusters,[15] [16]
- NP-hard for a general number of clusters k even in the plane,[17]
- if k and d (the dimension) are fixed, the problem can be exactly solved in time
, where
n is the number of entities to be clustered.
[18] Thus, a variety of heuristic algorithms such as Lloyd's algorithm given above are generally used.
The running time of Lloyd's algorithm (and most variants) is $O(nkdi)$,[19] where:
- n is the number of d-dimensional vectors (to be clustered)
- k is the number of clusters
- i is the number of iterations needed until convergence.
On data that does have a clustering structure, the number of iterations until convergence is often small, and results only improve slightly after the first dozen iterations. Lloyd's algorithm is therefore often considered to be of "linear" complexity in practice, although it is in the worst case superpolynomial when performed until convergence.[20]
- In the worst case, Lloyd's algorithm needs $i = 2^{\Omega(\sqrt{n})}$ iterations, so that the worst-case complexity of Lloyd's algorithm is superpolynomial.
- Lloyd's k-means algorithm has polynomial smoothed running time. It is shown that for an arbitrary set of $n$ points in $[0,1]^d$, if each point is independently perturbed by a normal distribution with mean 0 and variance $\sigma^2$, then the expected running time of the k-means algorithm is bounded by $O(n^{34} k^{34} d^{8} \log^{4}(n) / \sigma^{6})$, which is a polynomial in $n$, $k$, $d$ and $1/\sigma$.
- Better bounds are proven for simple cases. For example, it is shown that the running time of the k-means algorithm is bounded by $O(d n^{4} M^{2})$ for $n$ points in an integer lattice $\{1, \dots, M\}^d$.[21]
Lloyd's algorithm is the standard approach for this problem. However, it spends a lot of processing time computing the distances between each of the k cluster centers and the n data points. Since points usually stay in the same clusters after a few iterations, much of this work is unnecessary, making the naïve implementation very inefficient. Some implementations use caching and the triangle inequality in order to create bounds and accelerate Lloyd's algorithm.[22] [23] [24] [25]
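As a hedged illustration of the triangle-inequality idea (a simplification of the bounds used in the accelerated variants cited above, not their full algorithms): for a point x currently assigned to center c, the triangle inequality gives d(x, c') ≥ d(c, c') − d(x, c), so whenever d(c, c') ≥ 2·d(x, c) the center c' cannot be closer than c and the distance d(x, c') need not be computed. The function name below is illustrative.

import numpy as np

def assign_with_pruning(points, centroids):
    # Assignment step that skips distance computations ruled out by the
    # triangle inequality: if d(c, c') >= 2 d(x, c), then c' cannot beat c.
    center_dists = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    labels = np.empty(len(points), dtype=int)
    for idx, x in enumerate(points):
        best, best_dist = 0, np.linalg.norm(x - centroids[0])
        for j in range(1, len(centroids)):
            if center_dists[best, j] >= 2.0 * best_dist:
                continue  # pruned: centroid j cannot be nearer than the current best
            d = np.linalg.norm(x - centroids[j])
            if d < best_dist:
                best, best_dist = j, d
        labels[idx] = best
    return labels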
Optimal number of clusters
See main article: Determining the number of clusters in a data set. Finding the optimal number of clusters (k) for k-means clustering is a crucial step to ensure that the clustering results are meaningful and useful.[26] Several techniques are available to determine a suitable number of clusters. Here are some commonly used methods:
- Elbow method: This method involves plotting the explained variation as a function of the number of clusters and picking the elbow of the curve as the number of clusters to use.[27] However, the notion of an "elbow" is not well defined, and this criterion is known to be unreliable.[28]
- Silhouette analysis: Silhouette analysis measures the quality of clustering and provides insight into the separation distance between the resulting clusters.[29] A higher silhouette score indicates that the object is well matched to its own cluster and poorly matched to neighboring clusters (a code sketch follows this list).
- Gap statistic: The gap statistic compares the total intra-cluster variation for different values of k with its expected value under a null reference distribution of the data.[30] The optimal k is the value that yields the largest gap statistic.
- Davies–Bouldin index: The Davies–Bouldin index is a measure of how much separation there is between clusters.[31] Lower values of the Davies–Bouldin index indicate a model with better separation.
- Calinski–Harabasz index: This index evaluates clusters based on their compactness and separation. It is calculated as the ratio of between-cluster variance to within-cluster variance, with higher values indicating better-defined clusters.[32]
- Rand index: The Rand index calculates the proportion of agreement between two clusterings, counting the pairs of elements that are consistently assigned to the same or to different clusters in both.[33] Higher values indicate greater similarity and better clustering quality. To provide a more accurate measure, the Adjusted Rand Index (ARI), introduced by Hubert and Arabie in 1985, corrects the Rand index by adjusting for the expected similarity of all pairings due to chance.[34]
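As a brief sketch of the elbow and silhouette heuristics above, assuming scikit-learn is available (the dataset and the range of k are arbitrary): one would pick the k where the WCSS curve bends ("elbow") or where the silhouette score peaks.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wcss = km.inertia_                      # within-cluster sum of squares (elbow curve)
    sil = silhouette_score(X, km.labels_)   # mean silhouette coefficient
    print(f"k={k}  WCSS={wcss:.1f}  silhouette={sil:.3f}")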
Variations
- k-medians clustering uses the median in each dimension instead of the mean, and this way minimizes the $L_1$ norm (Taxicab geometry).
- k-medoids (also: Partitioning Around Medoids, PAM) uses the medoid instead of the mean, and this way minimizes the sum of distances for arbitrary distance functions.
- Fuzzy C-Means Clustering is a soft version of k-means, where each data point has a fuzzy degree of belonging to each cluster.
- Gaussian mixture models trained with expectation–maximization algorithm (EM algorithm) maintains probabilistic assignments to clusters, instead of deterministic assignments, and multivariate Gaussian distributions instead of means.
- k-means++ chooses initial centers in a way that gives a provable upper bound on the WCSS objective (a seeding sketch follows this list).
- The filtering algorithm uses k-d trees to speed up each k-means step.[35]
- Some methods attempt to speed up each k-means step using the triangle inequality.[36]
- Escape local optima by swapping points between clusters.
- The Spherical k-means clustering algorithm is suitable for textual data.[37]
- Hierarchical variants such as Bisecting k-means,[38] X-means clustering[39] and G-means clustering[40] repeatedly split clusters to build a hierarchy, and can also try to automatically determine the optimal number of clusters in a dataset.
- Internal cluster evaluation measures such as cluster silhouette can be helpful at determining the number of clusters.
- Minkowski weighted k-means automatically calculates cluster-specific feature weights, supporting the intuitive idea that a feature may have different degrees of relevance in different clusters.[41] These weights can also be used to re-scale a given data set, increasing the likelihood that a cluster validity index will be optimized at the expected number of clusters.[42]
- Mini-batch k-means: k-means variation using "mini batch" samples for data sets that do not fit into memory.[43]
- Otsu's method
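The k-means++ seeding rule mentioned above can be sketched in a few lines: the first center is chosen uniformly at random, and each subsequent center is sampled with probability proportional to the squared distance to the nearest center chosen so far ("D² weighting"). The function name and random-generator handling below are illustrative, not taken from any particular library.

import numpy as np

def kmeans_pp_init(points, k, rng):
    # First center: uniformly at random among the observations
    centers = [points[rng.integers(len(points))]]
    for _ in range(1, k):
        # Squared distance from every point to its nearest chosen center
        d2 = np.min(
            np.linalg.norm(points[:, None, :] - np.array(centers)[None, :, :], axis=2) ** 2,
            axis=1,
        )
        probs = d2 / d2.sum()  # D^2 weighting
        centers.append(points[rng.choice(len(points), p=probs)])
    return np.array(centers)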
Hartigan–Wong method
Hartigan and Wong's method provides a variation of the k-means algorithm which progresses towards a local minimum of the minimum sum-of-squares problem with different solution updates. The method is a local search that iteratively attempts to relocate a sample into a different cluster as long as this process improves the objective function. When no sample can be relocated into a different cluster with an improvement of the objective, the method stops (in a local minimum). In a similar way to the classical k-means, the approach remains a heuristic since it does not necessarily guarantee that the final solution is globally optimal.
Let $\varphi(S_j)$ be the individual cost of $S_j$, defined by $\varphi(S_j) = \sum_{x \in S_j} (x - \mu_j)^2$, with $\mu_j$ the center of the cluster.
- Assignment step: Hartigan and Wong's method starts by partitioning the points into random clusters $\{S_j\}_{j \in \{1, \dots, k\}}$.
- Update step: Next it determines the $n$ and $m$ for which the following function reaches a maximum:

$$\Delta(m, n, x) = \varphi(S_n) + \varphi(S_m) - \varphi(S_n \setminus \{x\}) - \varphi(S_m \cup \{x\}).$$

For the $x$, $n$ and $m$ that reach this maximum, $x$ moves from the cluster $S_n$ to the cluster $S_m$.
- Termination: The algorithm terminates once $\Delta(m, n, x)$ is less than zero for all $x$, $n$, $m$.
Different move acceptance strategies can be used. In a first-improvement strategy, any improving relocation can be applied, whereas in a best-improvement strategy, all possible relocations are iteratively tested and only the best is applied at each iteration. The former approach favors speed, while the latter approach generally favors solution quality at the expense of additional computational time. The function $\Delta$ used to calculate the result of a relocation can also be efficiently evaluated by using the equality

$$\Delta(x, n, m) = \frac{|S_n|}{|S_n| - 1} \cdot \lVert \mu_n - x \rVert^2 - \frac{|S_m|}{|S_m| + 1} \cdot \lVert \mu_m - x \rVert^2 .$$[44]
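A minimal sketch of the relocation gain above, assuming NumPy arrays for the point and cluster means; the function name and argument order are illustrative. A positive value means moving x from cluster n to cluster m reduces the total within-cluster sum of squares.

import numpy as np

def relocation_gain(x, mu_n, size_n, mu_m, size_m):
    # Closed-form change in WCSS if x leaves cluster n (its current cluster,
    # with size_n > 1) and joins cluster m, per the equality above.
    loss_removed = size_n / (size_n - 1.0) * np.sum((mu_n - x) ** 2)
    loss_added = size_m / (size_m + 1.0) * np.sum((mu_m - x) ** 2)
    return loss_removed - loss_added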
Global optimization and meta-heuristics
The classical k-means algorithm and its variations are known to only converge to local minima of the minimum-sum-of-squares clustering problem defined as

$$\underset{S}{\arg\min} \sum_{i=1}^{k} \sum_{\mathbf{x} \in S_i} \left\| \mathbf{x} - \boldsymbol{\mu}_i \right\|^2 .$$

Many studies have attempted to improve the convergence behavior of the algorithm and maximize the chances of attaining the global optimum (or at least, local minima of better quality). Initialization and restart techniques discussed in the previous sections are one alternative to find better solutions. More recently, global optimization algorithms based on branch-and-bound and semidefinite programming have produced "provably optimal" solutions for datasets with up to 4,177 entities and 20,531 features.[45] As expected, due to the NP-hardness of the underlying optimization problem, the computational time of optimal algorithms for k-means quickly increases beyond this size. Optimal solutions for small- and medium-scale instances still remain valuable as a benchmark tool, to evaluate the quality of other heuristics. To find high-quality local minima within a controlled computational time but without optimality guarantees, other works have explored metaheuristics and other global optimization techniques, e.g., based on incremental approaches and convex optimization,[46] random swaps[47] (i.e., iterated local search), variable neighborhood search[48] and genetic algorithms.[49] [50] It is indeed known that finding better local minima of the minimum sum-of-squares clustering problem can make the difference between failure and success in recovering cluster structures in feature spaces of high dimension.
Discussion
Three key features of k-means that make it efficient are often regarded as its biggest drawbacks:
- Euclidean distance is used as a metric and variance is used as a measure of cluster scatter.
- The number of clusters k is an input parameter: an inappropriate choice of k may yield poor results. That is why, when performing k-means, it is important to run diagnostic checks for determining the number of clusters in the data set.
- Convergence to a local minimum may produce counterintuitive ("wrong") results (see example in Fig.).
A key limitation of k-means is its cluster model. The concept is based on spherical clusters that are separable so that the mean converges towards the cluster center. The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When for example applying k-means with a value of $k = 3$ onto the well-known Iris flower data set, the result often fails to separate the three Iris species contained in the data set. With $k = 2$, the two visible clusters (one containing two species) will be discovered, whereas with $k = 3$ one of the two clusters will be split into two even parts. In fact, $k = 2$ is more appropriate for this data set, despite the data set containing 3 classes. As with any other clustering algorithm, the k-means result makes assumptions that the data satisfy certain criteria. It works well on some data sets, and fails on others.
The result of k-means can be seen as the Voronoi cells of the cluster means. Since data is split halfway between cluster means, this can lead to suboptimal splits, as can be seen in the "mouse" example. The Gaussian models used by the expectation–maximization algorithm (arguably a generalization of k-means) are more flexible by having both variances and covariances. The EM result is thus able to accommodate clusters of variable size much better than k-means, as well as correlated clusters (not in this example). On the other hand, EM requires the optimization of a larger number of free parameters and poses some methodological issues due to vanishing clusters or badly-conditioned covariance matrices. k-means is closely related to nonparametric Bayesian modeling.[51]
Applications
k-means clustering is rather easy to apply to even large data sets, particularly when using heuristics such as Lloyd's algorithm. It has been successfully used in market segmentation, computer vision, and astronomy among many other domains. It often is used as a preprocessing step for other algorithms, for example to find a starting configuration.
Vector quantization
See main article: Vector quantization. Vector quantization, a technique commonly used in signal processing and computer graphics, involves reducing the color palette of an image to a fixed number of colors, known as k. One popular method for achieving vector quantization is through k-means clustering. In this process, k-means is applied to the color space of an image to partition it into k clusters, with each cluster representing a distinct color in the image. This technique is particularly useful in image segmentation tasks, where it helps identify and group similar colors together.

Example: In the field of computer graphics, k-means clustering is often employed for color quantization in image compression. By reducing the number of colors used to represent an image, file sizes can be significantly reduced without significant loss of visual quality. For instance, consider an image with millions of colors. By applying k-means clustering with k set to a smaller number, the image can be represented using a more limited color palette, resulting in a compressed version that consumes less storage space and bandwidth. Other uses of vector quantization include non-random sampling, as k-means can easily be used to choose k different but prototypical objects from a large data set for further analysis.
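A compact sketch of k-means color quantization, assuming scikit-learn and an image stored as an H × W × 3 NumPy array (the function name and parameter defaults are illustrative):

import numpy as np
from sklearn.cluster import KMeans

def quantize_colors(image, k=16):
    # Cluster the pixels in color space, then replace each pixel by its centroid
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    palette = km.cluster_centers_              # the k representative colors
    quantized = palette[km.labels_].reshape(image.shape)
    return quantized, palette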
Cluster analysis
See main article: Cluster analysis. Cluster analysis, a fundamental task in data mining and machine learning, involves grouping a set of data points into clusters based on their similarity. k-means clustering is a popular algorithm used for partitioning data into k clusters, where each cluster is represented by its centroid.
However, the pure k-means algorithm is not very flexible, and as such is of limited use (except for when vector quantization as above is actually the desired use case). In particular, the parameter k is known to be hard to choose (as discussed above) when not given by external constraints. Another limitation is that it cannot be used with arbitrary distance functions or on non-numerical data. For these use cases, many other algorithms are superior.
Example: In marketing, k-means clustering is frequently employed for market segmentation, where customers with similar characteristics or behaviors are grouped together. For instance, a retail company may use k-means clustering to segment its customer base into distinct groups based on factors such as purchasing behavior, demographics, and geographic location. These customer segments can then be targeted with tailored marketing strategies and product offerings to maximize sales and customer satisfaction.
Feature learning
See main article: Feature learning. k-means clustering has been used as a feature learning (or dictionary learning) step, in either (semi-)supervised learning or unsupervised learning.[52] The basic approach is first to train a k-means clustering representation, using the input training data (which need not be labelled). Then, to project any input datum into the new feature space, an "encoding" function, such as the thresholded matrix-product of the datum with the centroid locations, computes the distance from the datum to each centroid, or simply an indicator function for the nearest centroid,[53] or some smooth transformation of the distance.[54] Alternatively, transforming the sample-cluster distance through a Gaussian RBF, obtains the hidden layer of a radial basis function network.[55]
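As one concrete instance of such an encoding, the following is a sketch of the "triangle" activation described by Coates and Ng, where each datum is mapped to k features f_j(x) = max(0, mean_j d_j(x) − d_j(x)), with d_j the distance to centroid j; the function name and array shapes here are illustrative.

import numpy as np

def triangle_encode(X, centroids):
    # Distances from every datum to every centroid, shape (n, k)
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    mean_dist = dists.mean(axis=1, keepdims=True)
    # Centroids farther than the average distance produce zero activations
    return np.maximum(0.0, mean_dist - dists)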
This use of k-means has been successfully combined with simple, linear classifiers for semi-supervised learning in NLP (specifically for named-entity recognition)[56] and in computer vision. On an object recognition task, it was found to exhibit comparable performance with more sophisticated feature learning approaches such as autoencoders and restricted Boltzmann machines. However, it generally requires more data, for equivalent performance, because each data point only contributes to one "feature".
Example: In natural language processing (NLP), k-means clustering has been integrated with simple linear classifiers for semi-supervised learning tasks such as named-entity recognition (NER). By first clustering unlabeled text data using k-means, meaningful features can be extracted to improve the performance of NER models. For instance, k-means clustering can be applied to identify clusters of words or phrases that frequently co-occur in the input text, which can then be used as features for training the NER model. This approach has been shown to achieve comparable performance with more complex feature learning techniques such as autoencoders and restricted Boltzmann machines, albeit with a greater requirement for labeled data.
Recent Developments
Recent advancements in the application of k-means clustering include improvements in initialization techniques, such as the use of k-means++ initialization to select initial cluster centroids in a more effective manner. Additionally, researchers have explored the integration of k-means clustering with deep learning methods, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), to enhance the performance of various tasks in computer vision, natural language processing, and other domains.
Relation to other algorithms
Gaussian mixture model
The slow "standard algorithm" for k-means clustering, and its associated expectation–maximization algorithm, is a special case of a Gaussian mixture model, specifically, the limiting case when fixing all covariances to be diagonal, equal and have infinitesimal small variance.[57] Instead of small variances, a hard cluster assignment can also be used to show another equivalence of k-means clustering to a special case of "hard" Gaussian mixture modelling.[58] This does not mean that it is efficient to use Gaussian mixture modelling to compute k-means, but just that there is a theoretical relationship, and that Gaussian mixture modelling can be interpreted as a generalization of k-means; on the contrary, it has been suggested to use k-means clustering to find starting points for Gaussian mixture modelling on difficult data.
k-SVD
See main article: k-SVD. Another generalization of the k-means algorithm is the k-SVD algorithm, which estimates data points as a sparse linear combination of "codebook vectors". k-means corresponds to the special case of using a single codebook vector, with a weight of 1.[59]
Principal component analysis
See main article: Principal component analysis. The relaxed solution of k-means clustering, specified by the cluster indicators, is given by principal component analysis (PCA).[60] [61] The intuition is that k-means describe spherically shaped (ball-like) clusters. If the data has 2 clusters, the line connecting the two centroids is the best 1-dimensional projection direction, which is also the first PCA direction. Cutting the line at the center of mass separates the clusters (this is the continuous relaxation of the discrete cluster indicator). If the data have three clusters, the 2-dimensional plane spanned by three cluster centroids is the best 2-D projection. This plane is also defined by the first two PCA dimensions. Well-separated clusters are effectively modelled by ball-shaped clusters and thus discovered by k-means. Non-ball-shaped clusters are hard to separate when they are close. For example, two half-moon shaped clusters intertwined in space do not separate well when projected onto PCA subspace. k-means should not be expected to do well on this data.[62] It is straightforward to produce counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.[63]
Mean shift clustering
See main article: Mean shift. Basic mean shift clustering algorithms maintain a set of data points the same size as the input data set. Initially, this set is copied from the input set. All points are then iteratively moved towards the mean of the points surrounding them. By contrast, k-means restricts the set of clusters to k clusters, usually far fewer than the number of points in the input data set, using the mean of all points in the prior cluster that are closer to that point than any other for the centroid (e.g. within the Voronoi partition of each updating point). A mean shift algorithm that is similar to k-means, called likelihood mean shift, replaces the set of points undergoing replacement by the mean of all points in the input set that are within a given distance of the changing set.[64] An advantage of mean shift clustering over k-means is the detection of an arbitrary number of clusters in the data set, as there is not a parameter determining the number of clusters. Mean shift can be much slower than k-means, and still requires selection of a bandwidth parameter.
Independent component analysis
See main article: Independent component analysis. Under sparsity assumptions and when input data is pre-processed with the whitening transformation, k-means produces the solution to the linear independent component analysis (ICA) task. This aids in explaining the successful application of k-means to feature learning.[65]
Bilateral filtering
See main article: Bilateral filter. k-means implicitly assumes that the ordering of the input data set does not matter. The bilateral filter is similar to k-means and mean shift in that it maintains a set of data points that are iteratively replaced by means. However, the bilateral filter restricts the calculation of the (kernel weighted) mean to include only points that are close in the ordering of the input data. This makes it applicable to problems such as image denoising, where the spatial arrangement of pixels in an image is of critical importance.
Similar problems
The set of squared error minimizing cluster functions also includes the k-medoids algorithm, an approach which forces the center point of each cluster to be one of the actual points, i.e., it uses medoids in place of centroids.
Software implementations
Different implementations of the algorithm exhibit performance differences, with the fastest on a test data set finishing in 10 seconds, the slowest taking 25,988 seconds (~7 hours). The differences can be attributed to implementation quality, language and compiler differences, different termination criteria and precision levels, and the use of indexes for acceleration.
Free Software/Open Source
The following implementations are available under Free/Open Source Software licenses, with publicly available source code.
- Accord.NET contains C# implementations for k-means, k-means++ and k-modes.
- ALGLIB contains parallelized C++ and C# implementations for k-means and k-means++.
- AOSP contains a Java implementation for k-means.
- CrimeStat implements two spatial k-means algorithms, one of which allows the user to define the starting locations.
- ELKI contains k-means (with Lloyd and MacQueen iteration, along with different initializations such as k-means++ initialization) and various more advanced clustering algorithms.
- Smile contains k-means and various other algorithms, with result visualization (for Java, Kotlin and Scala).
- Julia contains a k-means implementation in the JuliaStats Clustering package.
- KNIME contains nodes for k-means and k-medoids.
- Mahout contains a MapReduce based k-means.
- mlpack contains a C++ implementation of k-means.
- Octave contains k-means.
- OpenCV contains a k-means implementation.
- Orange includes a component for k-means clustering with automatic selection of k and cluster silhouette scoring.
- PSPP contains k-means; the QUICK CLUSTER command performs k-means clustering on the dataset.
- R contains three k-means variations.
- SciPy and scikit-learn contain multiple k-means implementations.
- Spark MLlib implements a distributed k-means algorithm.
- Torch contains an unsup package that provides k-means clustering.
- Weka contains k-means and x-means.
Proprietary
The following implementations are available under proprietary license terms, and may not have publicly available source code.
See also
Notes and References
- Kriegel . Hans-Peter . Hans-Peter Kriegel . Schubert . Erich . Zimek . Arthur . Arthur Zimek . 2016 . The (black) art of runtime evaluation: Are we comparing algorithms or implementations? . Knowledge and Information Systems . 52 . 2 . 341–378 . 10.1007/s10115-016-1004-2 . 40772241 . 0219-1377 .
- MacQueen . J. B. . 1967 . Some Methods for classification and Analysis of Multivariate Observations . Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability . University of California Press . 1 . 281 - 297 . 0214227 . 0214.46201 . 2009-04-07 .
- Steinhaus . Hugo . Hugo Steinhaus . 1957 . Sur la division des corps matériels en parties . Bull. Acad. Polon. Sci. . fr . 4 . 12 . 801 - 804 . 0090073 . 0079.16403 .
- Lloyd . Stuart P. . 1957 . Least square quantization in PCM . Bell Telephone Laboratories Paper . Published in journal much later: Lloyd . Stuart P. . 1982 . Least squares quantization in PCM . . 28 . 2 . 129 - 137 . 10.1109/TIT.1982.1056489 . 2009-04-15 . 10.1.1.131.1338 . 10833328 .
- Edward W. . Forgy . 1965 . Cluster analysis of multivariate data: efficiency versus interpretability of classifications . Biometrics . 21 . 3 . 768–769 . 2528559 .
- Book: Pelleg. Dan . Moore. Andrew . Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining . Accelerating exact k -means algorithms with geometric reasoning . 1999 . http://portal.acm.org/citation.cfm?doid=312129.312248 . en. San Diego, California, United States . ACM Press . 277–281 . 10.1145/312129.312248 . 9781581131437 . 13907420.
- Book: MacKay, David . Information Theory, Inference and Learning Algorithms . Cambridge University Press . 2003 . 978-0-521-64298-9 . 284 - 292 . Chapter 20. An Example Inference Task: Clustering . 2012999 . mackay2003 . David MacKay (scientist) . http://www.inference.phy.cam.ac.uk/mackay/itprnn/ps/284.292.pdf .
- Since the square root is a monotone function, this also is the minimum Euclidean distance assignment.
- Hartigan . J. A. . Wong . M. A. . 1979 . Algorithm AS 136: A k-Means Clustering Algorithm . . 28 . 1 . 100 - 108 . 2346830 .
- Hamerly . Greg . Elkan . Charles . 2002 . Alternatives to the k-means algorithm that find better clusterings . Proceedings of the eleventh international conference on Information and knowledge management (CIKM) .
- Celebi . M. E. . Kingravi . H. A. . Vela . P. A. . 2013 . A comparative study of efficient initialization methods for the k-means clustering algorithm . . 40 . 1 . 200 - 210 . 1209.1960 . 10.1016/j.eswa.2012.07.021 . 6954668 .
- Bradley . Paul S. . Fayyad . Usama M. . Usama Fayyad . 1998 . Refining Initial Points for k-Means Clustering . Proceedings of the Fifteenth International Conference on Machine Learning .
- Vattani . A. . 2011 . k-means requires exponentially many iterations even in the plane . . 45 . 4 . 596 - 616 . 10.1007/s00454-011-9340-1 . 42683406 . free .
- Arthur . David . Manthey . B. . Roeglin . H. . 2009 . k-means has polynomial smoothed complexity . Proceedings of the 50th Symposium on Foundations of Computer Science (FOCS) . 0904.1113 .
- Aloise . D. . Deshpande . A. . Hansen . P. . Popat . P. . 2009 . NP-hardness of Euclidean sum-of-squares clustering . . 75 . 2 . 245 - 249 . 10.1007/s10994-009-5103-0 . free .
- Dasgupta . S. . Freund . Y. . July 2009 . Random Projection Trees for Vector Quantization . IEEE Transactions on Information Theory . 55 . 7 . 3229–42 . 0805.1390 . 10.1109/TIT.2009.2021326 . 666114 .
- Book: Mahajan . Meena . Nimbhorkar . Prajakta . Varadarajan . Kasturi . WALCOM: Algorithms and Computation . The Planar k-Means Problem is NP-Hard . 2009 . Lecture Notes in Computer Science . 5431 . 274–285 . 10.1007/978-3-642-00202-1_24 . 978-3-642-00201-4 .
- Inaba . M. . Katoh . N. . Imai . H. . 1994 . Applications of weighted Voronoi diagrams and randomization to variance-based k-clustering . . 332–9 . 10.1145/177424.178042 . free .
- Book: Introduction to information retrieval . Manning . Christopher D. . 2008 . Cambridge University Press . Raghavan . Prabhakar . Schütze . Hinrich . 978-0521865715 . 190786122 .
- Book: Arthur . David . Vassilvitskii . Sergei . Proceedings of the twenty-second annual symposium on Computational geometry . How slow is the k -means method? . 2006-01-01 . SCG '06 . ACM . 144–153 . 10.1145/1137856.1137880 . 978-1595933409 . 3084311 .
- Web site: A theoretical analysis of Lloyd's algorithm for k-means clustering . Abhishek . Bhowmick . 2009 . https://web.archive.org/web/20151208140946/https://gautam5.cse.iitk.ac.in/opencs/sites/default/files/final.pdf . 2015-12-08. See also here.
- Book: Phillips, Steven J. . Acceleration of k-Means and Related Clustering Algorithms . 2409 . 2002 . Springer . 978-3-540-43977-6 . Mount . David M. . Lecture Notes in Computer Science . 166–177 . 10.1007/3-540-45643-0_13 . Stein . Clifford . Acceleration of K-Means and Related Clustering Algorithms .
- Elkan . Charles . 2003 . Using the triangle inequality to accelerate k-means . Proceedings of the Twentieth International Conference on Machine Learning (ICML) .
- Book: Hamerly, Greg . Making k-means even faster . Proceedings of the 2010 SIAM International Conference on Data Mining . 2010 . 978-0-89871-703-7 . 130–140 . 10.1137/1.9781611972801.12.
- Book: Hamerly . Greg . Drake . Jonathan . Partitional Clustering Algorithms . Accelerating Lloyd's Algorithm for k-Means Clustering . 2015 . 41–78 . 10.1007/978-3-319-09259-1_2 . 978-3-319-09258-4 .
- Abiodun M. Ikotun, Absalom E. Ezugwu, Laith Abualigah, Belal Abuhaija, Jia Heming, K-means clustering algorithms: A comprehensive review, variants analysis, and advances in the era of big data, Information Sciences, Volume 622, 2023, Pages 178–210, ISSN 0020-0255, https://doi.org/10.1016/j.ins.2022.11.139.
- Thorndike, Robert L. (1953). "Who Belongs in the Family?". Psychometrika. 18 (4): 267–276. doi:10.1007/BF02289263. S2CID 120467216.
- Schubert, Erich (2023-06-22). "Stop using the elbow criterion for k-means and how to choose the number of clusters instead". ACM SIGKDD Explorations Newsletter. 25 (1): 36–42. arXiv:2212.12189. doi:10.1145/3606274.3606278. ISSN 1931-0145.
- Peter J. Rousseeuw (1987). "Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis". Computational and Applied Mathematics. 20: 53–65. doi:10.1016/0377-0427(87)90125-7.
- Robert Tibshirani; Guenther Walther; Trevor Hastie (2001). "Estimating the number of clusters in a data set via the gap statistic". Journal of the Royal Statistical Society, Series B. 63 (2): 411–423. doi:10.1111/1467-9868.00293. S2CID 59738652.
- Davies, David L.; Bouldin, Donald W. (1979). "A Cluster Separation Measure". IEEE Transactions on Pattern Analysis and Machine Intelligence. PAMI-1 (2): 224–227. doi:10.1109/TPAMI.1979.4766909. S2CID 13254783.
- Caliński, Tadeusz; Harabasz, Jerzy (1974). "A dendrite method for cluster analysis". Communications in Statistics. 3 (1): 1–27. doi:10.1080/03610927408827101.
- W. M. Rand (1971). "Objective criteria for the evaluation of clustering methods". Journal of the American Statistical Association. 66 (336). American Statistical Association: 846–850. doi:10.2307/2284239. JSTOR 2284239.
- Hubert, L., & Arabie, P. (1985).Hubert, L., & Arabie, P. (1985). Comparing partitions. Journal of Classification, 2(1), 193-218.https://doi.org/10.1007/BF01908075
- Kanungo . Tapas . Mount . David M. . David Mount . Nathan Netanyahu . Netanyahu . Nathan S. . Piatko . Christine D.. Christine Piatko . Silverman . Ruth . Wu . Angela Y. . 2002 . An efficient k-means clustering algorithm: Analysis and implementation . IEEE Transactions on Pattern Analysis and Machine Intelligence . 24 . 7 . 881 - 892 . 10.1109/TPAMI.2002.1017616 . 12003435 . 2009-04-24 .
- Drake . Jonathan . 2012 . Accelerated k-means with adaptive distance bounds . The 5th NIPS Workshop on Optimization for Machine Learning, OPT2012 .
- Dhillon . I. S. . Modha . D. M. . 2001 . Concept decompositions for large sparse text data using clustering . Machine Learning . 42 . 1 . 143 - 175 . 10.1023/a:1007612920971 . free .
- Steinbach . M. . Karypis . G. . Kumar . V. . 2000 . "A comparison of document clustering techniques". In . KDD Workshop on Text Mining . 400 . 1. 525–526 .
- Pelleg, D.; & Moore, A. W. (2000, June). "X-means: Extending k-means with Efficient Estimation of the Number of Clusters". In ICML, Vol. 1
- Hamerly . Greg . Elkan . Charles . 2004 . Learning the k in k-means . . 16 . 281 .
- Amorim . R. C. . Mirkin . B. . 2012 . Minkowski Metric, Feature Weighting and Anomalous Cluster Initialisation in k-Means Clustering . Pattern Recognition . 45 . 3 . 1061 - 1075 . 10.1016/j.patcog.2011.08.012 .
- Amorim . R. C. . Hennig . C. . 2015 . Recovering the number of clusters in data sets with noise features using feature rescaling factors . Information Sciences . 324 . 126 - 145 . 1602.06989 . 10.1016/j.ins.2015.06.039 . 315803 .
- Sculley . David . 2010 . Web-scale k-means clustering . ACM . 1177–1178 . 2016-12-21 . Proceedings of the 19th international conference on World Wide Web .
- Web site: Hartigan's Method: k-means Clustering without Voronoi . Telgarsky . Matus .
- Piccialli . Veronica . Sudoso . Antonio M. . Wiegele . Angelika . 2022-03-28 . SOS-SDP: An Exact Solver for Minimum Sum-of-Squares Clustering . INFORMS Journal on Computing . 34 . 4 . en . 2144–2162 . 10.1287/ijoc.2022.1166 . 2104.11542 . 233388043 . 1091-9856.
- Bagirov . A. M. . Taheri . S. . Ugon. J.. 2016 . Nonsmooth DC programming approach to the minimum sum-of-squares clustering problems . Pattern Recognition . 53 . 12 - 24 . 10.1016/j.patcog.2015.11.011 . 2016PatRe..53...12B .
- Fränti . Pasi . 2018 . Efficiency of random swap clustering . Journal of Big Data . 5 . 1. 1 - 21 . 10.1186/s40537-018-0122-y. free .
- Hansen . P. . Mladenovic . N. . 2001 . J-Means: A new local search heuristic for minimum sum of squares clustering. Pattern Recognition . 34 . 2. 405 - 413 . 10.1016/S0031-3203(99)00216-2 . 2001PatRe..34..405H .
- Krishna . K. . Murty . M. N. . 1999 . Genetic k-means algorithm . IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics . 29 . 3. 433 - 439 . 10.1109/3477.764879. 18252317 .
- Gribel . Daniel . Vidal . Thibaut. 2019 . HG-means: A scalable hybrid metaheuristic for minimum sum-of-squares clustering . Pattern Recognition . 88 . 569 - 583 . 10.1016/j.patcog.2018.12.022 . 1804.09813 . 13746584 .
- Book: Kulis . Brian . Jordan . Michael I. . 2012-06-26 . Revisiting k-means: new algorithms via Bayesian nonparametrics . https://icml.cc/2012/papers/291.pdf . ICML . 1131–1138 . 9781450312851 .
- Book: 2012 . Learning feature representations with k-means . Neural Networks: Tricks of the Trade . Springer . https://cs.stanford.edu/~acoates/papers/coatesng_nntot2012.pdf . Ng . Andrew Y. . Coates . Adam . Montavon, G. . Orr, G. B. . Müller, K.-R. . Klaus-Robert Müller.
- Csurka . Gabriella . Dance . Christopher C. . Fan . Lixin . Willamowski . Jutta . Bray . Cédric . 2004 . Visual categorization with bags of keypoints . ECCV Workshop on Statistical Learning in Computer Vision .
- Coates . Adam . Lee . Honglak . Ng . Andrew Y. . 2011 . An analysis of single-layer networks in unsupervised feature learning . dead . International Conference on Artificial Intelligence and Statistics (AISTATS) . https://web.archive.org/web/20130510120705/http://www.stanford.edu/~acoates/papers/coatesleeng_aistats_2011.pdf . 2013-05-10 .
- Schwenker . Friedhelm . Kestler . Hans A. . Palm . Günther . 2001 . Three learning phases for radial-basis-function networks . Neural Networks . 14 . 4–5 . 439–458 . 10.1.1.109.312 . 10.1016/s0893-6080(01)00027-2 . 11411631 .
- Lin . Dekang . Wu . Xiaoyun . 2009 . Phrase clustering for discriminative learning . Annual Meeting of the ACL and IJCNLP . 1030–1038 .
- Book: Numerical Recipes: The Art of Scientific Computing . Press . W. H. . Teukolsky . S. A. . Vetterling . W. T. . Flannery . B. P. . Cambridge University Press . 2007 . 978-0-521-88068-8 . 3rd . New York (NY) . Section 16.1. Gaussian Mixture Models and k-Means Clustering . http://apps.nrbook.com/empanel/index.html#pg=842 .
- Book: Kevin P. Murphy. Machine learning : a probabilistic perspective. MIT Press. 2012. 978-0-262-30524-2. Cambridge, Mass.. 810414751.
- Aharon . Michal. Michal Aharon . Elad . Michael . Bruckstein . Alfred . 2006 . K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation . IEEE Transactions on Signal Processing . 54 . 11 . 4311 . 2006ITSP...54.4311A . 10.1109/TSP.2006.881199 . 7477309 .
- Hongyuan . Zha . Chris . Ding . Ming . Gu . Xiaofeng . He . Horst D. . Simon . December 2001 . Spectral Relaxation for k-means Clustering . Neural Information Processing Systems Vol.14 (NIPS 2001) . 1057–1064 .
- Chris . Ding . Xiaofeng . He . July 2004 . K-means Clustering via Principal Component Analysis . 225–232 . Proceedings of International Conference on Machine Learning (ICML 2004) .
- Drineas . Petros . Alan M. . Frieze . Ravi . Kannan . Santosh . Vempala . Vishwanathan . Vinay . 2004 . Clustering large graphs via the singular value decomposition . Machine Learning . 56 . 1–3 . 9–33 . 10.1023/b:mach.0000033113.59016.96 . 5892850 . 2012-08-02 . free .
- 1410.6801 . cs.DS . Michael B. . Cohen . Sam . Elder . Dimensionality reduction for k-means clustering and low rank approximation (Appendix B) . 2014. Cameron . Musco . Christopher . Musco . Madalina . Persu .
- Little . Max A. . Jones . Nick S. . 2011 . Generalized methods and solvers for noise removal from piecewise constant signals. I. Background theory . . 467 . 2135 . 3088–3114 . 10.1098/rspa.2010.0671 . 3191861 . 22003312.
- Alon . Vinnikov . Shai . Shalev-Shwartz . 2014 . K-means Recovers ICA Filters when Independent Components are Sparse . Proceedings of the International Conference on Machine Learning (ICML 2014) .