k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster. The problem is computationally difficult (NP-hard).
k-means clustering tends to find clusters of comparable spatial extent, while the expectation-maximization mechanism allows clusters to have different shapes.
Given a set of observations $(x_1, x_2, \ldots, x_n)$, where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k sets ($k \le n$) $S = \{S_1, S_2, \ldots, S_k\}$ so as to minimize the within-cluster sum of squares (WCSS):
$\underset{\mathbf{S}}{\operatorname{arg\,min}} \sum_{i=1}^{k} \sum_{\mathbf{x}_j \in S_i} \left\| \mathbf{x}_j - \boldsymbol{\mu}_i \right\|^2$
where $\mu_i$ is the mean of the points in $S_i$.
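As a concrete illustration of the objective, the following is a minimal sketch that evaluates the WCSS for a given partition; the function name `wcss` and the representation of clusters as lists of point tuples are illustrative choices, not part of the original text.

```python
# Minimal sketch: compute the within-cluster sum of squares (WCSS)
# for a partition given as a list of clusters, each a list of
# d-dimensional points (plain Python, no external dependencies).

def wcss(clusters):
    """Sum over clusters of squared Euclidean distances from each
    point to its cluster mean (the mu_i in the formula above)."""
    total = 0.0
    for points in clusters:
        d = len(points[0])
        # mu_i: component-wise mean of the points in S_i
        mu = [sum(p[k] for p in points) / len(points) for k in range(d)]
        total += sum(sum((p[k] - mu[k]) ** 2 for k in range(d))
                     for p in points)
    return total
```

For example, `wcss([[(0, 0), (0, 2)], [(10, 0)]])` evaluates the objective for two clusters whose means are $(0, 1)$ and $(10, 0)$.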
Algorithm
The most common algorithm uses an iterative refinement heuristic. Given an initial set of $k$ means $m^{(1)}_1, \ldots, m^{(1)}_k$, it alternates between two steps:
1. Assignment step: $S_i^{(t)} = \left\{ x_p : \left\| x_p - m^{(t)}_i \right\|^2 \le \left\| x_p - m^{(t)}_j \right\|^2 \ \forall j,\ 1 \le j \le k \right\}$,
where each $x_p$ is assigned to exactly one $S^{(t)}$, even if it could be assigned to two or more of them.
2. Update step: Calculate the new means to be the centroids of the observations in the new clusters.
$m^{(t+1)}_i = \frac{1}{|S^{(t)}_i|} \sum_{x_j \in S^{(t)}_i} x_j$
Since the arithmetic mean is a least-squares estimator, this step also minimizes the WCSS objective.
The algorithm has converged when the assignments no longer change. Since both steps reduce the WCSS objective and only a finite number of such partitions exists, the algorithm must converge to a (local) optimum. There is no guarantee that this is the global optimum.
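The two alternating steps above can be sketched as follows, using NumPy. This is a minimal illustration of the heuristic (commonly known as Lloyd's algorithm), not a production implementation: the function name `k_means`, the initialization (simply the first $k$ points; in practice random or k-means++ seeding is common), and the iteration cap are all assumptions of this sketch.

```python
import numpy as np

def k_means(X, k, max_iter=100):
    """Alternate assignment and update steps until assignments
    stabilize. X is an (n, d) array; returns labels and means."""
    means = X[:k].astype(float).copy()  # initial means m_i^(1): first k points (an assumption)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        # Assignment step: each point goes to the cluster with the
        # nearest current mean (squared Euclidean distance).
        dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each new mean is the centroid of the
        # observations assigned to that cluster (empty clusters
        # keep their previous mean).
        new_means = np.array([X[labels == i].mean(axis=0)
                              if np.any(labels == i) else means[i]
                              for i in range(k)])
        if np.allclose(new_means, means):  # assignments have stabilized
            break
        means = new_means
    return labels, means
```

Because the result is only a local optimum, running the procedure from several initializations and keeping the lowest-WCCS outcome is a common practical remedy.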