diff --git a/docs/mllib-clustering.md b/docs/mllib-clustering.md
index d5f6ae379a85e8c0fbb07434ae9ff97b098af781..8990e95796b67ff978653d175791ccbd4a083973 100644
--- a/docs/mllib-clustering.md
+++ b/docs/mllib-clustering.md
@@ -24,13 +24,11 @@ variant of the [k-means++](http://en.wikipedia.org/wiki/K-means%2B%2B) method
 called [kmeans||](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf).
 The implementation in `spark.mllib` has the following parameters:
 
-* *k* is the number of desired clusters.
+* *k* is the number of desired clusters. Note that it is possible for fewer than k clusters to be returned, for example, if there are fewer than k distinct points to cluster.
 * *maxIterations* is the maximum number of iterations to run.
 * *initializationMode* specifies either random initialization or
 initialization via k-means\|\|.
-* *runs* is the number of times to run the k-means algorithm (k-means is not
-guaranteed to find a globally optimal solution, and when run multiple times on
-a given dataset, the algorithm returns the best clustering result).
+* *runs* is deprecated and has no effect since Spark 2.0.0.
 * *initializationSteps* determines the number of steps in the k-means\|\| algorithm.
 * *epsilon* determines the distance threshold within which we consider k-means to have converged.
 * *initialModel* is an optional set of cluster centers used for initialization. If this parameter is supplied, only one run is performed.
diff --git a/examples/src/main/python/mllib/k_means_example.py b/examples/src/main/python/mllib/k_means_example.py
index 5c397e62ef10e8d199f9adbf52a4479d7b6fe0c8..d6058f45020c4f4f4a909f4994b7267b963cafa3 100644
--- a/examples/src/main/python/mllib/k_means_example.py
+++ b/examples/src/main/python/mllib/k_means_example.py
@@ -36,8 +36,7 @@ if __name__ == "__main__":
     parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))
 
     # Build the model (cluster the data)
-    clusters = KMeans.train(parsedData, 2, maxIterations=10,
-                            runs=10, initializationMode="random")
+    clusters = KMeans.train(parsedData, 2, maxIterations=10, initializationMode="random")
 
     # Evaluate clustering by computing Within Set Sum of Squared Errors
     def error(point):
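
For reference, here is a minimal, self-contained sketch of what the updated example looks like once the patch is applied: it trains k-means without the removed `runs` argument and evaluates the result with the Within Set Sum of Squared Errors (WSSSE). The surrounding boilerplate (imports, `SparkContext` setup, the `data/mllib/kmeans_data.txt` sample path) is assumed from the rest of the example file, which this diff only partially shows.

```python
from math import sqrt

from numpy import array
from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

if __name__ == "__main__":
    sc = SparkContext(appName="KMeansExample")

    # Load and parse the data: one space-separated vector per line.
    data = sc.textFile("data/mllib/kmeans_data.txt")
    parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))

    # Build the model (cluster the data); `runs` has no effect since Spark 2.0.0
    # and is no longer passed.
    clusters = KMeans.train(parsedData, 2, maxIterations=10, initializationMode="random")

    # Evaluate clustering by computing Within Set Sum of Squared Errors:
    # the distance from each point to its assigned cluster center.
    def error(point):
        center = clusters.centers[clusters.predict(point)]
        return sqrt(sum([x ** 2 for x in (point - center)]))

    WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
    print("Within Set Sum of Squared Error = " + str(WSSSE))

    sc.stop()
```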