[SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide
    ## What changes were proposed in this pull request?
    
    Made DataFrame-based API primary
    * Spark doc menu bar and other places now link to ml-guide.html, not mllib-guide.html
    * mllib-guide.html keeps RDD-specific list of features, with a link at the top redirecting people to ml-guide.html
    * ml-guide.html includes a "maintenance mode" announcement about the RDD-based API
      * **Reviewers: please check this carefully**
* (minor) Titles for the DataFrame-based API no longer include the "- spark.ml" suffix. Titles for the RDD-based API have a "- RDD-based API" suffix.
    * Moved migration guide to ml-guide from mllib-guide
      * Also moved past guides from mllib-migration-guides to ml-migration-guides, with a redirect link on mllib-migration-guides
      * **Reviewers**: I did not change any of the content of the migration guides.
    
    Reorganized DataFrame-based guide:
    * ml-guide.html mimics the old mllib-guide.html page in terms of content: overview, migration guide, etc.
    * Moved Pipeline description into ml-pipeline.html and moved tuning into ml-tuning.html
      * **Reviewers**: I did not change the content of these guides, except some intro text.
    * Sidebar remains the same, but with pipeline and tuning sections added
    
    Other:
    * ml-classification-regression.html: Moved text about linear methods to new section in page
    
    ## How was this patch tested?
    
    Generated docs locally
    
    Author: Joseph K. Bradley <joseph@databricks.com>
    
    Closes #14213 from jkbradley/ml-guide-2.0.
mllib-clustering.md
---
layout: global
title: Clustering - RDD-based API
displayTitle: Clustering - RDD-based API
---

Clustering is an unsupervised learning problem whereby we aim to group subsets of entities with one another based on some notion of similarity. Clustering is often used for exploratory analysis and/or as a component of a hierarchical supervised learning pipeline (in which distinct classifiers or regression models are trained for each cluster).

The `spark.mllib` package supports the following models:

* Table of contents
{:toc}

## K-means

K-means is one of the most commonly used clustering algorithms that clusters the data points into a predefined number of clusters. The spark.mllib implementation includes a parallelized variant of the k-means++ method called kmeans||. The implementation in spark.mllib has the following parameters:

  • k is the number of desired clusters.
  • maxIterations is the maximum number of iterations to run.
  • initializationMode specifies either random initialization or initialization via k-means||.
  • runs is the number of times to run the k-means algorithm (k-means is not guaranteed to find a globally optimal solution, and when run multiple times on a given dataset, the algorithm returns the best clustering result).
  • initializationSteps determines the number of steps in the k-means|| algorithm.
  • epsilon determines the distance threshold within which we consider k-means to have converged.
  • initialModel is an optional set of cluster centers used for initialization. If this parameter is supplied, only one run is performed.
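
The bundled examples below are the authoritative reference; as a quick orientation, here is a minimal sketch of how these parameters map onto the builder API, assuming an existing `SparkContext` named `sc` (as in `spark-shell`) and a hypothetical whitespace-delimited input file:

```scala
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Assumes an existing SparkContext `sc` and a hypothetical file of
// whitespace-separated numeric features, one point per line.
val parsedData = sc.textFile("data/mllib/kmeans_data.txt")
  .map(s => Vectors.dense(s.split(' ').map(_.toDouble)))
  .cache()

// The parameters listed above correspond to setters on the builder.
val model = new KMeans()
  .setK(2)                                        // k
  .setMaxIterations(20)                           // maxIterations
  .setInitializationMode(KMeans.K_MEANS_PARALLEL) // initializationMode ("k-means||")
  .setEpsilon(1e-4)                               // epsilon
  .run(parsedData)

// computeCost returns the Within Set Sum of Squared Errors (WSSSE).
println(s"WSSSE = ${model.computeCost(parsedData)}")
```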

**Examples**

The following code snippets can be executed in `spark-shell`.

In the following example, after loading and parsing data, we use the KMeans object to cluster the data into two clusters. The number of desired clusters is passed to the algorithm. We then compute the Within Set Sum of Squared Errors (WSSSE). You can reduce this error measure by increasing k. In fact, the optimal k is usually one where there is an "elbow" in the WSSSE graph.
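
One hedged way to look for that elbow is to sweep over candidate values of k and print the WSSSE for each, reusing the hypothetical `parsedData` RDD from the sketch above; the elbow is where the curve stops dropping sharply:

```scala
import org.apache.spark.mllib.clustering.KMeans

// KMeans.train(data, k, maxIterations) is a convenience wrapper around
// the builder shown earlier.
(2 to 10).foreach { k =>
  val wssse = KMeans.train(parsedData, k, 20).computeCost(parsedData)
  println(s"k=$k  WSSSE=$wssse")
}
```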

Refer to the KMeans Scala docs and KMeansModel Scala docs for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/KMeansExample.scala %}

All of MLlib's methods use Java-friendly types, so you can import and call them from Java the same way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by calling `.rdd()` on your `JavaRDD` object. A self-contained application example that is equivalent to the provided example in Scala is given below:

Refer to the KMeans Java docs and KMeansModel Java docs for details on the API.

{% include_example java/org/apache/spark/examples/mllib/JavaKMeansExample.java %}

The following examples can be tested in the PySpark shell.

In the following example, after loading and parsing data, we use the KMeans object to cluster the data into two clusters. The number of desired clusters is passed to the algorithm. We then compute the Within Set Sum of Squared Errors (WSSSE). You can reduce this error measure by increasing k. In fact, the optimal k is usually one where there is an "elbow" in the WSSSE graph.

Refer to the KMeans Python docs and KMeansModel Python docs for more details on the API.

{% include_example python/mllib/k_means_example.py %}

## Gaussian mixture

A Gaussian Mixture Model represents a composite distribution whereby points are drawn from one of k Gaussian sub-distributions, each with its own probability. The spark.mllib implementation uses the expectation-maximization algorithm to induce the maximum-likelihood model given a set of samples. The implementation has the following parameters:

  • k is the number of desired clusters.
  • convergenceTol is the maximum change in log-likelihood at which we consider convergence achieved.
  • maxIterations is the maximum number of iterations to perform without reaching convergence.
  • initialModel is an optional starting point from which to start the EM algorithm. If this parameter is omitted, a random starting point will be constructed from the data.
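
As with k-means, these parameters correspond to setters on the builder. A minimal sketch, again assuming an existing `SparkContext` `sc` and a hypothetical whitespace-delimited input file:

```scala
import org.apache.spark.mllib.clustering.GaussianMixture
import org.apache.spark.mllib.linalg.Vectors

// Hypothetical input: one point per line, whitespace-separated features.
val parsedData = sc.textFile("data/mllib/gmm_data.txt")
  .map(s => Vectors.dense(s.trim.split(' ').map(_.toDouble)))
  .cache()

// Fit a two-component mixture with the parameters described above.
val gmm = new GaussianMixture()
  .setK(2)                 // k
  .setConvergenceTol(0.01) // convergenceTol
  .setMaxIterations(100)   // maxIterations
  .run(parsedData)

// Each component has a weight, a mean vector, and a covariance matrix.
for (i <- 0 until gmm.k) {
  println(s"weight=${gmm.weights(i)}")
  println(s"mu=${gmm.gaussians(i).mu}")
  println(s"sigma=\n${gmm.gaussians(i).sigma}")
}
```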

**Examples**

In the following example, after loading and parsing data, we use a GaussianMixture object to cluster the data into two clusters. The number of desired clusters is passed to the algorithm. We then output the parameters of the mixture model.

Refer to the GaussianMixture Scala docs and GaussianMixtureModel Scala docs for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/GaussianMixtureExample.scala %}

All of MLlib's methods use Java-friendly types, so you can import and call them from Java the same way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by calling `.rdd()` on your `JavaRDD` object. A self-contained application example that is equivalent to the provided example in Scala is given below:

Refer to the GaussianMixture Java docs and GaussianMixtureModel Java docs for details on the API.

{% include_example java/org/apache/spark/examples/mllib/JavaGaussianMixtureExample.java %}

In the following example, after loading and parsing data, we use a GaussianMixture object to cluster the data into two clusters. The number of desired clusters is passed to the algorithm. We then output the parameters of the mixture model.

Refer to the GaussianMixture Python docs and GaussianMixtureModel Python docs for more details on the API.

{% include_example python/mllib/gaussian_mixture_example.py %}

## Power iteration clustering (PIC)

Power iteration clustering (PIC) is a scalable and efficient algorithm for clustering vertices of a graph given pairwise similarities as edge properties, described in Lin and Cohen, Power Iteration Clustering. It computes a pseudo-eigenvector of the normalized affinity matrix of the graph via power iteration and uses it to cluster vertices. spark.mllib includes an implementation of PIC using GraphX as its backend. It takes an RDD of (srcId, dstId, similarity) tuples and outputs a model with the clustering assignments. The similarities must be nonnegative. PIC assumes that the similarity measure is symmetric. A pair (srcId, dstId), regardless of ordering, should appear at most once in the input data. If a pair is missing from the input, its similarity is treated as zero. spark.mllib's PIC implementation takes the following (hyper-)parameters:

  • k: number of clusters
  • maxIterations: maximum number of power iterations
  • initializationMode: initialization mode. This can be either "random", which is the default, to use a random vector as vertex properties, or "degree" to use the normalized sum of similarities.
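
A minimal sketch of the API, assuming an existing `SparkContext` `sc`; the tiny affinity matrix below is made up for illustration:

```scala
import org.apache.spark.mllib.clustering.PowerIterationClustering

// Toy affinity matrix: (srcId, dstId, similarity) tuples. Similarities are
// nonnegative, the measure is symmetric, and each unordered pair appears once.
val similarities = sc.parallelize(Seq(
  (0L, 1L, 0.9), (0L, 2L, 0.9), (1L, 2L, 0.9),
  (3L, 4L, 0.9), (4L, 5L, 0.9), (0L, 3L, 0.1)
))

val model = new PowerIterationClustering()
  .setK(2)                         // k
  .setMaxIterations(10)            // maxIterations
  .setInitializationMode("degree") // initializationMode
  .run(similarities)

// Each assignment pairs a vertex id with its cluster label.
model.assignments.collect().foreach { a =>
  println(s"vertex ${a.id} -> cluster ${a.cluster}")
}
```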

**Examples**

In the following, we show code snippets to demonstrate how to use PIC in spark.mllib.

PowerIterationClustering implements the PIC algorithm. It takes an RDD of (srcId: Long, dstId: Long, similarity: Double) tuples representing the affinity matrix. Calling PowerIterationClustering.run returns a PowerIterationClusteringModel, which contains the computed clustering assignments.

Refer to the PowerIterationClustering Scala docs and PowerIterationClusteringModel Scala docs for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala %}

PowerIterationClustering implements the PIC algorithm. It takes a JavaRDD of (srcId: Long, dstId: Long, similarity: Double) tuples representing the affinity matrix. Calling PowerIterationClustering.run returns a PowerIterationClusteringModel, which contains the computed clustering assignments.

Refer to the PowerIterationClustering Java docs and PowerIterationClusteringModel Java docs for details on the API.

{% include_example java/org/apache/spark/examples/mllib/JavaPowerIterationClusteringExample.java %}

PowerIterationClustering implements the PIC algorithm. It takes an RDD of (srcId: Long, dstId: Long, similarity: Double) tuples representing the affinity matrix. Calling PowerIterationClustering.run returns a PowerIterationClusteringModel, which contains the computed clustering assignments.

Refer to the PowerIterationClustering Python docs and PowerIterationClusteringModel Python docs for more details on the API.

{% include_example python/mllib/power_iteration_clustering_example.py %}

## Latent Dirichlet allocation (LDA)

Latent Dirichlet allocation (LDA) is a topic model which infers topics from a collection of text documents. LDA can be thought of as a clustering algorithm as follows:

  • Topics correspond to cluster centers, and documents correspond to examples (rows) in a dataset.
  • Topics and documents both exist in a feature space, where feature vectors are vectors of word counts (bag of words).
  • Rather than estimating a clustering using a traditional distance, LDA uses a function based on a statistical model of how text documents are generated.

LDA supports different inference algorithms via the setOptimizer function. EMLDAOptimizer learns clustering using expectation-maximization on the likelihood function and yields comprehensive results, while OnlineLDAOptimizer uses iterative mini-batch sampling for online variational inference and is generally memory friendly.

LDA takes in a collection of documents as vectors of word counts and the following parameters (set using the builder pattern):

  • k: Number of topics (i.e., cluster centers)
  • optimizer: Optimizer to use for learning the LDA model, either EMLDAOptimizer or OnlineLDAOptimizer
  • docConcentration: Dirichlet parameter for prior over documents' distributions over topics. Larger values encourage smoother inferred distributions.
  • topicConcentration: Dirichlet parameter for prior over topics' distributions over terms (words). Larger values encourage smoother inferred distributions.
  • maxIterations: Limit on the number of iterations.
  • checkpointInterval: If using checkpointing (set in the Spark configuration), this parameter specifies the frequency with which checkpoints will be created. If maxIterations is large, using checkpointing can help reduce shuffle file sizes on disk and help with failure recovery.

All of spark.mllib's LDA models support:

  • describeTopics: Returns topics as arrays of most important terms and term weights
  • topicsMatrix: Returns a vocabSize by k matrix where each column is a topic
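
Putting these pieces together, here is a minimal sketch, assuming an existing `SparkContext` `sc`; the five-word toy corpus is made up for illustration:

```scala
import org.apache.spark.mllib.clustering.LDA
import org.apache.spark.mllib.linalg.Vectors

// Toy corpus: (docId, term-count vector) over a five-word vocabulary.
val corpus = sc.parallelize(Seq(
  (0L, Vectors.dense(1.0, 2.0, 0.0, 0.0, 1.0)),
  (1L, Vectors.dense(0.0, 1.0, 3.0, 1.0, 0.0)),
  (2L, Vectors.dense(2.0, 0.0, 0.0, 4.0, 1.0))
))

// Builder pattern: "em" selects EMLDAOptimizer; "online" would select
// OnlineLDAOptimizer.
val ldaModel = new LDA()
  .setK(3)              // k
  .setOptimizer("em")   // optimizer
  .setMaxIterations(50) // maxIterations
  .run(corpus)

// topicsMatrix is vocabSize x k; each column is a topic.
println(ldaModel.topicsMatrix)

// describeTopics returns the most important terms and weights per topic.
ldaModel.describeTopics(maxTermsPerTopic = 3).zipWithIndex.foreach {
  case ((termIndices, termWeights), topic) =>
    println(s"topic $topic: ${termIndices.zip(termWeights).mkString(", ")}")
}
```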

Note: LDA is still an experimental feature under active development. As a result, certain features are only available in one of the two optimizers / models generated by the optimizer. Currently, a distributed model can be converted into a local model, but not vice-versa.

The following discussion will describe each optimizer/model pair separately.