[SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide
    Joseph K. Bradley authored
    ## What changes were proposed in this pull request?
    
    Made DataFrame-based API primary
    * Spark doc menu bar and other places now link to ml-guide.html, not mllib-guide.html
    * mllib-guide.html keeps RDD-specific list of features, with a link at the top redirecting people to ml-guide.html
    * ml-guide.html includes a "maintenance mode" announcement about the RDD-based API
      * **Reviewers: please check this carefully**
    * (minor) Titles for the DataFrame-based API no longer include the "- spark.ml" suffix. Titles for the RDD-based API have the "- RDD-based API" suffix.
    * Moved migration guide to ml-guide from mllib-guide
      * Also moved past guides from mllib-migration-guides to ml-migration-guides, with a redirect link on mllib-migration-guides
      * **Reviewers**: I did not change any of the content of the migration guides.
    
    Reorganized DataFrame-based guide:
    * ml-guide.html mimics the old mllib-guide.html page in terms of content: overview, migration guide, etc.
    * Moved Pipeline description into ml-pipeline.html and moved tuning into ml-tuning.html
      * **Reviewers**: I did not change the content of these guides, except some intro text.
    * Sidebar remains the same, but with pipeline and tuning sections added
    
    Other:
    * ml-classification-regression.html: Moved text about linear methods to new section in page
    
    ## How was this patch tested?
    
    Generated docs locally
    
    Author: Joseph K. Bradley <joseph@databricks.com>
    
    Closes #14213 from jkbradley/ml-guide-2.0.
mllib-feature-extraction.md

---
layout: global
title: Feature Extraction and Transformation - RDD-based API
displayTitle: Feature Extraction and Transformation - RDD-based API
---

* Table of contents
{:toc}

## TF-IDF

**Note:** We recommend using the DataFrame-based API, which is detailed in the ML user guide on TF-IDF.

Term frequency-inverse document frequency (TF-IDF) is a feature vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus. Denote a term by $t$, a document by $d$, and the corpus by $D$. Term frequency $TF(t, d)$ is the number of times that term $t$ appears in document $d$, while document frequency $DF(t, D)$ is the number of documents that contain term $t$. If we only use term frequency to measure importance, it is very easy to over-emphasize terms that appear very often but carry little information about the document, e.g., "a", "the", and "of". A term that appears very often across the corpus carries little special information about a particular document. Inverse document frequency is a numerical measure of how much information a term provides:
\[
IDF(t, D) = \log \frac{|D| + 1}{DF(t, D) + 1},
\]
where $|D|$ is the total number of documents in the corpus. Since a logarithm is used, the IDF value of a term that appears in all documents becomes 0. Note that a smoothing term is applied to avoid dividing by zero for terms outside the corpus. The TF-IDF measure is simply the product of TF and IDF:
\[
TFIDF(t, d, D) = TF(t, d) \cdot IDF(t, D).
\]
There are several variants on the definition of term frequency and document frequency. In `spark.mllib`, we separate TF and IDF to make them flexible.
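For instance, in a corpus of $|D| = 3$ documents, a term that appears in exactly one document gets $IDF = \log \frac{3 + 1}{1 + 1} = \log 2$, while a term that appears in all three documents gets $IDF = \log \frac{4}{4} = 0$ and therefore contributes nothing to the TF-IDF score.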

Our implementation of term frequency utilizes the hashing trick. A raw feature is mapped into an index (term) by applying a hash function. Then term frequencies are calculated based on the mapped indices. This approach avoids the need to compute a global term-to-index map, which can be expensive for a large corpus, but it suffers from potential hash collisions, where different raw features may become the same term after hashing. To reduce the chance of collision, we can increase the target feature dimension, i.e., the number of buckets of the hash table. The default feature dimension is $2^{20} = 1,048,576$.
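As a rough illustration of the hashing trick, the short sketch below maps individual terms straight to column indices with `HashingTF`; the `1 << 18` dimension is an arbitrary value chosen here only to show how the number of buckets is configured, not a recommendation.

{% highlight scala %}
import org.apache.spark.mllib.feature.HashingTF

// A smaller-than-default feature dimension (2^18 buckets) makes hash collisions more likely;
// the default is 2^20.
val hashingTF = new HashingTF(numFeatures = 1 << 18)

// Each term is hashed directly to a column index -- no global term-to-index map is built.
val index = hashingTF.indexOf("spark")

// Transforming a single document yields a sparse term-frequency vector over those indices.
val tf = hashingTF.transform(Seq("spark", "mllib", "spark"))
{% endhighlight %}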

**Note:** `spark.mllib` doesn't provide tools for text segmentation. We refer users to the Stanford NLP Group and scalanlp/chalk.

TF and IDF are implemented in `HashingTF` and `IDF`. `HashingTF` takes an `RDD[Iterable[_]]` as the input. Each record could be an iterable of strings or other types.

Refer to the HashingTF Scala docs for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/TFIDFExample.scala %}
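
If you just want the shape of the computation without opening the bundled example, here is a minimal sketch of the same flow. It assumes an existing `SparkContext` named `sc` and a hypothetical whitespace-tokenized input file `docs.txt`.

{% highlight scala %}
import org.apache.spark.mllib.feature.{HashingTF, IDF}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Load each line as one document and split it into terms (docs.txt is a hypothetical file).
val documents: RDD[Seq[String]] = sc.textFile("docs.txt").map(_.split(" ").toSeq)

// Hash terms into a 2^20-dimensional term-frequency vector per document.
val hashingTF = new HashingTF()
val tf: RDD[Vector] = hashingTF.transform(documents)

// IDF makes two passes over the data: one to compute the IDF vector, one to scale the TFs.
tf.cache()
val idfModel = new IDF().fit(tf)
val tfidf: RDD[Vector] = idfModel.transform(tf)
{% endhighlight %}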

TF and IDF are implemented in `HashingTF` and `IDF`. `HashingTF` takes an RDD of lists as the input. Each record could be an iterable of strings or other types.

Refer to the HashingTF Python docs for details on the API.

{% include_example python/mllib/tf_idf_example.py %}

## Word2Vec

Word2Vec computes distributed vector representations of words. The main advantage of the distributed representations is that similar words are close in the vector space, which makes generalization to novel patterns easier and model estimation more robust. Distributed vector representations have been shown to be useful in many natural language processing applications such as named entity recognition, disambiguation, parsing, tagging, and machine translation.

### Model

In our implementation of Word2Vec, we use the skip-gram model. The training objective of skip-gram is to learn word vector representations that are good at predicting a word's context in the same sentence. Mathematically, given a sequence of training words $w_1, w_2, \dots, w_T$, the objective of the skip-gram model is to maximize the average log-likelihood
\[
\frac{1}{T} \sum_{t = 1}^{T}\sum_{j=-k}^{j=k} \log p(w_{t+j} | w_t),
\]
where $k$ is the size of the training window.

In the skip-gram model, every word $w$ is associated with two vectors $u_w$ and $v_w$, which are vector representations of $w$ as word and context respectively. The probability of correctly predicting word $w_i$ given word $w_j$ is determined by the softmax model, which is
\[
p(w_i | w_j) = \frac{\exp(u_{w_i}^{\top} v_{w_j})}{\sum_{l=1}^{V} \exp(u_l^{\top} v_{w_j})},
\]
where $V$ is the vocabulary size.

The skip-gram model with softmax is expensive because the cost of computing $\log p(w_i | w_j)$ is proportional to $V$, which can easily be in the order of millions. To speed up training of Word2Vec, we use hierarchical softmax, which reduces the complexity of computing $\log p(w_i | w_j)$ to $O(\log(V))$.

### Example

The example below demonstrates how to load a text file, parse it as an RDD of `Seq[String]`, construct a `Word2Vec` instance, and then fit a `Word2VecModel` with the input data. Finally, we display the top 40 synonyms of the specified word. To run the example, first download the text8 data and extract it to your preferred directory. Here we assume the extracted file is `text8` and is in the same directory where you run the Spark shell.

Refer to the Word2Vec Scala docs for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/Word2VecExample.scala %}
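
For orientation, the core of that example looks roughly like the sketch below. It assumes an existing `SparkContext` named `sc`, the extracted `text8` file in the working directory, and uses "china" purely as an arbitrary query word.

{% highlight scala %}
import org.apache.spark.mllib.feature.{Word2Vec, Word2VecModel}

// Each line becomes one "sentence": a sequence of whitespace-separated tokens.
val input = sc.textFile("text8").map(line => line.split(" ").toSeq)

// Train a skip-gram model on the corpus.
val word2vec = new Word2Vec()
val model: Word2VecModel = word2vec.fit(input)

// Find the 40 words whose vectors are closest to the query word, with cosine similarities.
val synonyms = model.findSynonyms("china", 40)
synonyms.foreach { case (word, cosineSimilarity) =>
  println(s"$word $cosineSimilarity")
}
{% endhighlight %}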

Refer to the Word2Vec Python docs for more details on the API.

{% include_example python/mllib/word2vec_example.py %}

## StandardScaler

Standardizes features by scaling to unit variance and/or removing the mean using column summary statistics on the samples in the training set. This is a very common pre-processing step.

For example, the RBF kernel of Support Vector Machines or the L1 and L2 regularized linear models typically work better when all features have unit variance and/or zero mean.
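
As a rough sketch of how this is used in `spark.mllib` (assuming an existing `SparkContext` named `sc` and the sample LIBSVM file shipped with Spark):

{% highlight scala %}
import org.apache.spark.mllib.feature.StandardScaler
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.util.MLUtils

// Load labeled points; each point's features will be standardized column-wise.
val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")

// Scale to unit standard deviation only (the default).
val scaler1 = new StandardScaler().fit(data.map(_.features))

// Also remove the mean; withMean densifies the input, so it needs dense vectors.
val scaler2 = new StandardScaler(withMean = true, withStd = true).fit(data.map(_.features))

// data1: unit variance; data2: unit variance and zero mean.
val data1 = data.map(p => (p.label, scaler1.transform(p.features)))
val data2 = data.map(p => (p.label, scaler2.transform(Vectors.dense(p.features.toArray))))
{% endhighlight %}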