  1. Sep 21, 2016
    • [SPARK-11918][ML] Better error from WLS for cases like singular input · b4a4421b
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Update error handling for Cholesky decomposition to provide a little more info when input is singular.
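
      A minimal sketch of the kind of check this adds (illustrative only, not the actual `CholeskyDecomposition` code): translate a bare LAPACK return code from the Cholesky solve into a message that points at singular input.
      ```
      // Illustrative: a positive LAPACK info code from a Cholesky routine means the
      // matrix is not positive definite, which for WLS usually indicates singular input
      // (e.g. constant or linearly dependent columns).
      def checkReturnValue(info: Int, method: String): Unit = {
        if (info < 0) {
          throw new IllegalStateException(s"LAPACK.$method returned $info; arg ${-info} had an illegal value")
        } else if (info > 0) {
          throw new IllegalArgumentException(
            s"LAPACK.$method returned $info because the matrix is not positive definite. " +
              "The input is likely singular; check for constant or linearly dependent columns.")
        }
      }
      ```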
      
      ## How was this patch tested?
      
      New test case; jenkins tests.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15177 from srowen/SPARK-11918.
    • [SPARK-17219][ML] Add NaN value handling in Bucketizer · 57dc326b
      VinceShieh authored
      ## What changes were proposed in this pull request?
      This PR fixes an issue when Bucketizer is called to handle a dataset containing NaN values.
      Sometimes NaN values are also meaningful to users, so in these cases Bucketizer should
      reserve one extra bucket for NaN values instead of throwing an exception.
      Before:
      ```
      Bucketizer.transform on NaN value threw an illegal exception.
      ```
      After:
      ```
      NaN values will be grouped in an extra bucket.
      ```
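
      A hedged usage sketch of the new behavior (column names and data are illustrative; on Spark versions without this change the NaN row would throw instead):
      ```
      import org.apache.spark.ml.feature.Bucketizer
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder.master("local[2]").appName("bucketizer-nan").getOrCreate()
      val df = spark.createDataFrame(Seq(-1.0, 5.0, 100.0, Double.NaN).map(Tuple1.apply)).toDF("value")
      val splits = Array(Double.NegativeInfinity, 0.0, 10.0, Double.PositiveInfinity)

      val bucketed = new Bucketizer()
        .setInputCol("value")
        .setOutputCol("bucket")
        .setSplits(splits)
        .transform(df)

      // Finite values map to buckets 0..2; the NaN row lands in the extra bucket 3.
      bucketed.show()
      ```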
      ## How was this patch tested?
      New test cases added in `BucketizerSuite`.
      Signed-off-by: VinceShieh <vincent.xie@intel.com>
      
      Author: VinceShieh <vincent.xie@intel.com>
      
      Closes #14858 from VinceShieh/spark-17219.
    • [SPARK-17017][MLLIB][ML] add a chiSquare Selector based on False Positive Rate (FPR) test · b366f184
      Peng, Meng authored
      ## What changes were proposed in this pull request?
      
      Univariate feature selection works by selecting the best features based on univariate statistical tests. Selection by False Positive Rate (FPR) is a popular univariate approach to feature selection. We add a chi-square selector based on the FPR test in this PR, as implemented in scikit-learn.
      http://scikit-learn.org/stable/modules/feature_selection.html#univariate-feature-selection
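
      The underlying idea, sketched with the existing mllib chi-square test (the selector's actual new parameter names are not shown here and may differ): keep every feature whose chi-square p-value against the label is below a chosen alpha.
      ```
      import org.apache.spark.mllib.regression.LabeledPoint
      import org.apache.spark.mllib.stat.Statistics
      import org.apache.spark.rdd.RDD

      // Illustrative FPR-style selection: return the indices of features whose
      // chi-square p-value against the label is below alpha.
      def selectByFpr(data: RDD[LabeledPoint], alpha: Double): Array[Int] = {
        Statistics.chiSqTest(data)               // one ChiSqTestResult per feature
          .zipWithIndex
          .filter { case (result, _) => result.pValue < alpha }
          .map(_._2)
      }
      ```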
      
      ## How was this patch tested?
      
      Added Scala unit tests.
      
      Author: Peng, Meng <peng.meng@intel.com>
      
      Closes #14597 from mpjlu/fprChiSquare.
    • [SPARK-17595][MLLIB] Use a bounded priority queue to find synonyms in Word2VecModel · 7654385f
      William Benton authored
      ## What changes were proposed in this pull request?
      
      The code in `Word2VecModel.findSynonyms` to choose the vocabulary elements with the highest similarity to the query vector currently sorts the collection of similarities for every vocabulary element. This involves making multiple copies of the collection of similarities while doing a (relatively) expensive sort. It would be more efficient to find the best matches by maintaining a bounded priority queue and populating it with a single pass over the vocabulary, and that is exactly what this patch does.
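
      A self-contained sketch of the technique (not the Spark-internal implementation): keep only the best `num` (word, similarity) pairs in a size-bounded min-heap during a single pass, instead of sorting everything.
      ```
      import scala.collection.mutable

      // Illustrative top-k selection in one pass over the scored vocabulary.
      def topK(scored: Iterator[(String, Double)], num: Int): Array[(String, Double)] = {
        // Reverse the ordering so the queue behaves as a min-heap on similarity:
        // the weakest retained candidate sits at the head and is evicted first.
        val queue = mutable.PriorityQueue.empty[(String, Double)](
          Ordering.by[(String, Double), Double](_._2).reverse)
        scored.foreach { pair =>
          if (queue.size < num) {
            queue.enqueue(pair)
          } else if (pair._2 > queue.head._2) {
            queue.dequeue()
            queue.enqueue(pair)
          }
        }
        queue.toArray.sortBy(-_._2)   // only the k retained candidates get sorted
      }
      ```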
      
      ## How was this patch tested?
      
      This patch adds no user-visible functionality and its correctness should be exercised by existing tests.  To ensure that this approach is actually faster, I made a microbenchmark for `findSynonyms`:
      
      ```
      object W2VTiming {
        import org.apache.spark.{SparkContext, SparkConf}
        import org.apache.spark.mllib.feature.Word2VecModel
        def run(modelPath: String, scOpt: Option[SparkContext] = None) {
          val sc = scOpt.getOrElse(new SparkContext(new SparkConf(true).setMaster("local[*]").setAppName("test")))
          val model = Word2VecModel.load(sc, modelPath)
          val keys = model.getVectors.keys
          val start = System.currentTimeMillis
          for(key <- keys) {
            model.findSynonyms(key, 5)
            model.findSynonyms(key, 10)
            model.findSynonyms(key, 25)
            model.findSynonyms(key, 50)
          }
          val finish = System.currentTimeMillis
          println("run completed in " + (finish - start) + "ms")
        }
      }
      ```
      
      I ran this test on a model generated from the complete works of Jane Austen and found that the new approach was over 3x faster than the old approach.  (If the `num` argument to `findSynonyms` is very close to the vocabulary size, the new approach will have less of an advantage over the old one.)
      
      Author: William Benton <willb@redhat.com>
      
      Closes #15150 from willb/SPARK-17595.
  2. Sep 19, 2016
    • [SPARK-17163][ML] Unified LogisticRegression interface · 26145a5a
      sethah authored
      ## What changes were proposed in this pull request?
      
      Merge `MultinomialLogisticRegression` into `LogisticRegression` and remove `MultinomialLogisticRegression`.
      
      Marked as WIP because we should discuss the coefficients API in the model. See discussion below.
      
      JIRA: [SPARK-17163](https://issues.apache.org/jira/browse/SPARK-17163)
      
      ## How was this patch tested?
      
      Merged test suites and added some new unit tests.
      
      ## Design
      
      ### Switching between binomial and multinomial
      
      We default to automatically detecting whether we should run binomial or multinomial lor. We expose a new parameter called `family` which defaults to auto. When "auto" is used, we run normal binomial lor with pivoting if there are 1 or 2 label classes. Otherwise, we run multinomial. If the user explicitly sets the family, then we abide by that setting. In the case where "binomial" is set but multiclass lor is detected, we throw an error.
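
      Usage of the new `family` param described above (the surrounding pipeline is assumed):
      ```
      import org.apache.spark.ml.classification.LogisticRegression

      // "auto" (the default) picks binomial or multinomial from the number of label classes;
      // setting the param explicitly forces one formulation.
      val blor = new LogisticRegression().setFamily("binomial")
      val mlor = new LogisticRegression().setFamily("multinomial")
      // val model = mlor.fit(training)   // `training` is an assumed DataFrame with label/features columns
      ```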
      
      ### coefficients/intercept model API (TODO)
      
      This is the biggest design point remaining, IMO. We need to decide how to store the coefficients and intercepts in the model, and in turn how to expose them via the API. Two important points:
      
      * We must maintain compatibility with the old API, i.e. we must expose `def coefficients: Vector` and `def intercept: Double`
      * There are two separate cases: binomial lr where we have a single set of coefficients and a single intercept and multinomial lr where we have `numClasses` sets of coefficients and `numClasses` intercepts.
      
      Some options:
      
      1. **Store the binomial coefficients as a `2 x numFeatures` matrix.** This means that we would center the model coefficients before storing them in the model. The BLOR algorithm gives `1 * numFeatures` coefficients, but we would convert them to `2 x numFeatures` coefficients before storing them, effectively doubling the storage in the model. This has the advantage that we can make the code cleaner (i.e. less `if (isMultinomial) ... else ...`) and we don't have to reason about the different cases as much. It has the disadvantage that we double the storage space and we could see small regressions at prediction time since there are 2x the number of operations in the prediction algorithms. Additionally, we still have to produce the uncentered coefficients/intercept via the API, so we will have to either ALSO store the uncentered version, or compute it in `def coefficients: Vector` every time.
      
      2. **Store the binomial coefficients as a `1 x numFeatures` matrix.** We still store the coefficients as a matrix and the intercepts as a vector. When users call `coefficients` we return them a `Vector` that is backed by the same underlying array as the `coefficientMatrix`, so we don't duplicate any data. At prediction time, we use the old prediction methods that are specialized for binary LOR. The benefits here are that we don't store extra data, and we won't see any regressions in performance. The cost of this is that we have separate implementations for predict methods in the binary vs multiclass case. The duplicated code is really not very high, but it's still a bit messy.
      
      If we do decide to store the 2x coefficients, we would likely want to see some performance tests to understand the potential regressions.
      
      **Update:** We have chosen option 2
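
      A sketch of what option 2 amounts to (names are illustrative, not the actual model fields): the single row of the `1 x numFeatures` matrix is re-exposed as a `Vector` backed by the same array, so no data is duplicated.
      ```
      import org.apache.spark.ml.linalg.{DenseMatrix, DenseVector, Vector}

      // Illustrative: binomial coefficients stored as a 1 x numFeatures matrix,
      // with `coefficients` returning a Vector view over the same backing array.
      class BinomialCoefficients(val coefficientMatrix: DenseMatrix) {
        require(coefficientMatrix.numRows == 1, "the binomial case stores a single row of coefficients")
        def coefficients: Vector = new DenseVector(coefficientMatrix.values)
      }

      val m = new DenseMatrix(1, 3, Array(0.5, -1.2, 3.0))
      new BinomialCoefficients(m).coefficients   // DenseVector(0.5, -1.2, 3.0), no copy
      ```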
      
      ### Threshold/thresholds (TODO)
      
      Currently, when `threshold` is set we clear whatever value is in `thresholds` and when `thresholds` is set we clear whatever value is in `threshold`. [SPARK-11543](https://issues.apache.org/jira/browse/SPARK-11543) was created to prefer thresholds over threshold. We should decide if we should implement this behavior now or if we want to do it in a separate JIRA.
      
      **Update:** Let's leave it for a follow up PR
      
      ## Follow up
      
      * Summary model for multiclass logistic regression [SPARK-17139](https://issues.apache.org/jira/browse/SPARK-17139)
      * Thresholds vs threshold [SPARK-11543](https://issues.apache.org/jira/browse/SPARK-11543)
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #14834 from sethah/SPARK-17163.
  3. Sep 17, 2016
    • [SPARK-17548][MLLIB] Word2VecModel.findSynonyms no longer spuriously rejects... · 25cbbe6c
      William Benton authored
      [SPARK-17548][MLLIB] Word2VecModel.findSynonyms no longer spuriously rejects the best match when invoked with a vector
      
      ## What changes were proposed in this pull request?
      
      This pull request changes the behavior of `Word2VecModel.findSynonyms` so that it will not spuriously reject the best match when invoked with a vector that does not correspond to a word in the model's vocabulary.  Instead of blindly discarding the best match, the changed implementation discards a match that corresponds to the query word (in cases where `findSynonyms` is invoked with a word) or that has an identical angle to the query vector.
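
      A hedged usage sketch of the fixed behavior (helper name is illustrative, and the three query words are assumed to be in the model's vocabulary): querying with an analogy vector, which is not itself a vocabulary word, no longer drops its nearest vocabulary match.
      ```
      import org.apache.spark.mllib.feature.Word2VecModel
      import org.apache.spark.mllib.linalg.Vectors

      // Illustrative: build the analogy vector a - b + c (e.g. king - man + woman)
      // and ask for its nearest vocabulary words.
      def analogy(model: Word2VecModel, a: String, b: String, c: String, num: Int): Array[(String, Double)] = {
        val (va, vb, vc) = (model.transform(a).toArray, model.transform(b).toArray, model.transform(c).toArray)
        val query = Vectors.dense(va.indices.map(i => va(i) - vb(i) + vc(i)).toArray)
        // With this change the closest vocabulary word is kept rather than blindly discarded.
        model.findSynonyms(query, num)
      }
      ```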
      
      ## How was this patch tested?
      
      I added a test to `Word2VecSuite` to ensure that the word with the most similar vector from a supplied vector would not be spuriously rejected.
      
      Author: William Benton <willb@redhat.com>
      
      Closes #15105 from willb/fix/findSynonyms.
  4. Sep 15, 2016
    • [SPARK-17507][ML][MLLIB] check weight vector size in ANN · d15b4f90
      WeichenXu authored
      ## What changes were proposed in this pull request?
      
      As the TODO describes,
      check the weight vector size and throw an exception if it is wrong.
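
      An illustrative version of the guard (not the actual ANN code):
      ```
      import org.apache.spark.ml.linalg.Vector

      // Fail fast with a clear message when the supplied weight vector does not
      // match the size required by the layer topology.
      def validateWeights(weights: Vector, expectedSize: Int): Unit = {
        require(weights.size == expectedSize,
          s"The size of weights must be $expectedSize but got ${weights.size}")
      }
      ```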
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #15060 from WeichenXu123/check_input_weight_size_of_ann.
  5. Sep 11, 2016
  6. Sep 10, 2016
    • [SPARK-15509][FOLLOW-UP][ML][SPARKR] R MLlib algorithms should support input... · bcdd259c
      Yanbo Liang authored
      [SPARK-15509][FOLLOW-UP][ML][SPARKR] R MLlib algorithms should support input columns "features" and "label"
      
      ## What changes were proposed in this pull request?
      #13584 resolved the conflict between the features and label columns and the ```RFormula``` defaults when loading libsvm data, but it left some issues that should still be resolved:
      1, It's not necessary to check and rename the label column.
      By design, ```RFormula``` can handle the case where the label column already exists (with the restriction that the existing label column must be numeric/boolean), so it's not necessary to change the column name to avoid a conflict. If the label column is not numeric/boolean, ```RFormula``` will throw an exception.
      
      2, We should rename the features column to a new name if there is a conflict; appending a random value is enough since the column is only used internally. We did similar work when implementing ```SQLTransformer```.
      
      3, We should set the correct new features column for the estimators. Take ```GLM``` as an example:
      the ```GLM``` estimator should set its features column to the changed one (rFormula.getFeaturesCol) rather than the default “features”. The result is the same when training the model, but it causes problems when predicting. The following is the prediction result of GLM before this PR:
      ![image](https://cloud.githubusercontent.com/assets/1962026/18308227/84c3c452-74a8-11e6-9caa-9d6d846cc957.png)
      We should drop the internally used features column name, otherwise it will appear in the prediction DataFrame and confuse users. This also matches the behavior in scenarios where there is no column name conflict.
      After this PR:
      ![image](https://cloud.githubusercontent.com/assets/1962026/18308240/92082a04-74a8-11e6-9226-801f52b856d9.png)
      
      ## How was this patch tested?
      Existing unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #14993 from yanboliang/spark-15509.
  7. Sep 07, 2016
    • [SPARK-17359][SQL][MLLIB] Use ArrayBuffer.+=(A) instead of... · 3ce3a282
      Liwei Lin authored
      [SPARK-17359][SQL][MLLIB] Use ArrayBuffer.+=(A) instead of ArrayBuffer.append(A) in performance critical paths
      
      ## What changes were proposed in this pull request?
      
      We should generally use `ArrayBuffer.+=(A)` rather than `ArrayBuffer.append(A)`, because `append(A)` would involve extra boxing / unboxing.
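
      A minimal illustration of the difference:
      ```
      import scala.collection.mutable.ArrayBuffer

      val buf = new ArrayBuffer[Int]()
      buf += 1         // preferred on hot paths: single-element method, no varargs wrapping
      buf.append(2)    // append(elems: A*) wraps its arguments in a Seq, adding overhead
      ```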
      
      ## How was this patch tested?
      
      N/A
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #14914 from lw-lin/append_to_plus_eq_v2.
  8. Sep 06, 2016
    • [MINOR] Remove unnecessary check in MLSerDe · 8bbb08a3
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      1, remove an unnecessary `require()`, because it makes the following check useless.
      2, update the error message.
      
      ## How was this patch tested?
      no test
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #14972 from zhengruifeng/del_unnecessary_check.
    • [MINOR][ML] Correct weights doc of MultilayerPerceptronClassificationModel. · 39d538dd
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      ```weights``` of ```MultilayerPerceptronClassificationModel``` should be the output weights of the layers rather than the initial weights; this PR corrects the doc.
      
      ## How was this patch tested?
      Doc change.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #14967 from yanboliang/mlp-weights.
  9. Sep 05, 2016
    • [SPARK-17279][SQL] better error message for exceptions during ScalaUDF execution · 8d08f43d
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      If `ScalaUDF` throws exceptions during executing user code, sometimes it's hard for users to figure out what's wrong, especially when they use Spark shell. An example
      ```
      org.apache.spark.SparkException: Job aborted due to stage failure: Task 12 in stage 325.0 failed 4 times, most recent failure: Lost task 12.3 in stage 325.0 (TID 35622, 10.0.207.202): java.lang.NullPointerException
      	at line8414e872fb8b42aba390efc153d1611a12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:40)
      	at line8414e872fb8b42aba390efc153d1611a12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:40)
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
      ...
      ```
      We should catch these exceptions and rethrow them with a better error message that says the exception happened in the Scala UDF.
      
      This PR also does some clean up for `ScalaUDF` and add a unit test suite for it.
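
      A sketch of the general pattern (the real change lives inside the `ScalaUDF` expression and differs in detail):
      ```
      import org.apache.spark.SparkException

      // Illustrative: wrap the user function so failures carry UDF context instead of
      // surfacing as a bare NullPointerException from generated code.
      def wrapUdf[A, B](name: String, f: A => B): A => B = { input =>
        try {
          f(input)
        } catch {
          case e: Exception =>
            throw new SparkException(s"Failed to execute user defined function ($name)", e)
        }
      }
      ```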
      
      ## How was this patch tested?
      
      the new test suite
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #14850 from cloud-fan/npe.
  10. Sep 04, 2016
    • [MINOR][ML][MLLIB] Remove work around for breeze sparse matrix. · 1b001b52
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Since we have updated the breeze version to 0.12, we should remove the workaround for the breeze sparse matrix bug in v0.11.
      I checked all the mllib code and found this is the only workaround for breeze 0.11.
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #14953 from yanboliang/matrices.
    • [SPARK-17311][MLLIB] Standardize Python-Java MLlib API to accept optional long seeds in all cases · cdeb97a8
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Related to https://github.com/apache/spark/pull/14524 -- just the 'fix' rather than a behavior change.
      
      - PythonMLlibAPI methods that take a seed now always take a `java.lang.Long` consistently, allowing the Python API to specify "no seed"
      - .mllib's Word2VecModel was the odd one out in .mllib in that it picked its own random seed. It now defaults to None, meaning the Scala implementation picks the seed
      - BisectingKMeansModel arguably should not hard-code a seed for consistency with .mllib, I think. However I left it.
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14826 from srowen/SPARK-16832.2.
    • [SPARK-17308] Improved the spark core code by replacing all pattern match on... · e75c162e
      Shivansh authored
      [SPARK-17308] Improved the spark core code by replacing all pattern match on boolean value by if/else block.
      
      ## What changes were proposed in this pull request?
      Improved the code quality of Spark by replacing pattern matches on boolean values with if/else blocks.
      
      ## How was this patch tested?
      
      By running the tests
      
      Author: Shivansh <shiv4nsh@gmail.com>
      
      Closes #14873 from shiv4nsh/SPARK-17308.
  11. Sep 03, 2016
    • [SPARK-17315][SPARKR] Kolmogorov-Smirnov test SparkR wrapper · abb2f921
      Junyang Qian authored
      ## What changes were proposed in this pull request?
      
      This PR adds a Kolmogorov-Smirnov test wrapper to SparkR. This wrapper implementation only supports the one-sample test against the normal distribution.
      
      ## How was this patch tested?
      
      R unit test.
      
      Author: Junyang Qian <junyangq@databricks.com>
      
      Closes #14881 from junyangq/SPARK-17315.
    • [SPARK-17363][ML][MLLIB] fix MultivariantOnlineSummerizer.numNonZeros · 7a8a81d7
      WeichenXu authored
      ## What changes were proposed in this pull request?
      
      Fix the `MultivariateOnlineSummarizer.numNonZeros` method so that it
      returns the `nnz` array instead of the `weightSum` array.
      
      ## How was this patch tested?
      
      Existing test.
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #14923 from WeichenXu123/fix_MultivariantOnlineSummerizer_numNonZeros.
  12. Sep 02, 2016
    • [SPARK-15509][ML][SPARKR] R MLlib algorithms should support input columns "features" and "label" · 6969dcc7
      Xin Ren authored
      https://issues.apache.org/jira/browse/SPARK-15509
      
      ## What changes were proposed in this pull request?
      
      Currently in SparkR, when you load a LibSVM dataset using the sqlContext and then pass it to an MLlib algorithm, the ML wrappers will fail since they will try to create a "features" column, which conflicts with the existing "features" column from the LibSVM loader. E.g., using the "mnist" dataset from LibSVM:
      `training <- loadDF(sqlContext, ".../mnist", "libsvm")`
      `model <- naiveBayes(label ~ features, training)`
      This fails with:
      ```
      16/05/24 11:52:41 ERROR RBackendHandler: fit on org.apache.spark.ml.r.NaiveBayesWrapper failed
      Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
        java.lang.IllegalArgumentException: Output column features already exists.
      	at org.apache.spark.ml.feature.VectorAssembler.transformSchema(VectorAssembler.scala:120)
      	at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:179)
      	at org.apache.spark.ml.Pipeline$$anonfun$transformSchema$4.apply(Pipeline.scala:179)
      	at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
      	at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
      	at scala.collection.mutable.ArrayOps$ofRef.foldLeft(ArrayOps.scala:186)
      	at org.apache.spark.ml.Pipeline.transformSchema(Pipeline.scala:179)
      	at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:67)
      	at org.apache.spark.ml.Pipeline.fit(Pipeline.scala:131)
      	at org.apache.spark.ml.feature.RFormula.fit(RFormula.scala:169)
      	at org.apache.spark.ml.r.NaiveBayesWrapper$.fit(NaiveBayesWrapper.scala:62)
      	at org.apache.spark.ml.r.NaiveBayesWrapper.fit(NaiveBayesWrapper.sca
      The same issue appears for the "label" column once you rename the "features" column.
      ```
      The cause is that, when using `loadDF()` to generate DataFrames, the columns sometimes get the default names `“label”` and `“features”`, and these two names conflict with the default column names `setDefault(labelCol, "label")` and `setDefault(featuresCol, "features")` in `SharedParams.scala`.
      
      ## How was this patch tested?
      
      Tested on my local machine.
      
      Author: Xin Ren <iamshrek@126.com>
      
      Closes #13584 from keypointt/SPARK-15509.
  13. Sep 01, 2016
    • [SPARK-17331][CORE][MLLIB] Avoid allocating 0-length arrays · 3893e8c5
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Avoid allocating some 0-length arrays, esp. in UTF8String, and by using Array.empty in Scala over Array[T]()
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14895 from srowen/SPARK-17331.
  14. Aug 31, 2016
  15. Aug 30, 2016
    • [MINOR][MLLIB][SQL] Clean up unused variables and unused import · 27209252
      Xin Ren authored
      ## What changes were proposed in this pull request?
      
      Clean up unused variables and unused import statements, unnecessary `return` and `toArray` calls, and a few more style issues found while walking through the code examples.
      
      ## How was this patch tested?
      
      Tested manually on a local laptop.
      
      Author: Xin Ren <iamshrek@126.com>
      
      Closes #14836 from keypointt/codeWalkThroughML.
  16. Aug 27, 2016
    • [SPARK-17001][ML] Enable standardScaler to standardize sparse vectors when withMean=True · e07baf14
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Allow centering / mean scaling of sparse vectors in StandardScaler, if requested. This is for compatibility with `VectorAssembler` in common usages.
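
      A hedged usage sketch (the input DataFrame is assumed): with this change `withMean = true` is accepted for sparse vectors too, with the caveat that centering necessarily produces dense output.
      ```
      import org.apache.spark.ml.feature.StandardScaler

      val scaler = new StandardScaler()
        .setInputCol("features")
        .setOutputCol("scaledFeatures")
        .setWithMean(true)   // previously rejected for sparse input
        .setWithStd(true)
      // val model = scaler.fit(df)   // `df` is an assumed DataFrame with a (possibly sparse) "features" column
      ```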
      
      ## How was this patch tested?
      
      Jenkins tests, including new cases to reflect the new behavior.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14663 from srowen/SPARK-17001.
    • [ML][MLLIB] The require condition and message doesn't match in SparseMatrix. · 40168dbe
      Peng, Meng authored
      ## What changes were proposed in this pull request?
      The require condition and its message don't match, and the condition should also be optimized.
      Small change. Please let me know if a JIRA is required.
      
      ## How was this patch tested?
      No additional test required.
      
      Author: Peng, Meng <peng.meng@intel.com>
      
      Closes #14824 from mpjlu/smallChangeForMatrixRequire.
  17. Aug 26, 2016
    • [SPARK-17207][MLLIB] fix comparing Vector bug in TestingUtils · c0949dc9
      Peng, Meng authored
      ## What changes were proposed in this pull request?
      
      Fix a bug in comparing Vectors in TestingUtils.
      The same bug exists for Matrix comparison. How to check the dimensions of a Matrix should be discussed first.
      
      ## How was this patch tested?
      
      
      Author: Peng, Meng <peng.meng@intel.com>
      
      Closes #14785 from mpjlu/testUtils.
  18. Aug 24, 2016
    • [SPARK-16445][MLLIB][SPARKR] Multilayer Perceptron Classifier wrapper in SparkR · 2fbdb606
      Xin Ren authored
      https://issues.apache.org/jira/browse/SPARK-16445
      
      ## What changes were proposed in this pull request?
      
      Create Multilayer Perceptron Classifier wrapper in SparkR
      
      ## How was this patch tested?
      
      Tested manually on a local machine.
      
      Author: Xin Ren <iamshrek@126.com>
      
      Closes #14447 from keypointt/SPARK-16445.
    • [SPARK-17086][ML] Fix InvalidArgumentException issue in QuantileDiscretizer... · 92c0eaf3
      VinceShieh authored
      [SPARK-17086][ML] Fix InvalidArgumentException issue in QuantileDiscretizer when some quantiles are duplicated
      
      ## What changes were proposed in this pull request?
      
      In cases where QuantileDiscretizer is applied to data whose approximate quantiles contain duplicated values, we take the unique elements generated from approxQuantile as input for Bucketizer.
      
      ## How was this patch tested?
      
      A unit test is added in QuantileDiscretizerSuite.
      
      QuantileDiscretizer.fit throws an exception when setSplits is called with a list of splits
      containing duplicated elements. Bucketizer.setSplits should only accept a numeric vector of two
      or more unique cut points, even though that may produce fewer buckets than requested.
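
      The essence of the fix, sketched (helper name and column are illustrative): deduplicate the candidate split points before handing them to Bucketizer.
      ```
      import org.apache.spark.sql.DataFrame

      // Illustrative: approxQuantile can return repeated values on skewed data, and
      // Bucketizer.setSplits rejects non-increasing splits, so keep only distinct points,
      // possibly yielding fewer buckets than requested.
      def distinctSplits(df: DataFrame, col: String, numBuckets: Int): Array[Double] = {
        val probes = (0 to numBuckets).map(_.toDouble / numBuckets).toArray
        df.stat.approxQuantile(col, probes, 0.001).distinct.sorted
      }
      ```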
      
      Signed-off-by: VinceShieh <vincent.xie@intel.com>
      
      Author: VinceShieh <vincent.xie@intel.com>
      
      Closes #14747 from VinceShieh/SPARK-17086.
  19. Aug 23, 2016
    • [TRIVIAL] Typo Fix · 6555ef0c
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      Fix a typo
      
      ## How was this patch tested?
      no tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #14772 from zhengruifeng/minor_numClasses.
    • [SPARK-17095] [Documentation] [Latex and Scala doc do not play nicely] · 97d461b7
      Jagadeesan authored
      ## What changes were proposed in this pull request?
      
      In LaTeX, it is common to find "}}}" when closing several expressions at once. [SPARK-16822](https://issues.apache.org/jira/browse/SPARK-16822) added MathJax to render LaTeX equations in scaladoc. However, when scaladoc sees "}}}" or "{{{" it treats it as a code-block delimiter. This results in some very strange output.
      
      Author: Jagadeesan <as2@us.ibm.com>
      
      Closes #14688 from jagadeesanas2/SPARK-17095.
  20. Aug 22, 2016
    • [SPARK-17090][FOLLOW-UP][ML] Add expert param support to SharedParamsCodeGen · 37f0ab70
      hqzizania authored
      ## What changes were proposed in this pull request?
      
      Add expert param support to SharedParamsCodeGen, where aggregationDepth, an expert param, is added.
      
      Author: hqzizania <hqzizania@gmail.com>
      
      Closes #14738 from hqzizania/SPARK-17090-minor.
    • [SPARK-15113][PYSPARK][ML] Add missing num features num classes · b264cbb1
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      Add missing `numFeatures` and `numClasses` to the wrapped Java models in PySpark ML pipelines. Also tag `DecisionTreeClassificationModel` as Experimental to match the Scala doc.
      
      ## How was this patch tested?
      
      Extended doctests
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #12889 from holdenk/SPARK-15113-add-missing-numFeatures-numClasses.
    • [SPARK-16498][SQL] move hive hack for data source table into HiveExternalCatalog · b2074b66
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Spark SQL doesn't have its own metastore yet, and currently uses Hive's. However, Hive's metastore has some limitations (e.g. there can't be too many columns, it's not case-preserving, decimal type support is poor, etc.), so we have some hacks to successfully store data source table metadata in the Hive metastore, i.e. putting all the information in table properties.
      
      This PR moves these hacks to `HiveExternalCatalog`, tries to isolate hive specific logic in one place.
      
      changes overview:
      
      1.  **before this PR**: we need to put metadata(schema, partition columns, etc.) of data source tables to table properties before saving it to external catalog, even the external catalog doesn't use hive metastore(e.g. `InMemoryCatalog`)
      **after this PR**: the table properties tricks are only in `HiveExternalCatalog`, the caller side doesn't need to take care of it anymore.
      
      2. **before this PR**: because the table properties tricks are done outside of external catalog, so we also need to revert these tricks when we read the table metadata from external catalog and use it. e.g. in `DescribeTableCommand` we will read schema and partition columns from table properties.
      **after this PR**: The table metadata read from external catalog is exactly the same with what we saved to it.
      
      bonus: now we can create data source table using `SessionCatalog`, if schema is specified.
      breaks: `schemaStringLengthThreshold` is not configurable anymore. `hive.default.rcfile.serde` is not configurable anymore.
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #14155 from cloud-fan/catalog-table.
  21. Aug 20, 2016
    • [SPARK-17090][ML] Make tree aggregation level in linear/logistic regression configurable · 61ef74f2
      hqzizania authored
      ## What changes were proposed in this pull request?
      
      Linear/logistic regression use treeAggregate with the default depth (always 2) for collecting coefficient gradient updates to the driver. For high-dimensional problems, this can cause an OOM error on the driver. This patch makes the depth configurable to avoid this problem when users' input data has many features. It adds a HasTreeDepth API in `sharedParams.scala`, and extends it to both linear regression and logistic regression in .ml
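
      A hedged usage sketch, assuming the shared param ended up named `aggregationDepth` (as in the follow-up commit listed under Aug 22 above):
      ```
      import org.apache.spark.ml.classification.LogisticRegression
      import org.apache.spark.ml.regression.LinearRegression

      // A deeper treeAggregate spreads the coefficient-gradient reduction over more
      // stages, easing driver memory pressure for high-dimensional inputs.
      val lir = new LinearRegression().setAggregationDepth(4)
      val lor = new LogisticRegression().setAggregationDepth(4)
      ```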
      
      Author: hqzizania <hqzizania@gmail.com>
      
      Closes #14717 from hqzizania/SPARK-17090.
  22. Aug 19, 2016
    • [SPARK-16443][SPARKR] Alternating Least Squares (ALS) wrapper · acac7a50
      Junyang Qian authored
      ## What changes were proposed in this pull request?
      
      Add Alternating Least Squares wrapper in SparkR. Unit tests have been updated.
      
      ## How was this patch tested?
      
      SparkR unit tests.
      
      ![screen shot 2016-07-27 at 3 50 31 pm](https://cloud.githubusercontent.com/assets/15318264/17195347/f7a6352a-5411-11e6-8e21-61a48070192a.png)
      ![screen shot 2016-07-27 at 3 50 46 pm](https://cloud.githubusercontent.com/assets/15318264/17195348/f7a7d452-5411-11e6-845f-6d292283bc28.png)
      
      Author: Junyang Qian <junyangq@databricks.com>
      
      Closes #14384 from junyangq/SPARK-16443.
    • [SPARK-17141][ML] MinMaxScaler should remain NaN value. · 864be935
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      In the existing code, ```MinMaxScaler``` handles ```NaN``` values inconsistently.
      * If a column has identical values, that is ```max == min```, the ```MinMaxScalerModel``` transformation outputs ```0.5``` for all rows, even when the original value is ```NaN```.
      * Otherwise, the value remains ```NaN``` after transformation.
      
      I think we should unify the behavior by keeping ```NaN``` values in all conditions, since we don't know how to transform a ```NaN``` value. In Python's sklearn, an exception is thrown when there is ```NaN``` in the dataset.
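
      A hedged sketch of the unified behavior (data is illustrative): NaN entries stay NaN after scaling, regardless of whether a column's max equals its min.
      ```
      import org.apache.spark.ml.feature.MinMaxScaler
      import org.apache.spark.ml.linalg.Vectors
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder.master("local[2]").appName("minmax-nan").getOrCreate()
      val df = spark.createDataFrame(Seq(
        Tuple1(Vectors.dense(1.0, 0.0)),
        Tuple1(Vectors.dense(3.0, Double.NaN)),
        Tuple1(Vectors.dense(5.0, 10.0))
      )).toDF("features")

      val model = new MinMaxScaler().setInputCol("features").setOutputCol("scaled").fit(df)
      model.transform(df).show(false)   // the NaN component is still NaN in its row
      ```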
      
      ## How was this patch tested?
      Unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #14716 from yanboliang/spark-17141.
    • [SPARK-7159][ML] Add multiclass logistic regression to Spark ML · 287bea13
      sethah authored
      ## What changes were proposed in this pull request?
      
      This patch adds a new estimator/transformer `MultinomialLogisticRegression` to spark ML.
      
      JIRA: [SPARK-7159](https://issues.apache.org/jira/browse/SPARK-7159)
      
      ## How was this patch tested?
      
      Added new test suite `MultinomialLogisticRegressionSuite`.
      
      ## Approach
      
      ### Do not use a "pivot" class in the algorithm formulation
      
      Many implementations of multinomial logistic regression treat the problem as K - 1 independent binary logistic regression models where K is the number of possible outcomes in the output variable. In this case, one outcome is chosen as a "pivot" and the other K - 1 outcomes are regressed against the pivot. This is somewhat undesirable since the coefficients returned will be different for different choices of pivot variables. An alternative approach to the problem models class conditional probabilities using the softmax function and will return uniquely identifiable coefficients (assuming regularization is applied). This second approach is used in R's glmnet and was also recommended by dbtsai.
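
      For reference, the softmax form of the class-conditional probabilities (standard notation, not copied from the patch):
      ```
      % Probability of class k given features x, with coefficients \beta_k and intercept \beta_{0k};
      % no pivot class is singled out, and regularization makes the solution identifiable.
      P(y = k \mid \mathbf{x}) =
        \frac{\exp(\beta_{0k} + \boldsymbol{\beta}_k^{\top} \mathbf{x})}
             {\sum_{j=1}^{K} \exp(\beta_{0j} + \boldsymbol{\beta}_j^{\top} \mathbf{x})}
      ```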
      
      ### Separate multinomial logistic regression and binary logistic regression
      
      The initial design makes multinomial logistic regression a separate estimator/transformer than the existing LogisticRegression estimator/transformer. An alternative design would be to merge them into one.
      
      **Arguments for:**
      
      * The multinomial case without pivot is distinctly different than the current binary case since the binary case uses a pivot class.
      * The current logistic regression model in ML uses a vector of coefficients and a scalar intercept. In the multinomial case, we require a matrix of coefficients and a vector of intercepts. There are potential workarounds for this issue if we were to merge the two estimators, but none are particularly elegant.
      
      **Arguments against:**
      
      * It may be inconvenient for users to have to switch the estimator class when transitioning between binary and multiclass (although the new multinomial estimator can be used for two class outcomes).
      * Some portions of the code are repeated.
      
      This is a major design point and warrants more discussion.
      
      ### Mean centering
      
      When no regularization is applied, the coefficients will not be uniquely identifiable. This is not hard to show and is discussed in further detail [here](https://core.ac.uk/download/files/153/6287975.pdf). R's glmnet deals with this by choosing the minimum l2 regularized solution (i.e. mean centering). Additionally, the intercepts are never regularized so they are always mean centered. This is the approach taken in this PR as well.
      
      ### Feature scaling
      
      In current ML logistic regression, the features are always standardized when running the optimization algorithm. They are always returned to the user in the original feature space, however. This same approach is maintained in this patch as well, but the implementation details are different. In ML logistic regression, the unregularized feature values are divided by the column standard deviation in every gradient update iteration. In contrast, MLlib transforms the entire input dataset to the scaled space _before_ optimization. In ML, this means that `numFeatures * numClasses` extra scalar divisions are required in every iteration. Performance testing shows that this has significant (4x in some cases) slowdowns in each iteration. This can be avoided by transforming the input to the scaled space a la MLlib once, before iteration begins. This does add some overhead initially, but can make significant time savings in some cases.
      
      One issue with this approach is that if the input data is already cached, there may not be enough memory to cache the transformed data, which would make the algorithm _much_ slower. The tradeoffs here merit more discussion.
      
      ### Specifying and inferring the number of outcome classes
      
      The estimator checks the dataframe label column for metadata which specifies the number of values. If they are not specified, the length of the `histogram` variable is used, which is essentially the maximum value found in the column. The assumption then, is that the labels are zero-indexed when they are provided to the algorithm.
      
      ## Performance
      
      Below are some performance tests I have run so far. I am happy to add more cases or trials if we deem them necessary.
      
      Test cluster: 4 bare metal nodes, 128 GB RAM each, 48 cores each
      
      Notes:
      
      * Time in units of seconds
      * Metric is classification accuracy
      
      | algo   |   elasticNetParam | fitIntercept   |   metric |   maxIter |   numPoints |   numClasses |   numFeatures |    time | standardization   |   regParam |
      |--------|-------------------|----------------|----------|-----------|-------------|--------------|---------------|---------|-------------------|------------|
      | ml     |                 0 | true           | 0.746415 |        30 |      100000 |            3 |        100000 | 327.923 | true              |          0 |
      | mllib  |                 0 | true           | 0.743785 |        30 |      100000 |            3 |        100000 | 390.217 | true              |          0 |
      
      | algo   |   elasticNetParam | fitIntercept   |   metric |   maxIter |   numPoints |   numClasses |   numFeatures |    time | standardization   |   regParam |
      |--------|-------------------|----------------|----------|-----------|-------------|--------------|---------------|---------|-------------------|------------|
      | ml     |                 0 | true           | 0.973238 |        30 |     2000000 |            3 |         10000 | 385.476 | true              |          0 |
      | mllib  |                 0 | true           | 0.949828 |        30 |     2000000 |            3 |         10000 | 550.403 | true              |          0 |
      
      | algo   |   elasticNetParam | fitIntercept   |   metric |   maxIter |   numPoints |   numClasses |   numFeatures |    time | standardization   |   regParam |
      |--------|-------------------|----------------|----------|-----------|-------------|--------------|---------------|---------|-------------------|------------|
      | mllib  |                 0 | true           | 0.864358 |        30 |     2000000 |            3 |         10000 | 543.359 | true              |        0.1 |
      | ml     |                 0 | true           | 0.867418 |        30 |     2000000 |            3 |         10000 | 401.955 | true              |        0.1 |
      
      | algo   |   elasticNetParam | fitIntercept   |   metric |   maxIter |   numPoints |   numClasses |   numFeatures |    time | standardization   |   regParam |
      |--------|-------------------|----------------|----------|-----------|-------------|--------------|---------------|---------|-------------------|------------|
      | ml     |                 1 | true           | 0.807449 |        30 |     2000000 |            3 |         10000 | 334.892 | true              |       0.05 |
      
      | algo   |   elasticNetParam | fitIntercept   |   metric |   maxIter |   numPoints |   numClasses |   numFeatures |    time | standardization   |   regParam |
      |--------|-------------------|----------------|----------|-----------|-------------|--------------|---------------|---------|-------------------|------------|
      | ml     |                 0 | true           | 0.602006 |        30 |     2000000 |          500 |           100 | 112.319 | true              |          0 |
      | mllib  |                 0 | true           | 0.567226 |        30 |     2000000 |          500 |           100 | 263.768 | true              |          0 |
      
      ## References
      
      Friedman, et al. ["Regularization Paths for Generalized Linear Models via Coordinate Descent"](https://core.ac.uk/download/files/153/6287975.pdf)
      [http://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html](http://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html)
      
      ## Follow up items
      * Consider using level 2 BLAS routines in the gradient computations - [SPARK-17134](https://issues.apache.org/jira/browse/SPARK-17134)
      * Add model summary for MLOR - [SPARK-17139](https://issues.apache.org/jira/browse/SPARK-17139)
      * Add initial model to MLOR and add test for intercept priors - [SPARK-17140](https://issues.apache.org/jira/browse/SPARK-17140)
      * Python API - [SPARK-17138](https://issues.apache.org/jira/browse/SPARK-17138)
      * Consider changing the tree aggregation level for MLOR/BLOR or making it user configurable to avoid memory problems with high dimensional data - [SPARK-17090](https://issues.apache.org/jira/browse/SPARK-17090)
      * Refactor helper classes out of `LogisticRegression.scala` - [SPARK-17135](https://issues.apache.org/jira/browse/SPARK-17135)
      * Design optimizer interface for added flexibility in ML algos - [SPARK-17136](https://issues.apache.org/jira/browse/SPARK-17136)
      * Support compressing the coefficients and intercepts for MLOR models - [SPARK-17137](https://issues.apache.org/jira/browse/SPARK-17137)
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #13796 from sethah/SPARK-7159_M.
  23. Aug 18, 2016
    • [SPARK-16447][ML][SPARKR] LDA wrapper in SparkR · b72bb62d
      Xusen Yin authored
      ## What changes were proposed in this pull request?
      
      Add LDA Wrapper in SparkR with the following interfaces:
      
      - spark.lda(data, ...)
      
      - spark.posterior(object, newData, ...)
      
      - spark.perplexity(object, ...)
      
      - summary(object)
      
      - write.ml(object)
      
      - read.ml(path)
      
      ## How was this patch tested?
      
      Tested with SparkR unit tests.
      
      Author: Xusen Yin <yinxusen@gmail.com>
      
      Closes #14229 from yinxusen/SPARK-16447.
  24. Aug 17, 2016
    • [SPARK-16446][SPARKR][ML] Gaussian Mixture Model wrapper in SparkR · 4d92af31
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Gaussian Mixture Model wrapper in SparkR, similar to R's ```mvnormalmixEM```.
      
      ## How was this patch tested?
      Unit test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #14392 from yanboliang/spark-16446.
    • [SPARK-16444][SPARKR] Isotonic Regression wrapper in SparkR · 363793f2
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      
      Add Isotonic Regression wrapper in SparkR
      
      Wrappers in R and Scala are added, along with unit tests and documentation.
      
      ## How was this patch tested?
      Manually tested with sudo ./R/run-tests.sh
      
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #14182 from wangmiao1981/isoR.