  1. Jan 04, 2017
      [MINOR][DOCS] Remove consecutive duplicated words/typo in Spark Repo · a1e40b1f
      Niranjan Padmanabhan authored
      ## What changes were proposed in this pull request?
      There are many locations in the Spark repo where the same word occurs consecutively. Sometimes they are appropriately placed, but many times they are not. This PR removes the inappropriately duplicated words.
      
      ## How was this patch tested?
      N/A since only docs or comments were updated.
      
      Author: Niranjan Padmanabhan <niranjan.padmanabhan@gmail.com>
      
      Closes #16455 from neurons/np.structure_streaming_doc.
      [SPARK-19054][ML] Eliminate extra pass in NB · 7a825058
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
Eliminate an unnecessary extra pass in NB's train.
      
      ## How was this patch tested?
      existing tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #16453 from zhengruifeng/nb_getNC.
  2. Jan 03, 2017
      [MINOR] Add missing sc.stop() to end of examples · e5c307c5
      Weiqing Yang authored
      ## What changes were proposed in this pull request?
      
      Add `finally` clause for `sc.stop()` in the `test("register and deregister Spark listener from SparkContext")`.
      
      ## How was this patch tested?
      Pass the build and unit tests.
      
      Author: Weiqing Yang <yangweiqing001@gmail.com>
      
      Closes #16426 from weiqingy/testIssue.
  3. Dec 30, 2016
      [SPARK-18808][ML][MLLIB] ml.KMeansModel.transform is very inefficient · 56d3a7eb
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
mllib.KMeansModel.clusterCentersWithNorm is a method that ends up being called every time `predict` is called on a single vector, which is bad news for how the ml.KMeansModel Transformer works, since it necessarily transforms one vector at a time.
      
This change makes the model store the vectors with their norms upfront. The extra norm is small compared to the vectors themselves, and this avoids the recomputation overhead on this and other code paths.
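The arithmetic behind caching norms can be sketched in plain Scala (an illustration of the idea, not Spark's actual implementation): with squared norms precomputed once, each squared distance reduces to a dot product plus cheap corrections, via ||a - b||² = ||a||² + ||b||² - 2(a · b).

```scala
// Sketch: cache squared norms so predict only pays for a dot product.
case class VectorWithNorm(v: Array[Double], normSq: Double)

def withNorm(v: Array[Double]): VectorWithNorm =
  VectorWithNorm(v, v.map(x => x * x).sum)

def dot(a: Array[Double], b: Array[Double]): Double =
  a.zip(b).map { case (x, y) => x * y }.sum

// Squared distance using the precomputed norms.
def fastSquaredDistance(a: VectorWithNorm, b: VectorWithNorm): Double =
  math.max(a.normSq + b.normSq - 2.0 * dot(a.v, b.v), 0.0)

// Centers are converted once, up front, instead of on every predict call.
val centers = Seq(Array(0.0, 0.0), Array(3.0, 4.0)).map(withNorm)

def predict(point: Array[Double]): Int = {
  val p = withNorm(point)
  centers.indices.minBy(i => fastSquaredDistance(p, centers(i)))
}
```

The one-time norm computation is O(k·d) for k centers, while the per-prediction saving applies on every call.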
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #16328 from srowen/SPARK-18808.
  4. Dec 29, 2016
      [SPARK-18698][ML] Adding public constructor that takes uid for IndexToString · 87bc4112
      Ilya Matiach authored
      ## What changes were proposed in this pull request?
      
      Based on SPARK-18698, this adds a public constructor that takes a UID for IndexToString.  Other transforms have similar constructors.
      
      ## How was this patch tested?
      
      A unit test was added to verify the new functionality.
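The constructor pattern in question can be sketched without any Spark dependencies (the class and prefix names below are illustrative, not Spark's): a transformer is identified by an immutable `uid`; a no-arg auxiliary constructor generates a random one, while the public uid-taking constructor lets callers supply their own.

```scala
import java.util.UUID

// Illustrative sketch of the uid-constructor pattern (not Spark's IndexToString).
class IndexToStringLike(val uid: String) {
  // The no-arg constructor generates a random uid, mirroring how Spark
  // transformers default to a randomly generated identifier.
  def this() = this("idxToStr_" + UUID.randomUUID().toString.take(12))
}

val explicit = new IndexToStringLike("myTransformer")
val generated = new IndexToStringLike()
```

Accepting a caller-supplied uid matters for persistence and for composing pipelines whose stage identities must be stable across loads.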
      
      Author: Ilya Matiach <ilmat@microsoft.com>
      
      Closes #16436 from imatiach-msft/ilmat/fix-indextostring.
  5. Dec 28, 2016
      [SPARK-17772][ML][TEST] Add test functions for ML sample weights · 6a475ae4
      sethah authored
      ## What changes were proposed in this pull request?
      
More and more ML algorithms are accepting sample weights, but they have been tested heterogeneously and with code duplication. This patch adds extensible helper methods to `MLTestingUtils` that can be reused by the various algorithms accepting sample weights. Up to now, a few tests have been implemented commonly:
      
      * Check that oversampling is the same as giving the instances sample weights proportional to the number of samples
      * Check that outliers with tiny sample weights do not affect the algorithm's performance
      
      This patch adds an additional test:
      
* Check that algorithms are invariant to constant scaling of the sample weights, i.e., uniform sample weights with `w_i = 1.0` are effectively the same as uniform sample weights with `w_i = 10000` or `w_i = 0.0001`
      
      The instances of these tests occurred in LinearRegression, NaiveBayes, and LogisticRegression. Those tests have been removed/modified to use the new helper methods. These helper functions will be of use when [SPARK-9478](https://issues.apache.org/jira/browse/SPARK-9478) is implemented.
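The scaling-invariance property can be illustrated on a toy statistic (a sketch, not the actual `MLTestingUtils` helpers): a weighted mean must be unchanged when every weight is multiplied by the same constant.

```scala
// Weighted mean of (value, weight) pairs: scaling all weights by a constant
// cancels out in numerator and denominator, so the estimate is invariant.
def weightedMean(data: Seq[(Double, Double)]): Double = {
  val num = data.map { case (y, w) => y * w }.sum
  val den = data.map(_._2).sum
  num / den
}

val data = Seq((1.0, 1.0), (2.0, 1.0), (10.0, 3.0))
val base       = weightedMean(data)
val scaledUp   = weightedMean(data.map { case (y, w) => (y, w * 10000) })
val scaledDown = weightedMean(data.map { case (y, w) => (y, w * 0.0001) })
```

An estimator whose result drifts under such scaling is treating weights as absolute counts rather than relative importance, which is exactly what the shared test helpers are meant to catch.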
      
      ## How was this patch tested?
      
      This patch only involves modifying test suites.
      
      ## Other notes
      
      Both IsotonicRegression and GeneralizedLinearRegression also extend `HasWeightCol`. I did not modify these test suites because it will make this patch easier to review, and because they did not duplicate the same tests as the three suites that were modified. If we want to change them later, we can create a JIRA for it now, but it's open for debate.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #15721 from sethah/SPARK-17772.
      [MINOR][ML] Correct test cases of LoR raw2prediction & probability2prediction. · 9cff67f3
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Correct test cases of ```LogisticRegression``` raw2prediction & probability2prediction.
      
      ## How was this patch tested?
      Changed unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #16407 from yanboliang/raw-probability.
      [SPARK-17645][MLLIB][ML] add feature selector method based on: False Discovery... · 79ff8536
      Peng authored
      [SPARK-17645][MLLIB][ML] add feature selector method based on: False Discovery Rate (FDR) and Family wise error rate (FWE)
      
      ## What changes were proposed in this pull request?
      
      Univariate feature selection works by selecting the best features based on univariate statistical tests.
FDR and FWE are popular univariate statistical tests for feature selection.
In 2005, the Benjamini and Hochberg paper on FDR was identified as one of the 25 most-cited statistical papers. In this PR, the FDR method uses the Benjamini-Hochberg procedure: https://en.wikipedia.org/wiki/False_discovery_rate.
      In statistics, FWE is the probability of making one or more false discoveries, or type I errors, among all the hypotheses when performing multiple hypotheses tests.
      https://en.wikipedia.org/wiki/Family-wise_error_rate
      
We add FDR and FWE methods to ChiSqSelector in this PR, as implemented in scikit-learn:
      http://scikit-learn.org/stable/modules/feature_selection.html#univariate-feature-selection
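The two selection rules can be sketched in a few lines of plain Scala (an illustration of the statistics, not Spark's ChiSqSelector code): FWE applies a Bonferroni-style per-feature threshold `alpha / n`, while FDR uses the Benjamini-Hochberg step-up procedure.

```scala
// Benjamini-Hochberg (FDR): sort p-values ascending, find the largest rank k
// (1-based) with p(k) <= alpha * k / n, and keep every feature ranked <= k.
def selectByFdr(pValues: Seq[Double], alpha: Double): Seq[Int] = {
  val n = pValues.length
  val ranked = pValues.zipWithIndex.sortBy(_._1)
  val k = ranked.zipWithIndex.lastIndexWhere {
    case ((p, _), i) => p <= alpha * (i + 1) / n
  } + 1
  ranked.take(k).map(_._2).sorted
}

// Family-wise error rate (FWE): Bonferroni-style per-feature threshold.
def selectByFwe(pValues: Seq[Double], alpha: Double): Seq[Int] =
  pValues.zipWithIndex.collect {
    case (p, i) if p <= alpha / pValues.length => i
  }
```

On the same p-values, FDR typically admits more features than FWE, since the BH threshold grows with the rank while Bonferroni's stays fixed.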
      ## How was this patch tested?
      
Unit tests will be added soon.
      
      Author: Peng <peng.meng@intel.com>
      Author: Peng, Meng <peng.meng@intel.com>
      
      Closes #15212 from mpjlu/fdr_fwe.
  6. Dec 21, 2016
      [SPARK-17807][CORE] split test-tags into test-JAR · afd9bc1d
      Ryan Williams authored
Remove spark-tags' compile-scope dependency (and, indirectly, spark-core's compile-scope transitive dependency) on scalatest by splitting the test-oriented tags into spark-tags' test JAR.
      
      Alternative to #16303.
      
      Author: Ryan Williams <ryan.blake.williams@gmail.com>
      
      Closes #16311 from ryan-williams/tt.
  7. Dec 19, 2016
  8. Dec 13, 2016
      [SPARK-18471][MLLIB] In LBFGS, avoid sending huge vectors of 0 · 9e8a9d7c
      Anthony Truchet authored
      ## What changes were proposed in this pull request?
      
CostFun used to send a dense vector of zeroes as part of a closure in a
treeAggregate call. To avoid that, we replace treeAggregate with
mapPartitions + treeReduce, creating the zero vector inside the
mapPartitions block instead.
      
      ## How was this patch tested?
      
      Unit test for module mllib run locally for correctness.
      
As for performance, we ran a heavy optimization on our production data (50 iterations on 128 MB weight vectors) and saw a significant decrease both in runtime and in containers being killed for lack of off-heap memory.
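The shape of the rewrite can be sketched with plain collections standing in for RDD partitions (an illustration of the pattern, not the actual CostFun code): the zero gradient buffer is allocated inside each partition, so nothing large travels with the closure.

```scala
// Partitions simulated as a Seq of Seq[(features, weight)].
val dim = 4
val partitions: Seq[Seq[(Array[Double], Double)]] = Seq(
  Seq((Array(1.0, 0.0, 0.0, 0.0), 1.0)),
  Seq((Array(0.0, 2.0, 0.0, 0.0), 1.0), (Array(0.0, 0.0, 3.0, 0.0), 1.0))
)

// "mapPartitions" step: each partition builds its own local gradient buffer.
val perPartition: Seq[Array[Double]] = partitions.map { part =>
  val localGrad = new Array[Double](dim) // created here, never shipped in a closure
  part.foreach { case (features, weight) =>
    var i = 0
    while (i < dim) { localGrad(i) += weight * features(i); i += 1 }
  }
  localGrad
}

// "treeReduce" step: combine the small per-partition results.
val grad = perPartition.reduce { (a, b) =>
  a.zip(b).map { case (x, y) => x + y }
}
```

In the Spark version, the saving is that the serialized task closure no longer contains a 128 MB array of zeroes per invocation.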
      
      Author: Anthony Truchet <a.truchet@criteo.com>
      Author: sethah <seth.hendrickson16@gmail.com>
      Author: Anthony Truchet <AnthonyTruchet@users.noreply.github.com>
      
      Closes #16037 from AnthonyTruchet/ENG-17719-lbfgs-only.
      [SPARK-18715][ML] Fix AIC calculations in Binomial GLM · e57e3938
      actuaryzhang authored
The AIC calculation in Binomial GLM seems to be off when the response or weight is non-integer: the result differs from that in R. This issue arises when one models rates, i.e., the number of successes normalized by the number of trials, and uses the number of trials as weights. In this case, the effective likelihood is weight * label ~ Binomial(weight, mu), where weight is the number of trials, weight * label is the number of successes, and mu is the success rate.
      
      srowen sethah yanboliang HyukjinKwon zhengruifeng
      
      ## What changes were proposed in this pull request?
I suggest changing the current AIC calculation for the Binomial family from
      ```
-2.0 * predictions.map { case (y: Double, mu: Double, weight: Double) =>
  weight * dist.Binomial(1, mu).logProbabilityOf(math.round(y).toInt)
}.sum()
      ```
      to the following which generalizes to the case of real-valued response and weights.
      ```
-2.0 * predictions.map { case (y: Double, mu: Double, weight: Double) =>
  val wt = math.round(weight).toInt
  if (wt == 0) {
    0.0
  } else {
    dist.Binomial(wt, mu).logProbabilityOf(math.round(y * weight).toInt)
  }
}.sum()
      ```
      ## How was this patch tested?
      I will write the unit test once the community wants to include the proposed change. For now, the following modifies existing tests in weighted Binomial GLM to illustrate the issue. The second label is changed from 0 to 0.5.
      
      ```
      val datasetWithWeight = Seq(
          (1.0, 1.0, 0.0, 5.0),
          (0.5, 2.0, 1.0, 2.0),
          (1.0, 3.0, 2.0, 1.0),
          (0.0, 4.0, 3.0, 3.0)
        ).toDF("y", "w", "x1", "x2")
      
      val formula = (new RFormula()
        .setFormula("y ~ x1 + x2")
        .setFeaturesCol("features")
        .setLabelCol("label"))
      val output = formula.fit(datasetWithWeight).transform(datasetWithWeight).select("features", "label", "w")
      
      val glr = new GeneralizedLinearRegression()
          .setFamily("binomial")
          .setWeightCol("w")
          .setFitIntercept(false)
          .setRegParam(0)
      
      val model = glr.fit(output)
      model.summary.aic
      ```
      The AIC from Spark is 17.3227, and the AIC from R is 15.66454.
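The generalized likelihood can be checked in isolation with a small self-contained version (a sketch using a hand-rolled binomial log-pmf in place of breeze's `dist.Binomial`): weight is rounded to the number of trials n, weight * label to the number of successes k, and AIC = -2·logLik + 2·(number of parameters).

```scala
def logFactorial(n: Int): Double = (2 to n).map(i => math.log(i.toDouble)).sum

// log P[Binomial(n, mu) = k]
def binomialLogPmf(n: Int, k: Int, mu: Double): Double =
  logFactorial(n) - logFactorial(k) - logFactorial(n - k) +
    k * math.log(mu) + (n - k) * math.log1p(-mu)

// preds: (label, predicted mean, weight); handles real-valued labels and
// weights, mirroring the proposed change above.
def binomialAic(preds: Seq[(Double, Double, Double)], numParams: Int): Double = {
  val logLik = preds.map { case (y, mu, weight) =>
    val n = math.round(weight).toInt
    if (n == 0) 0.0
    else binomialLogPmf(n, math.round(y * weight).toInt, mu)
  }.sum
  -2.0 * logLik + 2 * numParams
}
```

With a label of 0.5 and a weight of 2, this contributes a proper Binomial(2, mu) term for one success in two trials, which the old Binomial(1, mu) formulation could not express.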
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #16149 from actuaryzhang/aic.
  9. Dec 11, 2016
  10. Dec 10, 2016
      [SPARK-3359][DOCS] Fix greater-than symbols in Javadoc to allow building with Java 8 · 11432483
      Michal Senkyr authored
      ## What changes were proposed in this pull request?
      
The API documentation build was failing when using Java 8 due to an invalid `>` character in Javadoc.

Replace `>` with escaped literals in Javadoc to allow the build to pass.
      
      ## How was this patch tested?
      
Documentation was built and inspected manually to ensure it still displays correctly in the browser:
      
      ```
      cd docs && jekyll serve
      ```
      
      Author: Michal Senkyr <mike.senkyr@gmail.com>
      
      Closes #16201 from michalsenkyr/javadoc8-gt-fix.
  11. Dec 07, 2016
      [SPARK-18326][SPARKR][ML] Review SparkR ML wrappers API for 2.1 · 97255497
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Reviewing SparkR ML wrappers API for 2.1 release, mainly two issues:
* Remove ```probabilityCol``` from the argument list of ```spark.logit``` and ```spark.randomForest```, since it is used when making predictions and should be an argument of ```predict```; we will work on this at [SPARK-18618](https://issues.apache.org/jira/browse/SPARK-18618) in the next release cycle.
      * Fix ```spark.als``` params to make it consistent with MLlib.
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #16169 from yanboliang/spark-18326.
      [SPARK-18701][ML] Fix Poisson GLM failure due to wrong initialization · b8280271
      actuaryzhang authored
Poisson GLM fails for many standard data sets (see example in test or JIRA). The issue is incorrect initialization leading to almost-zero probabilities and weights. Specifically, the mean is initialized as the response, which can be zero. Applying the log link then yields very negative numbers (protected against -Inf), which again leads to near-zero probabilities and weights in the weighted least squares. Fix and test are included in the commits.
      
      ## What changes were proposed in this pull request?
      Update initialization in Poisson GLM
      
      ## How was this patch tested?
      Add test in GeneralizedLinearRegressionSuite
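The failure mode is easy to reproduce in isolation (a sketch of the mechanism only; the clamp value below is illustrative and not necessarily the one used in the actual fix): with the log link, initializing the mean as a zero response sends the linear predictor to -Infinity, while any strictly positive starting mean keeps it finite.

```scala
// Log link of the Poisson GLM: eta = log(mu).
def logLink(mu: Double): Double = math.log(mu)

// Naive initialization: mu = y, so a zero count yields eta = -Infinity.
val naiveEta = Seq(0.0, 1.0, 3.0).map(logLink)

// Fixed initialization (illustrative clamp): keep mu strictly above zero.
def initMu(y: Double): Double = math.max(y, 0.1)
val fixedEta = Seq(0.0, 1.0, 3.0).map(y => logLink(initMu(y)))
```

Once eta is finite for every row, the iteratively reweighted least squares step starts from sensible weights instead of near-zero ones.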
      
      srowen sethah yanboliang HyukjinKwon mengxr
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #16131 from actuaryzhang/master.
      [SPARK-18686][SPARKR][ML] Several cleanup and improvements for spark.logit. · 90b59d1b
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Several cleanup and improvements for ```spark.logit```:
* ```summary``` should return the coefficients matrix, and should output labels for each class if the model is a multinomial logistic regression model.
* ```summary``` should not return ```areaUnderROC, roc, pr, ...```, since most of them are DataFrames, which are less important for R users. Moreover, these metrics ignore instance weights (setting all to 1.0), which will change in a later Spark version; to avoid introducing breaking changes, we do not expose them currently.
* SparkR test improvement: compare the training result with native R glmnet.
* Remove argument ```aggregationDepth``` from ```spark.logit```, since it's an expert Param (related to Spark architecture and job execution) that would rarely be used by R users.
      
      ## How was this patch tested?
      Unit tests.
      
      The ```summary``` output after this change:
      multinomial logistic regression:
      ```
      > df <- suppressWarnings(createDataFrame(iris))
      > model <- spark.logit(df, Species ~ ., regParam = 0.5)
      > summary(model)
      $coefficients
                   versicolor  virginica   setosa
      (Intercept)  1.514031    -2.609108   1.095077
      Sepal_Length 0.02511006  0.2649821   -0.2900921
      Sepal_Width  -0.5291215  -0.02016446 0.549286
      Petal_Length 0.03647411  0.1544119   -0.190886
      Petal_Width  0.000236092 0.4195804   -0.4198165
      ```
      binomial logistic regression:
      ```
      > df <- suppressWarnings(createDataFrame(iris))
      > training <- df[df$Species %in% c("versicolor", "virginica"), ]
      > model <- spark.logit(training, Species ~ ., regParam = 0.5)
      > summary(model)
      $coefficients
                   Estimate
      (Intercept)  -6.053815
      Sepal_Length 0.2449379
      Sepal_Width  0.1648321
      Petal_Length 0.4730718
      Petal_Width  1.031947
      ```
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #16117 from yanboliang/spark-18686.
  12. Dec 06, 2016
      [SPARK-18374][ML] Incorrect words in StopWords/english.txt · fac5b75b
      Yuhao authored
      ## What changes were proposed in this pull request?
      
Currently the English stop words list in MLlib contains only the word fragments left after removing all apostrophes, so "wouldn't" becomes "wouldn" and "t". Yet by default, Tokenizer and RegexTokenizer don't split on apostrophes or quotes.

This PR adds the original forms to the stop words list to match the behavior of Tokenizer and StopWordsRemover. It also removes "won" from the list.
      
      see more discussion in the jira: https://issues.apache.org/jira/browse/SPARK-18374
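The mismatch is easy to demonstrate with a whitespace tokenizer (a toy sketch, not the actual Tokenizer/StopWordsRemover implementations): a tokenizer that does not split on apostrophes emits "wouldn't" intact, so a stop-words list containing only the fragments "wouldn" and "t" never matches it.

```scala
// Default-style tokenization: lowercase, split on whitespace only.
def tokenize(text: String): Seq[String] =
  text.toLowerCase.split("\\s+").toSeq

val stopWordsOld = Set("wouldn", "t") // apostrophe-stripped fragments
val stopWordsNew = Set("wouldn't")    // original form, as added by this PR

val tokens = tokenize("I wouldn't go")
val removedOld = tokens.filterNot(stopWordsOld) // "wouldn't" slips through
val removedNew = tokens.filterNot(stopWordsNew) // "wouldn't" is removed
```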
      
      ## How was this patch tested?
Existing unit tests.
      
      Author: Yuhao <yuhao.yang@intel.com>
      Author: Yuhao Yang <hhbyyh@gmail.com>
      
      Closes #16103 from hhbyyh/addstopwords.
  13. Dec 05, 2016
  14. Dec 02, 2016
      [SPARK-18695] Bump master branch version to 2.2.0-SNAPSHOT · c7c72659
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch bumps master branch version to 2.2.0-SNAPSHOT.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #16126 from rxin/SPARK-18695.
      [SPARK-18291][SPARKR][ML] Revert "[SPARK-18291][SPARKR][ML] SparkR glm predict... · a985dd8e
      Yanbo Liang authored
      [SPARK-18291][SPARKR][ML] Revert "[SPARK-18291][SPARKR][ML] SparkR glm predict should output original label when family = binomial."
      
      ## What changes were proposed in this pull request?
It would be better to fix this issue by providing an option ```type``` for users to change the ```predict``` output schema, so they could output probabilities, log-space predictions, or original labels. In order not to introduce a breaking API change for 2.1, we revert this change first and will add it back after [SPARK-18618](https://issues.apache.org/jira/browse/SPARK-18618) is resolved.
      
      ## How was this patch tested?
      Existing unit tests.
      
      This reverts commit daa975f4.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #16118 from yanboliang/spark-18291-revert.
  15. Dec 01, 2016
      [SPARK-18658][SQL] Write text records directly to a FileOutputStream · c82f16c1
      Nathan Howell authored
      ## What changes were proposed in this pull request?
      
      This replaces uses of `TextOutputFormat` with an `OutputStream`, which will either write directly to the filesystem or indirectly via a compressor (if so configured). This avoids intermediate buffering.
      
      The inverse of this (reading directly from a stream) is necessary for streaming large JSON records (when `wholeFile` is enabled) so I wanted to keep the read and write paths symmetric.
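The write path can be sketched as follows (an illustration of the idea, not Spark's actual writer): records go straight to an `OutputStream` that is either the raw file stream or a compressor wrapped around it, with no record-oriented intermediate layer in between.

```scala
import java.io.{BufferedOutputStream, File, FileOutputStream, OutputStream}
import java.util.zip.GZIPOutputStream

// Write text records directly to a stream; the compressor (if configured)
// wraps the raw file stream, so records never pass through TextOutputFormat.
def writeRecords(file: File, records: Seq[String], compress: Boolean): Unit = {
  val raw: OutputStream = new FileOutputStream(file)
  val out: OutputStream =
    if (compress) new GZIPOutputStream(raw) else new BufferedOutputStream(raw)
  try {
    records.foreach { r =>
      out.write(r.getBytes("UTF-8"))
      out.write('\n')
    }
  } finally out.close()
}

val tmp = File.createTempFile("records", ".txt")
writeRecords(tmp, Seq("a", "b"), compress = false)
```

Reading from a stream symmetrically (the inverse mentioned above) would mean parsing records from an `InputStream` without materializing the whole file.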
      
      ## How was this patch tested?
      
      Existing unit tests.
      
      Author: Nathan Howell <nhowell@godaddy.com>
      
      Closes #16089 from NathanHowell/SPARK-18658.
  16. Nov 30, 2016
[SPARK-18476][SPARKR][ML] SparkR Logistic Regression should support outputting original label. · 2eb6764f
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      
Similar to SPARK-18401, as a classification algorithm, logistic regression should support outputting the original label instead of the index label.
      
      In this PR, original label output is supported and test cases are modified and added. Document is also modified.
      
      ## How was this patch tested?
      
      Unit tests.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #15910 from wangmiao1981/audit.
      [SPARK-18318][ML] ML, Graph 2.1 QA: API: New Scala APIs, docs · 60022bfd
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      API review for 2.1, except ```LSH``` related classes which are still under development.
      
      ## How was this patch tested?
      Only doc changes, no new tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #16009 from yanboliang/spark-18318.
      [SPARK-18612][MLLIB] Delete broadcasted variable in LBFGS CostFun · c5a64d76
      Anthony Truchet authored
      ## What changes were proposed in this pull request?
      
      Fix a broadcasted variable leak occurring at each invocation of CostFun in L-BFGS.
      
      ## How was this patch tested?
      
Unit tests, plus a check that this fixed the fatal memory consumption on Criteo's use cases.
      
      This contribution is made on behalf of Criteo S.A.
      (http://labs.criteo.com/) under the terms of the Apache v2 License.
      
      Author: Anthony Truchet <a.truchet@criteo.com>
      
      Closes #16040 from AnthonyTruchet/SPARK-18612-lbfgs-cost-fun.
  17. Nov 29, 2016
  18. Nov 28, 2016
      [SPARK-18408][ML] API Improvements for LSH · 05f7c6ff
      Yun Ni authored
      ## What changes were proposed in this pull request?
      
      (1) Change output schema to `Array of Vector` instead of `Vectors`
      (2) Use `numHashTables` as the dimension of Array
      (3) Rename `RandomProjection` to `BucketedRandomProjectionLSH`, `MinHash` to `MinHashLSH`
      (4) Make `randUnitVectors/randCoefficients` private
      (5) Make Multi-Probe NN Search and `hashDistance` private for future discussion
      
      Saved for future PRs:
      (1) AND-amplification and `numHashFunctions` as the dimension of Vector are saved for a future PR.
(2) `hashDistance` and Multi-Probe NN Search need more discussion. The current implementation is just a backward-compatible one.
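The new output schema can be illustrated with a toy MinHash (coefficients and names below are made up for the demo, not Spark's `MinHashLSH` internals): each of the `numHashTables` hash functions produces one single-element vector, so the output is an array whose length equals the number of hash tables.

```scala
// Toy MinHash over sets of Ints: one (a, b) coefficient pair per hash table.
val prime = 2038074743L
val coefficients = Seq((1L, 7L), (3L, 11L), (5L, 13L)) // numHashTables = 3

// Output schema: an Array of single-element "vectors", one per hash table,
// mirroring the change from a single Vector to Array of Vector.
def hashSet(elems: Set[Int]): Array[Array[Double]] =
  coefficients.map { case (a, b) =>
    Array(elems.map(e => ((a * e + b) % prime).toDouble).min)
  }.toArray

val sig = hashSet(Set(1, 2, 3))
```

Keeping each hash table's output as its own vector leaves room for the deferred AND-amplification work, where each vector would grow to `numHashFunctions` entries.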
      
      ## How was this patch tested?
Related unit tests are modified to make sure the performance of LSH is preserved and the outputs of the APIs meet expectations.
      
      Author: Yun Ni <yunn@uber.com>
      Author: Yunni <Euler57721@gmail.com>
      
      Closes #15874 from Yunni/SPARK-18408-yunn-api-improvements.
  19. Nov 26, 2016
  20. Nov 25, 2016
      [SPARK-18356][ML] Improve MLKmeans Performance · 445d4d9e
      Zakaria_Hili authored
      ## What changes were proposed in this pull request?
      
Spark KMeans fit() doesn't cache the input RDD, which generates a lot of warnings:
 WARN KMeans: The input data is not directly cached, which may hurt performance if its parent RDDs are also uncached.
So KMeans should cache the internal RDD before calling the MLlib KMeans algorithm; this helped improve Spark KMeans performance by 14%.
      
      https://github.com/ZakariaHili/spark/commit/a9cf905cf7dbd50eeb9a8b4f891f2f41ea672472
      
      hhbyyh
      ## How was this patch tested?
Passes KMeans tests and existing tests.
      
      Author: Zakaria_Hili <zakahili@gmail.com>
      Author: HILI Zakaria <zakahili@gmail.com>
      
      Closes #15965 from ZakariaHili/zakbranch.
      [SPARK-3359][BUILD][DOCS] More changes to resolve javadoc 8 errors that will... · 51b1c155
      hyukjinkwon authored
      [SPARK-3359][BUILD][DOCS] More changes to resolve javadoc 8 errors that will help unidoc/genjavadoc compatibility
      
      ## What changes were proposed in this pull request?
      
This PR only tries to fix things that look pretty straightforward and were fixed in other previous PRs before.
      
      This PR roughly fixes several things as below:
      
- Fix unrecognisable class and method links in javadoc by changing them from `[[..]]` to `` `...` ``
      
        ```
        [error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/DataStreamReader.java:226: error: reference not found
        [error]    * Loads text files and returns a {link DataFrame} whose schema starts with a string column named
        ```
      
      - Fix an exception annotation and remove code backticks in `throws` annotation
      
        Currently, sbt unidoc with Java 8 complains as below:
      
        ```
        [error] .../java/org/apache/spark/sql/streaming/StreamingQuery.java:72: error: unexpected text
        [error]    * throws StreamingQueryException, if <code>this</code> query has terminated with an exception.
        ```
      
        `throws` should specify the correct class name from `StreamingQueryException,` to `StreamingQueryException` without backticks. (see [JDK-8007644](https://bugs.openjdk.java.net/browse/JDK-8007644)).
      
      - Fix `[[http..]]` to `<a href="http..."></a>`.
      
        ```diff
        -   * [[https://blogs.oracle.com/java-platform-group/entry/diagnosing_tls_ssl_and_https Oracle
        -   * blog page]].
        +   * <a href="https://blogs.oracle.com/java-platform-group/entry/diagnosing_tls_ssl_and_https">
        +   * Oracle blog page</a>.
        ```
      
         `[[http...]]` link markdown in scaladoc is unrecognisable in javadoc.
      
- It seems a class can't have a `return` annotation. So, two such cases were removed.
      
        ```
        [error] .../java/org/apache/spark/mllib/regression/IsotonicRegression.java:27: error: invalid use of return
        [error]    * return New instance of IsotonicRegression.
        ```
      
- Fix `<` to `&lt;` and `>` to `&gt;` according to HTML rules.
      
      - Fix `</p>` complaint
      
      - Exclude unrecognisable in javadoc, `constructor`, `todo` and `groupname`.
      
      ## How was this patch tested?
      
      Manually tested by `jekyll build` with Java 7 and 8
      
      ```
      java version "1.7.0_80"
      Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
      Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
      ```
      
      ```
      java version "1.8.0_45"
      Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
      Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
      ```
      
Note: this does not yet make sbt unidoc succeed with Java 8, but it reduces the number of errors with Java 8.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15999 from HyukjinKwon/SPARK-3359-errors.
  21. Nov 24, 2016
  22. Nov 22, 2016
      [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data · 982b82e3
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
* Fix SparkR ```spark.glm``` errors when fitting on collinear data, since ```standard error of coefficients, t value and p value``` are not available in this condition.
* Scala/Python GLM summary should throw an exception if users request ```standard error of coefficients, t value and p value``` when the underlying WLS was solved by local "l-bfgs".
      
      ## How was this patch tested?
      Add unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15930 from yanboliang/spark-18501.
  23. Nov 21, 2016
      [SPARK-18282][ML][PYSPARK] Add python clustering summaries for GMM and BKM · e811fbf9
      sethah authored
      ## What changes were proposed in this pull request?
      
      Add model summary APIs for `GaussianMixtureModel` and `BisectingKMeansModel` in pyspark.
      
      ## How was this patch tested?
      
      Unit tests.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #15777 from sethah/pyspark_cluster_summaries.
  24. Nov 19, 2016
      [SPARK-18456][ML][FOLLOWUP] Use matrix abstraction for coefficients in LogisticRegression training · 856e0042
      sethah authored
      ## What changes were proposed in this pull request?
      
      This is a follow up to some of the discussion [here](https://github.com/apache/spark/pull/15593). During LogisticRegression training, we store the coefficients combined with intercepts as a flat vector, but a more natural abstraction is a matrix. Here, we refactor the code to use matrix where possible, which makes the code more readable and greatly simplifies the indexing.
      
      Note: We do not use a Breeze matrix for the cost function as was mentioned in the linked PR. This is because LBFGS/OWLQN require an implicit `MutableInnerProductModule[DenseMatrix[Double], Double]` which is not natively defined in Breeze. We would need to extend Breeze in Spark to define it ourselves. Also, we do not modify the `regParamL1Fun` because OWLQN in Breeze requires a `MutableEnumeratedCoordinateField[(Int, Int), DenseVector[Double]]` (since we still use a dense vector for coefficients). Here again we would have to extend Breeze inside Spark.
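The indexing simplification can be illustrated with plain arrays (a toy sketch, not the actual LogisticRegression internals): with a flat vector, reading the coefficient for (class j, feature i) requires manual index arithmetic, while a matrix view over the same data makes the lookup direct.

```scala
val numClasses = 3
val numFeatures = 2

// Row-major flat storage: class-by-class blocks of feature coefficients.
val flat = Array(0.1, 0.2, 0.3, 0.4, 0.5, 0.6)

// Flat-vector access: index arithmetic on every lookup.
def flatCoef(j: Int, i: Int): Double = flat(j * numFeatures + i)

// Matrix abstraction: same underlying data, direct two-index lookup.
val matrix: Array[Array[Double]] = flat.grouped(numFeatures).toArray
```

The refactor's point is exactly this: every `j * numFeatures + i` scattered through the training loop becomes a readable `matrix(j, i)`-style access.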
      
      ## How was this patch tested?
      
      This is internal code refactoring - the current unit tests passing show us that the change did not break anything. No added functionality in this patch.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #15893 from sethah/logreg_refactor.
      [SPARK-18445][BUILD][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note... · d5b1d5fc
      hyukjinkwon authored
      [SPARK-18445][BUILD][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that`/`'''Note:'''` across Scala/Java API documentation
      
      ## What changes were proposed in this pull request?
      
It seems that in Scala/Java, the following forms are used inconsistently:
      
      - `Note:`
      - `NOTE:`
      - `Note that`
      - `'''Note:'''`
      - `note`
      
      This PR proposes to fix those to `note` to be consistent.
      
      **Before**
      
      - Scala
        ![2016-11-17 6 16 39](https://cloud.githubusercontent.com/assets/6477701/20383180/1a7aed8c-acf2-11e6-9611-5eaf6d52c2e0.png)
      
      - Java
        ![2016-11-17 6 14 41](https://cloud.githubusercontent.com/assets/6477701/20383096/c8ffc680-acf1-11e6-914a-33460bf1401d.png)
      
      **After**
      
      - Scala
        ![2016-11-17 6 16 44](https://cloud.githubusercontent.com/assets/6477701/20383167/09940490-acf2-11e6-937a-0d5e1dc2cadf.png)
      
      - Java
        ![2016-11-17 6 13 39](https://cloud.githubusercontent.com/assets/6477701/20383132/e7c2a57e-acf1-11e6-9c47-b849674d4d88.png)
      
      ## How was this patch tested?
      
      The notes were found via
      
      ```bash
      grep -r "NOTE: " . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// NOTE: " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \ # note that this is a regular expression. So actual matches were mostly `org/apache/spark/api/java/functions ...`
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      ```bash
      grep -r "Note that " . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// Note that " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      ```bash
      grep -r "Note: " . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// Note: " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      ```bash
      grep -r "'''Note:'''" . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// '''Note:''' " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      And then fixed one by one comparing with API documentation/access modifiers.
      
      After that, manually tested via `jekyll build`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15889 from HyukjinKwon/SPARK-18437.
  25. Nov 17, 2016
      [SPARK-18480][DOCS] Fix wrong links for ML guide docs · cdaf4ce9
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
1. There are two `[Graph.partitionBy]` links in `graphx-programming-guide.md`; the first one had no effect.
2. `DataFrame`, `Transformer`, `Pipeline` and `Parameter` in `ml-pipeline.md` were linked to `ml-guide.html` by mistake.
3. `PythonMLLibAPI` in `mllib-linear-methods.md` was not accessible, because the class `PythonMLLibAPI` is private.
4. Other link updates.
      ## How was this patch tested?
Manual tests.
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #15912 from zhengruifeng/md_fix.
      [SPARK-17462][MLLIB]use VersionUtils to parse Spark version strings · de77c677
      VinceShieh authored
      ## What changes were proposed in this pull request?
      
Several places in MLlib use custom regexes or other approaches to parse Spark version strings.
Those should be fixed to use VersionUtils. This PR replaces the custom regexes with
VersionUtils to get the Spark version numbers.
      ## How was this patch tested?
      
      Existing tests.
      
Signed-off-by: VinceShieh <vincent.xie@intel.com>
      
      Author: VinceShieh <vincent.xie@intel.com>
      
      Closes #15055 from VinceShieh/SPARK-17462.