  1. Apr 14, 2017
  2. Mar 28, 2017
  3. Mar 21, 2017
  4. Feb 25, 2017
  5. Feb 13, 2017
  6. Feb 07, 2017
  7. Jan 25, 2017
    • aokolnychyi's avatar
      [SPARK-16046][DOCS] Aggregations in the Spark SQL programming guide · e2f77392
      aokolnychyi authored
      ## What changes were proposed in this pull request?
      
      - A separate subsection for Aggregations under “Getting Started” in the Spark SQL programming guide. It mentions which aggregate functions are predefined and how users can create their own.
      - Examples of using the `UserDefinedAggregateFunction` abstract class for untyped aggregations in Java and Scala.
      - Examples of using the `Aggregator` abstract class for type-safe aggregations in Java and Scala.
      - Python is not covered.
      - The PR might not resolve the ticket since I do not know what exactly was planned by the author.
      
In total, there are four new standalone examples that can be executed via `spark-submit` or `run-example`. The updated Spark SQL programming guide references these examples and does not contain hard-coded snippets.
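The contract behind the `Aggregator` class the new examples demonstrate (an initial `zero` buffer, a per-value `reduce`, a cross-partition `merge`, and a final `finish`) can be sketched outside Spark in plain Python. This is an illustrative model only, not Spark's API; the `Average` aggregator and the driver function are hypothetical names:

```python
# Illustrative sketch of the zero / reduce / merge / finish contract used by
# Spark's typed Aggregator; `Average` and `aggregate` are made-up names.

class Average:
    def zero(self):
        # Initial intermediate state: (running sum, count).
        return (0.0, 0)

    def reduce(self, buffer, value):
        # Fold one input value into the buffer.
        s, n = buffer
        return (s + value, n + 1)

    def merge(self, b1, b2):
        # Combine two partial buffers (e.g. from different partitions).
        return (b1[0] + b2[0], b1[1] + b2[1])

    def finish(self, buffer):
        # Turn the final buffer into the output value.
        s, n = buffer
        return s / n if n else 0.0


def aggregate(agg, partitions):
    # Simulate per-partition reduction followed by a merge, as Spark would.
    buffers = []
    for part in partitions:
        buf = agg.zero()
        for v in part:
            buf = agg.reduce(buf, v)
        buffers.append(buf)
    result = agg.zero()
    for buf in buffers:
        result = agg.merge(result, buf)
    return agg.finish(result)


salaries = [[3000.0, 4000.0], [5000.0]]
print(aggregate(Average(), salaries))  # 4000.0
```

Because `merge` is associative, the same result is obtained no matter how the input is split into partitions.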
      
      ## How was this patch tested?
      
      The patch was tested locally by building the docs. The examples were run as well.
      
![image](https://cloud.githubusercontent.com/assets/6235869/21292915/04d9d084-c515-11e6-811a-999d598dffba.png)
      
      Author: aokolnychyi <okolnychyyanton@gmail.com>
      
      Closes #16329 from aokolnychyi/SPARK-16046.
      
      (cherry picked from commit 3fdce814)
Signed-off-by: gatorsmile <gatorsmile@gmail.com>
      e2f77392
  8. Jan 12, 2017
  9. Dec 15, 2016
  10. Dec 08, 2016
    • Yanbo Liang's avatar
      [SPARK-18325][SPARKR][ML] SparkR ML wrappers example code and user guide · 9095c152
      Yanbo Liang authored
      
      ## What changes were proposed in this pull request?
      * Add all R examples for ML wrappers which were added during 2.1 release cycle.
      * Split the whole ```ml.R``` example file into individual example for each algorithm, which will be convenient for users to rerun them.
      * Add corresponding examples to ML user guide.
      * Update ML section of SparkR user guide.
      
Note: The MLlib Scala/Java/Python examples are kept consistent with one another, but the SparkR examples may differ from them, since R users may use the algorithms in a different way, for example, using an R ```formula``` to specify ```featuresCol``` and ```labelCol```.
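The formula-based style mentioned in the note can be illustrated without R: an R formula such as `label ~ f1 + f2` encodes in one string what the other language APIs pass as explicit `labelCol`/`featuresCol` parameters. The tiny parser below is a hypothetical sketch (only the simplest `~`/`+` case), not SparkR's `RFormula` implementation:

```python
# Hypothetical sketch: split an R-style formula string into the label column
# and feature columns. SparkR's real RFormula handles far more syntax
# (interactions, intercept terms, "." expansion, etc.).

def parse_formula(formula):
    label, rhs = formula.split("~")
    features = [term.strip() for term in rhs.split("+")]
    return label.strip(), features

label_col, feature_cols = parse_formula("Survived ~ Age + Fare")
print(label_col, feature_cols)  # Survived ['Age', 'Fare']
```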
      
      ## How was this patch tested?
      Run all examples manually.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #16148 from yanboliang/spark-18325.
      
      (cherry picked from commit 9bf8f3cd)
Signed-off-by: Yanbo Liang <ybliang8@gmail.com>
      9095c152
    • Patrick Wendell's avatar
      48aa6775
    • Patrick Wendell's avatar
      Preparing Spark release v2.1.0-rc2 · 08071749
      Patrick Wendell authored
      08071749
  11. Dec 07, 2016
  12. Dec 03, 2016
    • Yunni's avatar
      [SPARK-18081][ML][DOCS] Add user guide for Locality Sensitive Hashing(LSH) · 28f698b4
      Yunni authored
      
      ## What changes were proposed in this pull request?
The user guide for LSH is added to ml-features.md, with several Scala/Java examples in spark-examples.
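The core idea the guide documents can be sketched in a few lines of plain Python using sign random projections, one classic LSH family: similar vectors land in the same hash bucket with high probability. This is conceptual only; Spark's `BucketedRandomProjectionLSH` and `MinHashLSH` use different (though related) constructions:

```python
# Conceptual sketch of sign-random-projection LSH (not Spark's implementation):
# each random hyperplane contributes one signature bit, the side of the plane
# the vector falls on.
import random

def signature(vec, planes):
    # One bit per hyperplane: 1 if the dot product is non-negative.
    return tuple(int(sum(p * x for p, x in zip(plane, vec)) >= 0)
                 for plane in planes)

rng = random.Random(42)
planes = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(8)]

v = [1.0, 2.0, 3.0]
# Positive scaling never flips a dot-product sign, so scaled copies of a
# vector always share a signature (they point in the same direction).
print(signature(v, planes) == signature([2 * x for x in v], planes))  # True
```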
      
      ## How was this patch tested?
      Doc has been generated through Jekyll, and checked through manual inspection.
      
      Author: Yunni <Euler57721@gmail.com>
      Author: Yun Ni <yunn@uber.com>
      Author: Joseph K. Bradley <joseph@databricks.com>
      Author: Yun Ni <Euler57721@gmail.com>
      
      Closes #15795 from Yunni/SPARK-18081-lsh-guide.
      
      (cherry picked from commit 34777184)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
      28f698b4
  13. Nov 28, 2016
  14. Nov 16, 2016
    • Xianyang Liu's avatar
      [SPARK-18420][BUILD] Fix the errors caused by lint check in Java · b0ae8712
      Xianyang Liu authored
      
## What changes were proposed in this pull request?

A small fix for the errors reported by the Java lint check:
      
      - Clear unused objects and `UnusedImports`.
- Add comments around the method `finalize` of `NioBufferedFileInputStream` to turn off checkstyle.
      - Cut the line which is longer than 100 characters into two lines.
      
## How was this patch tested?

Travis CI:
      ```
      $ build/mvn -T 4 -q -DskipTests -Pyarn -Phadoop-2.3 -Pkinesis-asl -Phive -Phive-thriftserver install
      $ dev/lint-java
      ```
      Before:
      ```
      Checkstyle checks failed at following occurrences:
      [ERROR] src/main/java/org/apache/spark/network/util/TransportConf.java:[21,8] (imports) UnusedImports: Unused import - org.apache.commons.crypto.cipher.CryptoCipherFactory.
      [ERROR] src/test/java/org/apache/spark/network/sasl/SparkSaslSuite.java:[516,5] (modifier) RedundantModifier: Redundant 'public' modifier.
      [ERROR] src/main/java/org/apache/spark/io/NioBufferedFileInputStream.java:[133] (coding) NoFinalizer: Avoid using finalizer method.
      [ERROR] src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeMapData.java:[71] (sizes) LineLength: Line is longer than 100 characters (found 113).
      [ERROR] src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeArrayData.java:[112] (sizes) LineLength: Line is longer than 100 characters (found 110).
      [ERROR] src/test/java/org/apache/spark/sql/catalyst/expressions/HiveHasherSuite.java:[31,17] (modifier) ModifierOrder: 'static' modifier out of order with the JLS suggestions.
[ERROR] src/main/java/org/apache/spark/examples/ml/JavaLogisticRegressionWithElasticNetExample.java:[64] (sizes) LineLength: Line is longer than 100 characters (found 103).
      [ERROR] src/main/java/org/apache/spark/examples/ml/JavaInteractionExample.java:[22,8] (imports) UnusedImports: Unused import - org.apache.spark.ml.linalg.Vectors.
      [ERROR] src/main/java/org/apache/spark/examples/ml/JavaInteractionExample.java:[51] (regexp) RegexpSingleline: No trailing whitespace allowed.
      ```
      
      After:
      ```
      $ build/mvn -T 4 -q -DskipTests -Pyarn -Phadoop-2.3 -Pkinesis-asl -Phive -Phive-thriftserver install
      $ dev/lint-java
      Using `mvn` from path: /home/travis/build/ConeyLiu/spark/build/apache-maven-3.3.9/bin/mvn
      Checkstyle checks passed.
      ```
      
      Author: Xianyang Liu <xyliu0530@icloud.com>
      
      Closes #15865 from ConeyLiu/master.
      
      (cherry picked from commit 7569cf6c)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      b0ae8712
    • uncleGen's avatar
      [SPARK-18410][STREAMING] Add structured kafka example · 6b2301b8
      uncleGen authored
      
      ## What changes were proposed in this pull request?
      
This PR provides a structured streaming Kafka word count example.
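What the example computes can be modeled without Spark or Kafka: a running aggregate that each micro-batch of input lines updates incrementally, which is the essence of a streaming word count. A plain-Python sketch (illustrative only, no Structured Streaming API):

```python
# Conceptual model of a streaming word count: state persists across
# micro-batches, and each batch folds its lines into the running counts.
from collections import Counter

def update(counts, batch_of_lines):
    # Process one micro-batch: tokenize on whitespace and update the state.
    for line in batch_of_lines:
        counts.update(line.split())
    return counts

state = Counter()
update(state, ["spark streams kafka", "kafka kafka"])  # first micro-batch
update(state, ["spark"])                               # second micro-batch
print(state["kafka"], state["spark"])  # 3 2
```

In Structured Streaming the engine maintains this state for you and the query is expressed declaratively; the incremental-update behavior is the same.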
      
      ## How was this patch tested?
      
      Author: uncleGen <hustyugm@gmail.com>
      
      Closes #15849 from uncleGen/SPARK-18410.
      
      (cherry picked from commit e6145772)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      6b2301b8
  15. Nov 15, 2016
  16. Nov 08, 2016
  17. Oct 28, 2016
    • Jagadeesan's avatar
[SPARK-18133][EXAMPLES][ML] Python ML Pipeline Example has syntax errors · e9746f87
      Jagadeesan authored
      ## What changes were proposed in this pull request?
      
In Python 3, there is only one integer type (`int`), which mostly behaves like the `long` type in Python 2. Since Python 3 won't accept the "L" suffix, it was removed from all examples.
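A quick demonstration of why the suffix had to go: Python 3's single `int` type is arbitrary-precision, and the Python 2 literal form `1L` no longer even parses.

```python
# Python 3 has one arbitrary-precision integer type; no "L" suffix needed.
big = 10 ** 30
print(isinstance(big, int))  # True, no separate long type

# The Python 2 literal `1L` is rejected at parse time in Python 3:
try:
    compile("x = 1L", "<example>", "exec")
except SyntaxError:
    print("SyntaxError")  # printed on Python 3
```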
      
      ## How was this patch tested?
      
      Unit tests.
      
      
      Author: Jagadeesan <as2@us.ibm.com>
      
      Closes #15660 from jagadeesanas2/SPARK-18133.
      e9746f87
  18. Oct 26, 2016
    • Xin Ren's avatar
      [SPARK-14300][DOCS][MLLIB] Scala MLlib examples code merge and clean up · dcdda197
      Xin Ren authored
      ## What changes were proposed in this pull request?
      
      https://issues.apache.org/jira/browse/SPARK-14300
      
Duplicated code was found in scala/examples/mllib; both files below are deleted in this PR:
      
      - DenseGaussianMixture.scala
      - StreamingLinearRegression.scala
      
      ## delete reasons:
      
      #### delete: mllib/DenseGaussianMixture.scala
      
      - duplicate of mllib/GaussianMixtureExample
      
      #### delete: mllib/StreamingLinearRegression.scala
      
      - duplicate of mllib/StreamingLinearRegressionExample
      
When merging and cleaning up this code, be sure not to disturb the surrounding example on/off blocks.
      
      ## How was this patch tested?
      
Tested manually with `SKIP_API=1 jekyll` to make sure the docs build well.
      
      Author: Xin Ren <iamshrek@126.com>
      
      Closes #12195 from keypointt/SPARK-14300.
      dcdda197
  19. Oct 24, 2016
    • Sean Owen's avatar
      [SPARK-17810][SQL] Default spark.sql.warehouse.dir is relative to local FS but... · 4ecbe1b9
      Sean Owen authored
      [SPARK-17810][SQL] Default spark.sql.warehouse.dir is relative to local FS but can resolve as HDFS path
      
      ## What changes were proposed in this pull request?
      
Always resolve spark.sql.warehouse.dir as a local path, relative to the working directory rather than the home directory.
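The intended behavior can be sketched in plain Python (illustrative only; this is not Spark's actual resolution code, and `resolve_warehouse_dir` is a made-up helper): a relative warehouse directory is anchored at the current working directory and marked as a local `file:` path, so it cannot be reinterpreted against a default HDFS filesystem.

```python
# Hypothetical sketch of the fix's behavior: a relative warehouse dir becomes
# an absolute local path under the working directory, with an explicit file:
# scheme so it never resolves against HDFS.
import os

def resolve_warehouse_dir(configured="spark-warehouse"):
    return "file:" + os.path.abspath(configured)

path = resolve_warehouse_dir()
print(path.startswith("file:") and os.path.isabs(path[len("file:"):]))  # True
```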
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15382 from srowen/SPARK-17810.
      4ecbe1b9
  20. Oct 17, 2016
    • Maxime Rihouey's avatar
      Fix example of tf_idf with minDocFreq · e3bf37fa
      Maxime Rihouey authored
      ## What changes were proposed in this pull request?
      
The Python example for tf_idf with the `minDocFreq` parameter is not set up properly, because the same variable is used to transform the documents both with and without `minDocFreq`.
The `IDF(minDocFreq=2)` model is stored in the variable "idfIgnore", but the original variable "idf" is then used to transform "tf" instead of "idfIgnore".
      
      ## How was this patch tested?
      
      Before the results for "tfidf" and "tfidfIgnore" were the same:
      tfidf:
      (1048576,[1046921],[3.75828890549])
      (1048576,[1046920],[3.75828890549])
      (1048576,[1046923],[3.75828890549])
      (1048576,[892732],[3.75828890549])
      (1048576,[892733],[3.75828890549])
      (1048576,[892734],[3.75828890549])
      tfidfIgnore:
      (1048576,[1046921],[3.75828890549])
      (1048576,[1046920],[3.75828890549])
      (1048576,[1046923],[3.75828890549])
      (1048576,[892732],[3.75828890549])
      (1048576,[892733],[3.75828890549])
      (1048576,[892734],[3.75828890549])
      
      After the fix those are how they should be:
      tfidf:
      (1048576,[1046921],[3.75828890549])
      (1048576,[1046920],[3.75828890549])
      (1048576,[1046923],[3.75828890549])
      (1048576,[892732],[3.75828890549])
      (1048576,[892733],[3.75828890549])
      (1048576,[892734],[3.75828890549])
      tfidfIgnore:
      (1048576,[1046921],[0.0])
      (1048576,[1046920],[0.0])
      (1048576,[1046923],[0.0])
      (1048576,[892732],[0.0])
      (1048576,[892733],[0.0])
      (1048576,[892734],[0.0])
      
      Author: Maxime Rihouey <maxime.rihouey@gmail.com>
      
      Closes #15503 from maximerihouey/patch-1.
      e3bf37fa
  21. Oct 10, 2016
    • Wenchen Fan's avatar
      [SPARK-17338][SQL] add global temp view · 23ddff4b
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
Global temporary view is a cross-session temporary view, which means it's shared among all sessions. Its lifetime is the lifetime of the Spark application, i.e. it will be automatically dropped when the application terminates. It's tied to a system-preserved database `global_temp` (configurable via SparkConf), and we must use the qualified name to refer to a global temp view, e.g. `SELECT * FROM global_temp.view1`.
      
      changes for `SessionCatalog`:
      
1. add a new field `globalTempViews: GlobalTempViewManager`, to access the shared global temp views, and the global temp db name.
2. `createDatabase` will fail if users want to create `global_temp`, which is system preserved.
3. `setCurrentDatabase` will fail if users want to set `global_temp`, which is system preserved.
      4. add `createGlobalTempView`, which is used in `CreateViewCommand` to create global temp views.
      5. add `dropGlobalTempView`, which is used in `CatalogImpl` to drop global temp view.
      6. add `alterTempViewDefinition`, which is used in `AlterViewAsCommand` to update the view definition for local/global temp views.
      7. `renameTable`/`dropTable`/`isTemporaryTable`/`lookupRelation`/`getTempViewOrPermanentTableMetadata`/`refreshTable` will handle global temp views.
      
      changes for SQL commands:
      
1. `CreateViewCommand`/`AlterViewAsCommand` are updated to support global temp views
2. `ShowTablesCommand` outputs a new column `database`, which is used to distinguish global and local temp views.
3. other commands can also handle global temp views if they call `SessionCatalog` APIs which accept global temp views, e.g. `DropTableCommand`, `AlterTableRenameCommand`, `ShowColumnsCommand`, etc.
      
changes for other public APIs:
      
      1. add a new method `dropGlobalTempView` in `Catalog`
      2. `Catalog.findTable` can find global temp view
      3. add a new method `createGlobalTempView` in `Dataset`
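The lookup rules described above can be modeled in a few lines of plain Python (a toy model with made-up class names, not Spark's `SessionCatalog`): local temp views are per-session, while global temp views live in one shared manager and are only reachable through the reserved `global_temp` qualifier.

```python
# Toy model of session-local vs. global temp view resolution (illustrative
# only; class and method names here do not mirror Spark's internals).
GLOBAL_TEMP_DB = "global_temp"

class GlobalTempViewManager:
    def __init__(self):
        self.views = {}  # shared by every session in the application

class Session:
    def __init__(self, global_views):
        self.local_views = {}            # visible only to this session
        self.global_views = global_views  # shared manager

    def create_temp_view(self, name, data):
        self.local_views[name] = data

    def create_global_temp_view(self, name, data):
        self.global_views.views[name] = data

    def lookup(self, name):
        db, _, view = name.partition(".")
        if view:  # qualified name, e.g. "global_temp.view1"
            if db != GLOBAL_TEMP_DB:
                raise KeyError(name)
            return self.global_views.views[view]
        return self.local_views[name]  # unqualified: session-local only

shared = GlobalTempViewManager()
s1, s2 = Session(shared), Session(shared)
s1.create_global_temp_view("view1", [1, 2, 3])
print(s2.lookup("global_temp.view1"))  # [1, 2, 3]  visible across sessions
```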
      
      ## How was this patch tested?
      
      new tests in `SQLViewSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #14897 from cloud-fan/global-temp-view.
      23ddff4b
  22. Oct 05, 2016
    • sethah's avatar
      [SPARK-17239][ML][DOC] Update user guide for multiclass logistic regression · 9df54f53
      sethah authored
      ## What changes were proposed in this pull request?
Updates the user guide to reflect that LogisticRegression now supports multiclass classification. Also adds new examples showing multiclass training.
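At prediction time, the multinomial case the updated guide covers reduces to a softmax over one linear score per class. A minimal plain-Python sketch with made-up coefficients (not MLlib code):

```python
# Multinomial logistic regression prediction: one linear score per class,
# turned into probabilities with a numerically stable softmax.
import math

def predict(x, coefficients, intercepts):
    scores = [sum(w * xi for w, xi in zip(row, x)) + b
              for row, b in zip(coefficients, intercepts)]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return probs.index(max(probs)), probs

# Hypothetical 3-class model over 2 features.
label, probs = predict([1.0, 2.0],
                       [[0.5, -0.2], [-0.3, 0.8], [0.1, 0.1]],
                       [0.0, 0.1, -0.1])
print(label, round(sum(probs), 6))  # class 1 wins; probabilities sum to 1.0
```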
      
      ## How was this patch tested?
      Ran locally using spark-submit, run-example, and copy/paste from user guide into shells. Generated docs and verified correct output.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #15349 from sethah/SPARK-17239.
      9df54f53
  23. Sep 26, 2016
    • Justin Pihony's avatar
[SPARK-14525][SQL] Make DataFrameWriter.save work for jdbc · 50b89d05
      Justin Pihony authored
      ## What changes were proposed in this pull request?
      
This change modifies the implementation of DataFrameWriter.save so that it works with jdbc, and the call to jdbc merely delegates to save.
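The delegation pattern described here can be sketched in plain Python (a hypothetical mock, not Spark's `DataFrameWriter`): the convenience method just configures the format and options, then routes through the one generic `save` path.

```python
# Hypothetical mock of a writer where jdbc() delegates to the generic save()
# path instead of carrying its own write implementation.
class Writer:
    def __init__(self):
        self.fmt, self.options = None, {}

    def format(self, fmt):
        self.fmt = fmt
        return self

    def option(self, key, value):
        self.options[key] = value
        return self

    def save(self):
        # Single generic entry point; a real implementation would dispatch
        # on self.fmt to the matching data source.
        return (self.fmt, dict(self.options))

    def jdbc(self, url, table):
        # Convenience API: set up the jdbc source, then delegate to save().
        return (self.format("jdbc")
                    .option("url", url)
                    .option("dbtable", table)
                    .save())

print(Writer().jdbc("jdbc:postgresql://db/test", "people"))
# ('jdbc', {'url': 'jdbc:postgresql://db/test', 'dbtable': 'people'})
```

Keeping one save path means option handling and source resolution live in a single place, which is the reconciliation this PR aims for.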
      
      ## How was this patch tested?
      
      This was tested via unit tests in the JDBCWriteSuite, of which I added one new test to cover this scenario.
      
      ## Additional details
      
      rxin This seems to have been most recently touched by you and was also commented on in the JIRA.
      
      This contribution is my original work and I license the work to the project under the project's open source license.
      
      Author: Justin Pihony <justin.pihony@gmail.com>
      Author: Justin Pihony <justin.pihony@typesafe.com>
      
      Closes #12601 from JustinPihony/jdbc_reconciliation.
      50b89d05
  24. Sep 12, 2016
  25. Sep 03, 2016
  26. Aug 27, 2016
    • Sean Owen's avatar
      [SPARK-17001][ML] Enable standardScaler to standardize sparse vectors when withMean=True · e07baf14
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Allow centering / mean scaling of sparse vectors in StandardScaler, if requested. This is for compatibility with `VectorAssembler` in common usages.
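Why this needed enabling at all: subtracting the mean turns a sparse vector's implicit zeros into non-zeros, densifying it, which is why centering sparse input was previously disallowed. A toy plain-Python sketch of the now-permitted behavior (per-element, with a single mean/std for simplicity; not MLlib code):

```python
# Toy sketch: centering destroys sparsity, because every stored-as-zero entry
# becomes (0 - mean) != 0 once the mean is subtracted.
def standardize(vec, mean, std, with_mean=True):
    centered = [x - mean if with_mean else x for x in vec]
    return [x / std if std else x for x in centered]

sparse = [0.0, 0.0, 5.0]  # mostly zeros when stored sparsely
out = standardize(sparse, mean=5 / 3, std=1.0)
print(all(x != 0.0 for x in out))  # True: no zero survives centering
```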
      
      ## How was this patch tested?
      
Jenkins tests, including new cases to reflect the new behavior.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14663 from srowen/SPARK-17001.
      e07baf14
  27. Aug 24, 2016
    • Weiqing Yang's avatar
      [MINOR][BUILD] Fix Java CheckStyle Error · 673a80d2
      Weiqing Yang authored
      ## What changes were proposed in this pull request?
As Spark 2.0.1 will be released soon (mentioned on the Spark dev mailing list), it's better to fix the code style errors, besides the critical bugs, before the release.
      
      Before:
      ```
      ./dev/lint-java
      Checkstyle checks failed at following occurrences:
      [ERROR] src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java:[525] (sizes) LineLength: Line is longer than 100 characters (found 119).
      [ERROR] src/main/java/org/apache/spark/examples/sql/streaming/JavaStructuredNetworkWordCount.java:[64] (sizes) LineLength: Line is longer than 100 characters (found 103).
      ```
      After:
      ```
      ./dev/lint-java
      Using `mvn` from path: /usr/local/bin/mvn
      Checkstyle checks passed.
      ```
      ## How was this patch tested?
      Manual.
      
      Author: Weiqing Yang <yangweiqing001@gmail.com>
      
      Closes #14768 from Sherry302/fixjavastyle.
      673a80d2
  28. Aug 20, 2016
    • wm624@hotmail.com's avatar
      [SPARKR][EXAMPLE] change example APP name · 3e5fdeb3
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      
For the R SQL example, the appName is "MyApp", while in the Scala, Java, and Python examples it is "x Spark SQL basic example".

I made the R example consistent with the other examples.
      
      ## How was this patch tested?
      
      
      Manual test
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #14703 from wangmiao1981/example.
      3e5fdeb3