  1. Jul 19, 2016
    • [SPARK-16535][BUILD] In pom.xml, remove groupId which is redundant definition... · 21a6dd2a
      Xin Ren authored
      [SPARK-16535][BUILD] In pom.xml, remove groupId which is redundant definition and inherited from the parent
      
      https://issues.apache.org/jira/browse/SPARK-16535
      
      ## What changes were proposed in this pull request?
      
      When scanning through the pom.xml files of the sub-projects, I found the warning below (screenshot attached):
      ```
      Definition of groupId is redundant, because it's inherited from the parent
      ```
      ![screen shot 2016-07-13 at 3 13 11 pm](https://cloud.githubusercontent.com/assets/3925641/16823121/744f893e-4916-11e6-8a52-042f83b9db4e.png)
      
      I've tried removing some of the lines with the groupId definition, and the build on my local machine is still OK.
      ```
      <groupId>org.apache.spark</groupId>
      ```
      I also noticed that `<maven.version>3.3.9</maven.version>` is used in Spark 2.x, and Maven 3.1+ supports versionless parent elements, which removes the need to specify the parent version in sub-modules.
      
      ref: http://stackoverflow.com/questions/3157240/maven-3-worth-it/3166762#3166762
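
      For illustration, a minimal sketch of a child-module pom after this change; the coordinates below are only examples, not an exact copy of any Spark module:

      ```xml
      <!-- Child module pom (sketch): groupId is inherited from <parent>, so the
           redundant <groupId> line outside the parent block can be dropped. -->
      <project>
        <parent>
          <groupId>org.apache.spark</groupId>
          <artifactId>spark-parent_2.11</artifactId>
          <version>2.1.0-SNAPSHOT</version>
          <relativePath>../pom.xml</relativePath>
        </parent>
        <!-- <groupId>org.apache.spark</groupId>  removed: inherited from the parent -->
        <artifactId>spark-core_2.11</artifactId>
      </project>
      ```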
      
      ## How was this patch tested?
      
      I tested by re-building the project, and the build succeeded.
      
      Author: Xin Ren <iamshrek@126.com>
      
      Closes #14189 from keypointt/SPARK-16535.
    • [MINOR][BUILD] Fix Java Linter `LineLength` errors · 556a9437
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR fixes four Java linter `LineLength` errors. They are all `LineLength` errors, but we had better remove all Java linter errors before the release.
      
      ## How was this patch tested?
      
      After the Jenkins build passes, run `./dev/lint-java`.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #14255 from dongjoon-hyun/minor_java_linter.
    • [DOC] improve python doc for rdd.histogram and dataframe.join · 6ee40d2c
      Mortada Mehyar authored
      ## What changes were proposed in this pull request?
      
      doc change only
      
      ## How was this patch tested?
      
      doc change only
      
      Author: Mortada Mehyar <mortada.mehyar@gmail.com>
      
      Closes #14253 from mortada/histogram_typos.
    • [SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update · 1426a080
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      This PR moves the last remaining hard-coded Scala example snippet from the SQL programming guide into `SparkSqlExample.scala`. It also renames all Scala/Java example files so that every "Sql" in the file names becomes "SQL".
      
      ## How was this patch tested?
      
      Manually verified the generated HTML page.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #14245 from liancheng/minor-scala-example-update.
    • [MINOR] Remove unused arg in als.py · e5fbb182
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      The second argument of the `update()` method is never used, so this PR deletes it.
      
      ## How was this patch tested?
      local run with `./bin/spark-submit examples/src/main/python/als.py`
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #14247 from zhengruifeng/als_refine.
  2. Jul 18, 2016
    • [SPARK-16615][SQL] Expose sqlContext in SparkSession · 69c77305
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch removes the private[spark] qualifier for SparkSession.sqlContext, as discussed in http://apache-spark-developers-list.1001551.n3.nabble.com/Re-transtition-SQLContext-to-SparkSession-td18342.html
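
      A minimal sketch of what this exposes (assuming a local Spark 2.x session):

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[*]").appName("demo").getOrCreate()
      // Before this patch, sqlContext was private[spark]; now it is publicly reachable.
      val sqlContext = spark.sqlContext
      sqlContext.sql("SELECT 1").show()
      ```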
      
      ## How was this patch tested?
      N/A - this is a visibility change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #14252 from rxin/SPARK-16615.
    • [HOTFIX] Fix Scala 2.10 compilation · c4524f51
      Reynold Xin authored
    • [SPARK-16590][SQL] Improve LogicalPlanToSQLSuite to check generated SQL directly · ea78edb8
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR improves `LogicalPlanToSQLSuite` to check the generated SQL directly by **structure**. So far, `LogicalPlanToSQLSuite` has relied on `checkHiveQl` to ensure **successful SQL generation** and **answer equality**. However, that does not guarantee that the generated SQL stays the same, or that a change to it would be noticed.
      
      ## How was this patch tested?
      
      Pass the Jenkins tests. This is only a test suite change.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #14235 from dongjoon-hyun/SPARK-16590.
    • [SPARKR][DOCS] minor code sample update in R programming guide · 75f0efe7
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      Fix code style from ad hoc review of RC4 doc
      
      ## How was this patch tested?
      
      manual
      
      shivaram
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #14250 from felixcheung/rdocs2rc4.
    • [SPARK-16515][SQL] set default record reader and writer for script transformation · 96e9afaa
      Daoyuan Wang authored
      ## What changes were proposed in this pull request?
      In ScriptInputOutputSchema, we read the default RecordReader and RecordWriter from the conf. Since Spark 2.0 has deleted those config keys from the Hive conf, we have to set the default reader/writer class names ourselves. Otherwise we get None for LazySimpleSerDe, and the data written cannot be read back by the script. The test case added worked fine with previous versions of Spark, but would fail now.
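
      For reference, a sketch of the script transformation path this affects (assumes Hive support and an existing table `t` with columns `key` and `value`; illustrative only, not the exact test added):

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().enableHiveSupport().getOrCreate()
      // Pipes each row through the `cat` script using the default record reader/writer.
      spark.sql("SELECT TRANSFORM (key, value) USING 'cat' AS (tKey, tValue) FROM t").show()
      ```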
      
      ## How was this patch tested?
      added a test case in SQLQuerySuite.
      
      Closes #14169
      
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #14249 from yhuai/scriptTransformation.
    • [SPARK-16351][SQL] Avoid per-record type dispatch in JSON when writing · 2877f1a5
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      Currently, `JacksonGenerator.apply` does type-based dispatch for every row in order to write the appropriate values.
      This does not have to happen per row, because the schema is already known.

      So, the appropriate writers can be created once according to the schema, and then applied to each row. This approach is similar to `CatalystWriteSupport`.
      
      This PR corrects `JacksonGenerator` so that it creates all writers for the schema once and then applies them to each row rather than type dispatching for every row.
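
      A minimal, self-contained sketch of the pattern (the names here are illustrative, not the actual `JacksonGenerator` internals):

      ```scala
      object SchemaDrivenWriters {
        sealed trait DataType
        case object IntType extends DataType
        case object StringType extends DataType

        type Row = Array[Any]
        type ValueWriter = (Row, Int) => String

        // Type dispatch happens here, once per schema field, not once per record.
        def makeWriter(dt: DataType): ValueWriter = dt match {
          case IntType    => (row, i) => row(i).toString
          case StringType => (row, i) => "\"" + row(i) + "\""
        }

        def main(args: Array[String]): Unit = {
          val schema  = Seq(IntType, StringType)
          val writers = schema.map(makeWriter)
          val rows    = Seq(Array[Any](1, "a"), Array[Any](2, "b"))
          // Each row reuses the precomputed writers instead of re-matching on types.
          for (row <- rows) {
            println(schema.indices.map(i => writers(i)(row, i)).mkString("[", ",", "]"))
          }
        }
      }
      ```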
      
      The benchmark was run with the code below:
      
      ```scala
      test("Benchmark for JSON writer") {
        val N = 500 << 8
        val row =
          """{"struct":{"field1": true, "field2": 92233720368547758070},
            "structWithArrayFields":{"field1":[4, 5, 6], "field2":["str1", "str2"]},
            "arrayOfString":["str1", "str2"],
            "arrayOfInteger":[1, 2147483647, -2147483648],
            "arrayOfLong":[21474836470, 9223372036854775807, -9223372036854775808],
            "arrayOfBigInteger":[922337203685477580700, -922337203685477580800],
            "arrayOfDouble":[1.2, 1.7976931348623157E308, 4.9E-324, 2.2250738585072014E-308],
            "arrayOfBoolean":[true, false, true],
            "arrayOfNull":[null, null, null, null],
            "arrayOfStruct":[{"field1": true, "field2": "str1"}, {"field1": false}, {"field3": null}],
            "arrayOfArray1":[[1, 2, 3], ["str1", "str2"]],
            "arrayOfArray2":[[1, 2, 3], [1.1, 2.1, 3.1]]
           }"""
        val df = spark.sqlContext.read.json(spark.sparkContext.parallelize(List.fill(N)(row)))
        val benchmark = new Benchmark("JSON writer", N)
        benchmark.addCase("writing JSON file", 10) { _ =>
          withTempPath { path =>
            df.write.format("json").save(path.getCanonicalPath)
          }
        }
        benchmark.run()
      }
      ```
      
      This produced the results below
      
      - **Before**
      
      ```
      JSON writer:                             Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      writing JSON file                             1675 / 1767          0.1       13087.5       1.0X
      ```
      
      - **After**
      
      ```
      JSON writer:                             Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      writing JSON file                             1597 / 1686          0.1       12477.1       1.0X
      ```
      
      In addition, I ran this benchmark 10 times for each and calculated the average elapsed time as below:
      
      | **Before** | **After**|
      |---------------|------------|
      |17478ms  |16669ms |
      
      This is roughly a 5% improvement.
      
      ## How was this patch tested?
      
      Existing tests should cover this.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #14028 from HyukjinKwon/SPARK-16351.
    • [SPARK-16055][SPARKR] warning added while using sparkPackages with spark-submit · 8ea3f4ea
      krishnakalyan3 authored
      ## What changes were proposed in this pull request?
      https://issues.apache.org/jira/browse/SPARK-16055
      When the sparkPackages argument is passed and we detect that we are in R script mode, we should print a warning that the --packages flag should be used with spark-submit.
      
      ## How was this patch tested?
      Tested locally on my system.
      
      Author: krishnakalyan3 <krishnakalyan3@gmail.com>
      
      Closes #14179 from krishnakalyan3/spark-pkg.
    • [MINOR][TYPO] fix fininsh typo · a529fc94
      WeichenXu authored
      ## What changes were proposed in this pull request?
      
      fininsh => finish
      
      ## How was this patch tested?
      
      N/A
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #14238 from WeichenXu123/fix_fininsh_typo.
    • [SPARK-16588][SQL] Deprecate monotonicallyIncreasingId in Scala/Java · 480c8706
      Reynold Xin authored
      This patch deprecates monotonicallyIncreasingId in Scala/Java, as done in Python.
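
      For reference, the non-deprecated spelling (a sketch assuming a local session):

      ```scala
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions.monotonically_increasing_id

      val spark = SparkSession.builder().master("local[*]").getOrCreate()
      // monotonically_increasing_id() replaces the deprecated monotonicallyIncreasingId().
      spark.range(3).select(monotonically_increasing_id()).show()
      ```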
      
      This patch was originally written by HyukjinKwon. Closes #14236.
  3. Jul 17, 2016
    • [SPARK-16027][SPARKR] Fix R tests SparkSession init/stop · d27fe9ba
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      Fix R SparkSession init/stop, and warnings of reusing existing Spark Context
      
      ## How was this patch tested?
      
      unit tests
      
      shivaram
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #14177 from felixcheung/rsessiontest.
    • [SPARK-16584][SQL] Move regexp unit tests to RegexpExpressionsSuite · 7b847580
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch moves regexp related unit tests from StringExpressionsSuite to RegexpExpressionsSuite to match the file name for regexp expressions.
      
      ## How was this patch tested?
      This is a test only change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #14230 from rxin/SPARK-16584.
  4. Jul 16, 2016
    • [SPARK-16507][SPARKR] Add a CRAN checker, fix Rd aliases · c33e4b0d
      Shivaram Venkataraman authored
      ## What changes were proposed in this pull request?
      
      Add a check-cran.sh script that runs `R CMD check` as CRAN does. This also fixes a number of issues pointed out by the check, including:
      - Updating `DESCRIPTION` to be appropriate
      - Adding a .Rbuildignore to ignore lintr, src-native, html that are non-standard files / dirs
      - Adding aliases to all S4 methods in DataFrame, Column, GroupedData etc.  This is required as stated in https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Documenting-S4-classes-and-methods
      - Other minor fixes
      
      ## How was this patch tested?
      
      SparkR unit tests, running the above mentioned script
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #14173 from shivaram/sparkr-cran-changes.
    • [SPARK-16112][SPARKR] Programming guide for gapply/gapplyCollect · 41673048
      Narine Kokhlikyan authored
      ## What changes were proposed in this pull request?
      
      Updates programming guide for spark.gapply/spark.gapplyCollect.
      
      Similar to the other examples, I used the `faithful` dataset to demonstrate gapply's functionality.
      Please let me know if you prefer another example.
      
      ## How was this patch tested?
      Existing test cases in R
      
      Author: Narine Kokhlikyan <narine@slice.com>
      
      Closes #14090 from NarineK/gapplyProgGuide.
    • [SPARK-3359][DOCS] More changes to resolve javadoc 8 errors that will help... · 5ec0d692
      Sean Owen authored
      [SPARK-3359][DOCS] More changes to resolve javadoc 8 errors that will help unidoc/genjavadoc compatibility
      
      ## What changes were proposed in this pull request?
      
      These are yet more changes that resolve problems with unidoc/genjavadoc and Java 8. It does not fully resolve the problem, but gets rid of as many errors as we can from this end.
      
      ## How was this patch tested?
      
      Jenkins build of docs
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14221 from srowen/SPARK-3359.3.
    • [SPARK-16582][SQL] Explicitly define isNull = false for non-nullable expressions · a1ffbada
      Sameer Agarwal authored
      ## What changes were proposed in this pull request?
      
      This patch is just a slightly safer way to fix the issue we encountered in https://github.com/apache/spark/pull/14168, in case this pattern recurs elsewhere in the code.
      
      ## How was this patch tested?
      
      Existing tests. Also, I manually tested that it fixes the problem in SPARK-16514 without having the proposed change in https://github.com/apache/spark/pull/14168
      
      Author: Sameer Agarwal <sameerag@cs.berkeley.edu>
      
      Closes #14227 from sameeragarwal/codegen.
  5. Jul 15, 2016
    • [SPARK-16230][CORE] CoarseGrainedExecutorBackend to self kill if there is an... · b2f24f94
      Tejas Patil authored
      [SPARK-16230][CORE] CoarseGrainedExecutorBackend to self kill if there is an exception while creating an Executor
      
      ## What changes were proposed in this pull request?
      
      With the fix from SPARK-13112, `LaunchTask` is always processed after `RegisteredExecutor` is done, so it gets the chance to do all retries to start up an executor. There is still a problem: if `Executor` creation itself fails with an exception, it goes unnoticed, and the executor is killed when it tries to process the `LaunchTask` because `executor` is null: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala#L88. So the logs do not show that there was a problem during `Executor` creation and that this is why it was killed.
      
      This PR explicitly catches exceptions during `Executor` creation, logs a proper message, and then exits the JVM. Also, I have changed the `exitExecutor` method to accept a `reason` so that backends can use it for things like logging to a DB to aggregate such exits at the cluster level.
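
      A minimal sketch of the fail-fast pattern described above (illustrative; not the actual `CoarseGrainedExecutorBackend` code):

      ```scala
      import scala.util.control.NonFatal

      object SelfKillOnInitFailure {
        // Stand-in for the backend's exitExecutor(reason) hook described above.
        def exitExecutor(code: Int, reason: String, t: Throwable = null): Unit = {
          System.err.println(s"Executor self-exiting due to: $reason")
          if (t != null) t.printStackTrace()
          sys.exit(code)
        }

        def main(args: Array[String]): Unit = {
          try {
            // Stands in for `executor = new Executor(...)` throwing during creation.
            throw new IllegalStateException("boom")
          } catch {
            case NonFatal(e) =>
              exitExecutor(1, s"Unable to create executor due to ${e.getMessage}", e)
          }
        }
      }
      ```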
      
      ## How was this patch tested?
      
      I am relying on existing tests
      
      Author: Tejas Patil <tejasp@fb.com>
      
      Closes #14202 from tejasapatil/exit_executor_failure.
    • [SPARK-16538][SPARKR] Add more tests for namespace call to SparkSession functions · 611a8ca5
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      More tests
      I don't think this is critical for Spark 2.0.0 RC, maybe Spark 2.0.1 or 2.1.0.
      
      ## How was this patch tested?
      
      unit tests
      
      shivaram dongjoon-hyun
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #14206 from felixcheung/rroutetests.
    • [SPARK-14817][ML][MLLIB][DOC] Made DataFrame-based API primary in MLlib guide · 5ffd5d38
      Joseph K. Bradley authored
      ## What changes were proposed in this pull request?
      
      Made DataFrame-based API primary
      * Spark doc menu bar and other places now link to ml-guide.html, not mllib-guide.html
      * mllib-guide.html keeps RDD-specific list of features, with a link at the top redirecting people to ml-guide.html
      * ml-guide.html includes a "maintenance mode" announcement about the RDD-based API
        * **Reviewers: please check this carefully**
      * (minor) Titles for DF API no longer include "- spark.ml" suffix.  Titles for RDD API have "- RDD-based API" suffix
      * Moved migration guide to ml-guide from mllib-guide
        * Also moved past guides from mllib-migration-guides to ml-migration-guides, with a redirect link on mllib-migration-guides
        * **Reviewers**: I did not change any of the content of the migration guides.
      
      Reorganized DataFrame-based guide:
      * ml-guide.html mimics the old mllib-guide.html page in terms of content: overview, migration guide, etc.
      * Moved Pipeline description into ml-pipeline.html and moved tuning into ml-tuning.html
        * **Reviewers**: I did not change the content of these guides, except some intro text.
      * Sidebar remains the same, but with pipeline and tuning sections added
      
      Other:
      * ml-classification-regression.html: Moved text about linear methods to new section in page
      
      ## How was this patch tested?
      
      Generated docs locally
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #14213 from jkbradley/ml-guide-2.0.
    • [SPARK-16426][MLLIB] Fix bug that caused NaNs in IsotonicRegression · 71ad945b
      z001qdp authored
      ## What changes were proposed in this pull request?
      
      Fixed a bug that caused `NaN`s in `IsotonicRegression`. The problem occurs when training rows with the same feature value but different labels end up on different partitions. This patch changes a `sortBy` call to a `partitionBy(RangePartitioner)` followed by a `mapPartitions(sortBy)` in order to ensure that all rows with the same feature value end up on the same partition.
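
      A self-contained sketch of the repartition-then-sort pattern (illustrative; not the actual `IsotonicRegression` code):

      ```scala
      import org.apache.spark.{RangePartitioner, SparkContext}

      object RangePartitionThenSort {
        def main(args: Array[String]): Unit = {
          val sc = new SparkContext("local[2]", "range-partition-sort")
          // (feature, label) pairs; rows with equal features must not straddle partitions.
          val keyed = sc.parallelize(Seq(1.0 -> "a", 1.0 -> "b", 2.0 -> "c", 3.0 -> "d"))
          val grouped = keyed
            .partitionBy(new RangePartitioner(2, keyed))          // equal keys -> same partition
            .mapPartitions(it => it.toArray.sortBy(_._1).iterator, // then sort within partitions
              preservesPartitioning = true)
          grouped.glom().collect().foreach(part => println(part.mkString(", ")))
          sc.stop()
        }
      }
      ```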
      
      ## How was this patch tested?
      
      Added a unit test.
      
      Author: z001qdp <Nicholas.Eggert@target.com>
      
      Closes #14140 from neggert/SPARK-16426-isotonic-nan.
    • [SPARK-16546][SQL][PYSPARK] update python dataframe.drop · 18324238
      WeichenXu authored
      ## What changes were proposed in this pull request?
      
      Make the `dataframe.drop` API in Python support multiple columns as parameters,
      so that it matches the Scala API.
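
      For reference, the Scala behavior being matched (a sketch, assuming a local Spark 2.x session):

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[*]").getOrCreate()
      val df = spark.range(1).selectExpr("id AS a", "id AS b", "id AS c")
      // Scala's Dataset.drop already accepts multiple column names.
      df.drop("a", "b").show()
      ```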
      
      ## How was this patch tested?
      
      The doc test.
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #14203 from WeichenXu123/drop_python_api.
  6. Jul 14, 2016
    • [SPARK-16557][SQL] Remove stale doc in sql/README.md · 2e4075e2
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      Most of the documentation in https://github.com/apache/spark/blob/master/sql/README.md is stale. It would be useful to keep the list of projects to explain what's going on, and everything else should be removed.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #14211 from rxin/SPARK-16557.
    • [SPARK-16555] Work around Jekyll error-handling bug which led to silent failures · 972673ac
      Josh Rosen authored
      If a custom Jekyll template tag throws Ruby's equivalent of a "file not found" exception, then Jekyll will stop the doc building process but will exit with a successful status, causing our doc publishing jobs to silently fail.
      
      This is caused by https://github.com/jekyll/jekyll/issues/5104, a case of bad error-handling logic in Jekyll. This patch works around this by updating our `include_example.rb` plugin to catch the exception and exit rather than allowing it to bubble up and be ignored by Jekyll.
      
      I tested this manually with
      
      ```
      rm ./examples/src/main/scala/org/apache/spark/examples/sql/SparkSQLExample.scala
      cd docs
      SKIP_API=1 jekyll build
      echo $?
      ```
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #14209 from JoshRosen/fix-doc-building.
    • [SPARK-16553][DOCS] Fix SQL example file name in docs · 01c4c1fa
      Shivaram Venkataraman authored
      ## What changes were proposed in this pull request?
      
      Fixes a typo in the sql programming guide
      
      ## How was this patch tested?
      
      Building docs locally
      
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      
      Closes #14208 from shivaram/spark-sql-doc-fix.
    • [SPARK-16540][YARN][CORE] Avoid adding jars twice for Spark running on yarn · 91575cac
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      Currently, when running Spark on YARN, jars specified with --jars or --packages are added twice: once to Spark's own file server and once to YARN's distributed cache. This can be seen in the log. For example:
      
      ```
      ./bin/spark-shell --master yarn-client --jars examples/target/scala-2.11/jars/scopt_2.11-3.3.0.jar
      ```
      
      If the jar to be added is the scopt jar, it is added twice:
      
      ```
      ...
      16/07/14 15:06:48 INFO Server: Started 5603ms
      16/07/14 15:06:48 INFO Utils: Successfully started service 'SparkUI' on port 4040.
      16/07/14 15:06:48 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.0.102:4040
      16/07/14 15:06:48 INFO SparkContext: Added JAR file:/Users/sshao/projects/apache-spark/examples/target/scala-2.11/jars/scopt_2.11-3.3.0.jar at spark://192.168.0.102:63996/jars/scopt_2.11-3.3.0.jar with timestamp 1468480008637
      16/07/14 15:06:49 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
      16/07/14 15:06:49 INFO Client: Requesting a new application from cluster with 1 NodeManagers
      16/07/14 15:06:49 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
      16/07/14 15:06:49 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
      16/07/14 15:06:49 INFO Client: Setting up container launch context for our AM
      16/07/14 15:06:49 INFO Client: Setting up the launch environment for our AM container
      16/07/14 15:06:49 INFO Client: Preparing resources for our AM container
      16/07/14 15:06:49 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
      16/07/14 15:06:50 INFO Client: Uploading resource file:/private/var/folders/tb/8pw1511s2q78mj7plnq8p9g40000gn/T/spark-a446300b-84bf-43ff-bfb1-3adfb0571a42/__spark_libs__6486179704064718817.zip -> hdfs://localhost:8020/user/sshao/.sparkStaging/application_1468468348998_0009/__spark_libs__6486179704064718817.zip
      16/07/14 15:06:51 INFO Client: Uploading resource file:/Users/sshao/projects/apache-spark/examples/target/scala-2.11/jars/scopt_2.11-3.3.0.jar -> hdfs://localhost:8020/user/sshao/.sparkStaging/application_1468468348998_0009/scopt_2.11-3.3.0.jar
      16/07/14 15:06:51 INFO Client: Uploading resource file:/private/var/folders/tb/8pw1511s2q78mj7plnq8p9g40000gn/T/spark-a446300b-84bf-43ff-bfb1-3adfb0571a42/__spark_conf__326416236462420861.zip -> hdfs://localhost:8020/user/sshao/.sparkStaging/application_1468468348998_0009/__spark_conf__.zip
      ...
      ```
      
      This PR avoids adding jars to Spark's file server unnecessarily.
      
      ## How was this patch tested?
      
      Manually verified in both YARN client and cluster mode, and also in standalone mode.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #14196 from jerryshao/SPARK-16540.
    • [SPARK-16528][SQL] Fix NPE problem in HiveClientImpl · 31ca741a
      Jacek Lewandowski authored
      ## What changes were proposed in this pull request?
      
      There are some calls to methods or fields (getParameters, properties) whose results are then passed to Java/Scala collection converters. Unfortunately those fields can be null in some cases, and then the conversion throws an NPE. We fix this by wrapping calls to those fields and methods in an `Option` before doing the conversion.
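
      A minimal sketch of the null-safe conversion pattern (illustrative; `javaParams` stands in for a field like `table.getParameters`, not the actual `HiveClientImpl` code):

      ```scala
      import scala.collection.JavaConverters._

      val javaParams: java.util.Map[String, String] = null  // may be null in some cases
      // A bare `javaParams.asScala` would throw an NPE; Option gives a safe default.
      val params: Map[String, String] =
        Option(javaParams).map(_.asScala.toMap).getOrElse(Map.empty)
      ```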
      
      ## How was this patch tested?
      
      Manually tested with a custom Hive metastore.
      
      Author: Jacek Lewandowski <lewandowski.jacek@gmail.com>
      
      Closes #14200 from jacek-lewandowski/SPARK-16528.
    • [SPARK-16529][SQL][TEST] `withTempDatabase` should set `default` database before dropping · c576f9fb
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      `SQLTestUtils.withTempDatabase` is a frequently used test harness that sets up a temporary database and cleans it up afterwards. This issue improves it as follows for usability.
      
      ```scala
      -    try f(dbName) finally spark.sql(s"DROP DATABASE $dbName CASCADE")
      +    try f(dbName) finally {
      +      if (spark.catalog.currentDatabase == dbName) {
      +        spark.sql(s"USE ${DEFAULT_DATABASE}")
      +      }
      +      spark.sql(s"DROP DATABASE $dbName CASCADE")
      +    }
      ```
      
      This way, `withTempDatabase` no longer raises an exception if a test forgets to reset the current database.
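
      Hypothetical usage of the harness (a sketch; `withTempDatabase` comes from `SQLTestUtils`, and the body below is illustrative):

      ```scala
      withTempDatabase { db =>
        spark.sql(s"USE $db")
        spark.sql("CREATE TABLE t (a INT)")
        // Forgetting to switch back with `USE default` here no longer
        // makes the final DROP DATABASE ... CASCADE fail.
      }
      ```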
      
      ## How was this patch tested?
      
      This change improves the test harness itself.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #14184 from dongjoon-hyun/SPARK-16529.
    • [SPARK-16538][SPARKR] fix R call with namespace operator on SparkSession functions · 12005c88
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      Fix function routing to work with and without namespace operator `SparkR::createDataFrame`
      
      ## How was this patch tested?
      
      manual, unit tests
      
      shivaram
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #14195 from felixcheung/rroutedefault.
    • [SPARK-16509][SPARKR] Rename window.partitionBy and window.orderBy to... · 093ebbc6
      Sun Rui authored
      [SPARK-16509][SPARKR] Rename window.partitionBy and window.orderBy to windowPartitionBy and windowOrderBy.
      
      ## What changes were proposed in this pull request?
      Rename window.partitionBy and window.orderBy to windowPartitionBy and windowOrderBy to pass CRAN package check.
      
      ## How was this patch tested?
      SparkR unit tests.
      
      Author: Sun Rui <sunrui2016@gmail.com>
      
      Closes #14192 from sun-rui/SPARK-16509.
    • [SPARK-16543][SQL] Rename the columns of `SHOW PARTITION/COLUMNS` commands · 56183b84
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR changes the names of the columns returned by the `SHOW PARTITIONS` and `SHOW COLUMNS` commands. Currently, both commands use `result` as the column name.
      
      **Comparison: Column Name**
      
      Command|Spark(Before)|Spark(After)|Hive
      ----------|--------------|------------|-----
      SHOW PARTITIONS|result|partition|partition
      SHOW COLUMNS|result|col_name|field
      
      Note that Spark/Hive uses `col_name` in `DESC TABLE`. So, this PR chooses `col_name` for consistency among Spark commands.
      
      **Before**
      ```scala
      scala> sql("show partitions p").show()
      +------+
      |result|
      +------+
      |   b=2|
      +------+
      
      scala> sql("show columns in p").show()
      +------+
      |result|
      +------+
      |     a|
      |     b|
      +------+
      ```
      
      **After**
      ```scala
      scala> sql("show partitions p").show
      +---------+
      |partition|
      +---------+
      |      b=2|
      +---------+
      
      scala> sql("show columns in p").show
      +--------+
      |col_name|
      +--------+
      |       a|
      |       b|
      +--------+
      ```
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #14199 from dongjoon-hyun/SPARK-16543.
    • [SPARK-16530][SQL][TRIVIAL] Wrong Parser Keyword in ALTER TABLE CHANGE COLUMN · 1b5c9e52
      gatorsmile authored
      #### What changes were proposed in this pull request?
      Based on the [Hive SQL syntax](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ChangeColumnName/Type/Position/Comment), the command to change a column's name/type/position/comment is `ALTER TABLE CHANGE COLUMN`. However, in our .g4 file, it is `ALTER TABLE CHANGE COLUMNS`. Because it is the last optional keyword, this has no effect. Thus, I filed the issue at Trivial level.
      
      cc hvanhovell
      
      #### How was this patch tested?
      Existing test cases
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #14186 from gatorsmile/changeColumns.
    • [SPARK-16505][YARN] Optionally propagate error during shuffle service startup. · b7b5e178
      Marcelo Vanzin authored
      This prevents the NM from starting when something is wrong, which would
      otherwise lead to later errors that are confusing and harder to debug.
      
      Added a unit test to verify startup fails if something is wrong.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #14162 from vanzin/SPARK-16505.
    • [SPARK-14963][MINOR][YARN] Fix typo in YarnShuffleService recovery file name · c4bc2ed8
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      Due to the changes in [SPARK-14963](https://issues.apache.org/jira/browse/SPARK-14963), the external shuffle recovery file name was changed by mistake, so this changes it back to the previous file name.
      
      This only affects the master branch, branch-2.0 is correct [here](https://github.com/apache/spark/blob/branch-2.0/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java#L195).
      
      ## How was this patch tested?
      
      N/A
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #14197 from jerryshao/fix-typo-file-name.
    • [SPARK-16403][EXAMPLES] Cleanup to remove unused imports, consistent style, minor fixes · e3f8a033
      Bryan Cutler authored
      ## What changes were proposed in this pull request?
      
      Cleanup of examples, mostly PySpark-ML, to fix minor issues: unused imports, style consistency, a duplicated pipeline_example, use of the future print function, and a spelling error.
      
      * The "Pipeline Example" is duplicated by "Simple Text Classification Pipeline" in Scala, Python, and Java.
      
      * "Estimator Transformer Param Example" is duplicated by "Simple Params Example" in Scala, Python and Java
      
      * Synced random_forest_classifier_example.py with Scala by adding the IndexToString label converter
      
      * Synced train_validation_split.py (ModelSelectionViaTrainValidationExample in Scala) by adjusting the data split and adding a grid for the intercept.
      
      * RegexTokenizer was doing nothing in tokenizer_example.py and JavaTokenizerExample.java, so they were synced with the Scala version
      
      ## How was this patch tested?
      Local tests, plus running the modified examples.
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #14081 from BryanCutler/examples-cleanup-SPARK-16403.
    • [SPARK-16500][ML][MLLIB][OPTIMIZER] add LBFGS convergence warning for all used place in MLLib · 252d4f27
      WeichenXu authored
      ## What changes were proposed in this pull request?
      
      Add a warning for the following cases when LBFGS training does not actually converge:
      
      1) LogisticRegression
      2) AFTSurvivalRegression
      3) LBFGS algorithm wrapper in mllib package
      
      ## How was this patch tested?
      
      N/A
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #14157 from WeichenXu123/add_lbfgs_convergence_warning_for_all_used_place.
    • [SPARK-16448] RemoveAliasOnlyProject should not remove alias with metadata · db7317ac
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      `Alias` with metadata is not a no-op, so we should not strip it in the `RemoveAliasOnlyProject` rule (see the sketch below).
      This PR also makes some improvements to the rule:

      1. Extend the semantics of `alias-only`: the project list is now allowed to be partially aliased.
      2. Add a unit test for the rule.
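
      A minimal sketch of why an alias-only projection can still matter (assuming a local Spark 2.x session; the column names and metadata key are illustrative):

      ```scala
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions.col
      import org.apache.spark.sql.types.MetadataBuilder

      val spark = SparkSession.builder().master("local[*]").getOrCreate()
      import spark.implicits._

      val meta = new MetadataBuilder().putString("note", "kept").build()
      val df = Seq((1, "x")).toDF("a", "b")
      // Same output names as the input, but `a` now carries metadata, so this
      // projection is not a no-op and must survive RemoveAliasOnlyProject.
      val aliased = df.select(col("a").as("a", meta), col("b"))
      println(aliased.schema("a").metadata)
      ```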
      
      ## How was this patch tested?
      
      new `RemoveAliasOnlyProjectSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #14106 from cloud-fan/bug.