  1. Nov 14, 2016
  2. Nov 13, 2016
    • [SPARK-18412][SPARKR][ML] Fix exception for some SparkR ML algorithms training on libsvm data · 07be232e
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      * Fix the following exception, which is thrown when ```spark.randomForest``` (classification), ```spark.gbt``` (classification), ```spark.naiveBayes```, and ```spark.glm``` (binomial family) are fitted on libsvm data.
      ```
      java.lang.IllegalArgumentException: requirement failed: If label column already exists, forceIndexLabel can not be set with true.
      ```
      See [SPARK-18412](https://issues.apache.org/jira/browse/SPARK-18412) for more detail about how to reproduce this bug.
      * Refactor ```getFeaturesAndLabels``` out into RWrapperUtils, since many ML algorithm wrappers use this function.
      * Drop some unwanted columns when making predictions.
      
      ## How was this patch tested?
      Add unit test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15851 from yanboliang/spark-18412.
      07be232e
    • [SPARK-18426][STRUCTURED STREAMING] Python Documentation Fix for Structured... · b91a51bb
      Denny Lee authored
      [SPARK-18426][STRUCTURED STREAMING] Python Documentation Fix for Structured Streaming Programming Guide
      
      ## What changes were proposed in this pull request?
      
      Update the Python section of the Structured Streaming Programming Guide from `.builder()` to `.builder`.
      
      ## How was this patch tested?
      
      Validated the documentation and successfully ran the test example.
      
      The 'Builder' object is not callable, hence the change from `.builder()` to `.builder`.
      
      Author: Denny Lee <dennylee@gallifrey.local>
      
      Closes #15872 from dennyglee/master.
      b91a51bb
  3. Nov 12, 2016
    • [SPARK-18418] Fix flags for make_binary_release for hadoop profile · 1386fd28
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      Fix the flags used to specify the Hadoop version.
      
      ## How was this patch tested?
      
      Manually tested as part of https://github.com/apache/spark/pull/15659 by having the build succeed.
      
      cc joshrosen
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #15860 from holdenk/minor-fix-release-build-script.
      1386fd28
    • [SPARK-14077][ML][FOLLOW-UP] Minor refactor and cleanup for NaiveBayes · 22cb3a06
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      * Refactor out ```trainWithLabelCheck``` and make ```mllib.NaiveBayes``` call into it.
      * Avoid capturing the outer object for ```modelType```.
      * Move ```requireNonnegativeValues``` and ```requireZeroOneBernoulliValues``` to the companion object (see the sketch below).
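
      A minimal sketch of the companion-object idea from the last bullet (names and checks are illustrative, not the actual Spark code): helpers defined on the companion object do not capture the estimator instance, so closures that call them stay small and serializable.
      ```scala
      import org.apache.spark.ml.linalg.Vector

      object NaiveBayesLike {
        // Defined on the companion object, so calling these from a task closure does not
        // drag the whole estimator (outer object) into the closure.
        def requireNonnegativeValues(v: Vector): Unit =
          require(v.toArray.forall(_ >= 0.0),
            s"Naive Bayes requires nonnegative feature values but found $v.")

        def requireZeroOneBernoulliValues(v: Vector): Unit =
          require(v.toArray.forall(x => x == 0.0 || x == 1.0),
            s"Bernoulli Naive Bayes requires 0 or 1 feature values but found $v.")
      }
      ```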
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15826 from yanboliang/spark-14077-2.
      22cb3a06
    • [SPARK-18375][SPARK-18383][BUILD][CORE] Upgrade netty to 4.0.42.Final · bc41d997
      Guoqiang Li authored
      ## What changes were proposed in this pull request?
      
      One of the important changes in 4.0.42.Final is "Support any FileRegion implementation when using epoll transport" (netty/netty#5825).
      With 4.0.42.Final, `MessageWithHeader` works properly when `spark.[shuffle|rpc].io.mode` is set to epoll.
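
      For reference, a minimal sketch of opting into the epoll transport (a sketch only; the keys follow the `spark.[shuffle|rpc].io.mode` pattern named above, and the default mode is NIO):
      ```scala
      import org.apache.spark.SparkConf

      // Use the native epoll transport for the shuffle and RPC modules (Linux only).
      val conf = new SparkConf()
        .set("spark.shuffle.io.mode", "EPOLL")
        .set("spark.rpc.io.mode", "EPOLL")
      ```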
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Guoqiang Li <witgo@qq.com>
      
      Closes #15830 from witgo/SPARK-18375_netty-4.0.42.
      bc41d997
  4. Nov 11, 2016
    • [SPARK-16759][CORE] Add a configuration property to pass caller contexts of... · 3af89451
      Weiqing Yang authored
      [SPARK-16759][CORE] Add a configuration property to pass caller contexts of upstream applications into Spark
      
      ## What changes were proposed in this pull request?
      
      Many applications take Spark as a computing engine and run on it. This PR adds a configuration property `spark.log.callerContext` that can be used by Spark's upstream applications (e.g. Oozie) to set up their caller contexts into Spark. In the end, Spark will combine its own caller context with the caller contexts of its upstream applications, and write them into Yarn RM log and HDFS audit log.
      
      The audit log has a config to truncate the caller contexts passed in (default 128 characters). The caller contexts will be sent over RPC, so they should be concise. The caller context written into the HDFS audit log and the YARN RM log consists of two parts: the information `A` specified by Spark itself and the value `B` of the `spark.log.callerContext` property. Currently `A` typically takes 64 to 74 characters, so `B` can have up to 50 characters (as mentioned in `running-on-yarn.md`).

      ## How was this patch tested?
      
      Manual tests. I have run some Spark applications with `spark.log.callerContext` configuration in Yarn client/cluster mode, and verified that the caller contexts were written into Yarn RM log and HDFS audit log correctly.
      
      The ways to configure `spark.log.callerContext` property:
      - In spark-defaults.conf:
      
      ```
      spark.log.callerContext  infoSpecifiedByUpstreamApp
      ```
      - In app's source code:
      
      ```
      val spark = SparkSession
            .builder
            .appName("SparkKMeans")
            .config("spark.log.callerContext", "infoSpecifiedByUpstreamApp")
            .getOrCreate()
      ```
      
      When running in YARN cluster mode, the driver cannot pass 'spark.log.callerContext' to the YARN client and AM, since they have already started before the driver performs `.config("spark.log.callerContext", "infoSpecifiedByUpstreamApp")`.
      
      The following  example shows the command line used to submit a SparkKMeans application and the corresponding records in Yarn RM log and HDFS audit log.
      
      Command:
      
      ```
      ./bin/spark-submit --verbose --executor-cores 3 --num-executors 1 --master yarn --deploy-mode client --class org.apache.spark.examples.SparkKMeans examples/target/original-spark-examples_2.11-2.1.0-SNAPSHOT.jar hdfs://localhost:9000/lr_big.txt 2 5
      ```
      
      Yarn RM log:
      
      <img width="1440" alt="screen shot 2016-10-19 at 9 12 03 pm" src="https://cloud.githubusercontent.com/assets/8546874/19547050/7d2f278c-9649-11e6-9df8-8d5ff12609f0.png">
      
      HDFS audit log:
      
      <img width="1400" alt="screen shot 2016-10-19 at 10 18 14 pm" src="https://cloud.githubusercontent.com/assets/8546874/19547102/096060ae-964a-11e6-981a-cb28efd5a058.png">
      
      Author: Weiqing Yang <yangweiqing001@gmail.com>
      
      Closes #15563 from weiqingy/SPARK-16759.
      3af89451
    • [SPARK-18060][ML] Avoid unnecessary computation for MLOR · 46b2550b
      sethah authored
      ## What changes were proposed in this pull request?
      
      Before this patch, the gradient updates for multinomial logistic regression were computed by an outer loop over the number of classes and an inner loop over the number of features. Inside the inner loop, we standardized the feature value (`value / featuresStd(index)`), which means we performed the computation `numFeatures * numClasses` times. We only need to perform that computation `numFeatures` times, however. If we re-order the inner and outer loop, we can avoid this, but then we lose sequential memory access. In this patch, we instead lay out the coefficients in column major order while we train, so that we can avoid the extra computation and retain sequential memory access. We convert back to row-major order when we create the model.
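
      A hedged sketch of the idea (variable names are illustrative, not the actual aggregator code): standardize each feature value once, and lay the gradient buffer out column-major so the inner loop over classes walks contiguous memory.
      ```scala
      // Accumulates the gradient contribution of one dense sample. The gradient buffer has
      // length numFeatures * numClasses and is column-major: the entry for (feature j, class k)
      // lives at j * numClasses + k.
      def addSampleGradient(
          values: Array[Double],       // feature values of one sample
          featuresStd: Array[Double],  // per-feature standard deviations
          multipliers: Array[Double],  // per-class error terms for this sample
          gradient: Array[Double]): Unit = {
        val numFeatures = values.length
        val numClasses = multipliers.length
        var j = 0
        while (j < numFeatures) {
          // Standardize once per feature (numFeatures divisions) rather than once per
          // (feature, class) pair (numFeatures * numClasses divisions).
          if (featuresStd(j) != 0.0 && values(j) != 0.0) {
            val stdValue = values(j) / featuresStd(j)
            var k = 0
            while (k < numClasses) {
              gradient(j * numClasses + k) += multipliers(k) * stdValue
              k += 1
            }
          }
          j += 1
        }
      }
      ```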
      
      ## How was this patch tested?
      
      This is an implementation detail only, so the original behavior should be maintained. All tests pass. I ran some performance tests to verify the speedups; the results below show significant improvements.
      ## Performance Tests
      
      **Setup**
      
      3 node bare-metal cluster
      120 cores total
      384 GB RAM total
      
      **Results**
      
      NOTE: The `currentMasterTime` and `thisPatchTime` are times in seconds for a single iteration of L-BFGS or OWL-QN.
      
      |    |   numPoints |   numFeatures |   numClasses |   regParam |   elasticNetParam |   currentMasterTime (sec) |   thisPatchTime (sec) |   pctSpeedup |
      |----|-------------|---------------|--------------|------------|-------------------|---------------------------|-----------------------|--------------|
      |  0 |       1e+07 |           100 |          500 |       0.5  |                 0 |                        90 |                    18 |           80 |
      |  1 |       1e+08 |           100 |           50 |       0.5  |                 0 |                        90 |                    19 |           78 |
      |  2 |       1e+08 |           100 |           50 |       0.05 |                 1 |                        72 |                    19 |           73 |
      |  3 |       1e+06 |           100 |         5000 |       0.5  |                 0 |                        93 |                    53 |           43 |
      |  4 |       1e+07 |           100 |         5000 |       0.5  |                 0 |                       900 |                   390 |           56 |
      |  5 |       1e+08 |           100 |          500 |       0.5  |                 0 |                       840 |                   174 |           79 |
      |  6 |       1e+08 |           100 |          200 |       0.5  |                 0 |                       360 |                    72 |           80 |
      |  7 |       1e+08 |          1000 |            5 |       0.5  |                 0 |                         9 |                     3 |           66 |
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #15593 from sethah/MLOR_PERF_COL_MAJOR_COEF.
      46b2550b
    • [SPARK-18264][SPARKR] build vignettes with package, update vignettes for CRAN... · ba23f768
      Felix Cheung authored
      [SPARK-18264][SPARKR] build vignettes with package, update vignettes for CRAN release build and add info on release
      
      ## What changes were proposed in this pull request?
      
      Changes to DESCRIPTION to build vignettes.
      Changes the metadata for vignettes to generate the recommended format (which is under 10% of the previous size). Unfortunately it does not look as nice
      (before - left, after - right)
      
      ![image](https://cloud.githubusercontent.com/assets/8969467/20040492/b75883e6-a40d-11e6-9534-25cdd5d59a8b.png)
      
      ![image](https://cloud.githubusercontent.com/assets/8969467/20040490/a40f4d42-a40d-11e6-8c91-af00ddcbdad9.png)
      
      Also add information on how to run build/release to CRAN later.
      
      ## How was this patch tested?
      
      manually, unit tests
      
      shivaram
      
      We need this for branch-2.1
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #15790 from felixcheung/rpkgvignettes.
      ba23f768
    • [SPARK-18387][SQL] Add serialization to checkEvaluation. · 6e95325f
      Ryan Blue authored
      ## What changes were proposed in this pull request?
      
      This removes the serialization test from RegexpExpressionsSuite and
      replaces it by serializing all expressions in checkEvaluation.
      
      This also fixes math constant expressions by making LeafMathExpression
      Serializable and fixes NumberFormat values that are null or invalid
      after serialization.
      
      ## How was this patch tested?
      
      This patch only changes tests.
      
      Author: Ryan Blue <blue@apache.org>
      
      Closes #15847 from rdblue/SPARK-18387-fix-serializable-expressions.
      6e95325f
    • [SPARK-17982][SQL] SQLBuilder should wrap the generated SQL with parenthesis for LIMIT · d42bb7cc
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Currently, `SQLBuilder` handles `LIMIT` by always adding `LIMIT` at the end of the generated subSQL. This causes `RuntimeException`s like the following. This PR always adds parentheses, except when `SubqueryAlias` is used together with `LIMIT`.
      
      **Before**
      
      ``` scala
      scala> sql("CREATE TABLE tbl(id INT)")
      scala> sql("CREATE VIEW v1(id2) AS SELECT id FROM tbl LIMIT 2")
      java.lang.RuntimeException: Failed to analyze the canonicalized SQL: ...
      ```
      
      **After**
      
      ``` scala
      scala> sql("CREATE TABLE tbl(id INT)")
      scala> sql("CREATE VIEW v1(id2) AS SELECT id FROM tbl LIMIT 2")
      scala> sql("SELECT id2 FROM v1")
      res4: org.apache.spark.sql.DataFrame = [id2: int]
      ```
      
      **Fixed cases in this PR**
      
      The following two cases show the detailed query plans with problematic SQL generation.
      
      1. `SELECT * FROM (SELECT id FROM tbl LIMIT 2)`
      
          Please note the **FROM SELECT** part of the generated SQL below. When we don't use '()' for LIMIT, this fails.
      
      ```scala
      # Original logical plan:
      Project [id#1]
      +- GlobalLimit 2
         +- LocalLimit 2
            +- Project [id#1]
               +- MetastoreRelation default, tbl
      
      # Canonicalized logical plan:
      Project [gen_attr_0#1 AS id#4]
      +- SubqueryAlias tbl
         +- Project [gen_attr_0#1]
            +- GlobalLimit 2
               +- LocalLimit 2
                  +- Project [gen_attr_0#1]
                     +- SubqueryAlias gen_subquery_0
                        +- Project [id#1 AS gen_attr_0#1]
                           +- SQLTable default, tbl, [id#1]
      
      # Generated SQL:
      SELECT `gen_attr_0` AS `id` FROM (SELECT `gen_attr_0` FROM SELECT `gen_attr_0` FROM (SELECT `id` AS `gen_attr_0` FROM `default`.`tbl`) AS gen_subquery_0 LIMIT 2) AS tbl
      ```
      
      2. `SELECT * FROM (SELECT id FROM tbl TABLESAMPLE (2 ROWS))`
      
          Please note the **((~~~) AS gen_subquery_0 LIMIT 2)** part below. When we use '()' for LIMIT on `SubqueryAlias`, this fails.
      
      ```scala
      # Original logical plan:
      Project [id#1]
      +- Project [id#1]
         +- GlobalLimit 2
            +- LocalLimit 2
               +- MetastoreRelation default, tbl
      
      # Canonicalized logical plan:
      Project [gen_attr_0#1 AS id#4]
      +- SubqueryAlias tbl
         +- Project [gen_attr_0#1]
            +- GlobalLimit 2
               +- LocalLimit 2
                  +- SubqueryAlias gen_subquery_0
                     +- Project [id#1 AS gen_attr_0#1]
                        +- SQLTable default, tbl, [id#1]
      
      # Generated SQL:
      SELECT `gen_attr_0` AS `id` FROM (SELECT `gen_attr_0` FROM ((SELECT `id` AS `gen_attr_0` FROM `default`.`tbl`) AS gen_subquery_0 LIMIT 2)) AS tbl
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins test with a newly added test case.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #15546 from dongjoon-hyun/SPARK-17982.
      d42bb7cc
    • [SPARK-17843][WEB UI] Indicate event logs pending for processing on history server UI · a531fe1a
      Vinayak authored
      ## What changes were proposed in this pull request?
      
      The History Server UI's application listing now displays information on event logs that are currently being processed, so that a user knows an application may not appear in the list until that processing completes.

      When there are no event logs under process, the application list page shows a "Last Updated" date-time at the top, indicating the time of the last _completed_ scan of the event logs. The value is displayed in the user's local time zone.

      ## How was this patch tested?
      
      All unit tests pass. Particularly all the suites under org.apache.spark.deploy.history.\* were run to test changes.
      - Very first startup - Pending logs - no logs processed yet:
      
      <img width="1280" alt="screen shot 2016-10-24 at 3 07 04 pm" src="https://cloud.githubusercontent.com/assets/12079825/19640981/b8d2a96a-99fc-11e6-9b1f-2d736fe90e48.png">
      - Very first startup - Pending logs - some logs processed:
      
      <img width="1280" alt="screen shot 2016-10-24 at 3 18 42 pm" src="https://cloud.githubusercontent.com/assets/12079825/19641087/3f8e3bae-99fd-11e6-9ef1-e0e70d71d8ef.png">
      - Last updated - No currently pending logs:
      
      <img width="1280" alt="screen shot 2016-10-17 at 8 34 37 pm" src="https://cloud.githubusercontent.com/assets/12079825/19443100/4d13946c-94a9-11e6-8ee2-c442729bb206.png">
      - Last updated - With some currently pending logs:
      
      <img width="1280" alt="screen shot 2016-10-24 at 3 09 31 pm" src="https://cloud.githubusercontent.com/assets/12079825/19640903/7323ba3a-99fc-11e6-8359-6a45753dbb28.png">
      - No applications found and No currently pending logs:
      
      <img width="1280" alt="screen shot 2016-10-24 at 3 24 26 pm" src="https://cloud.githubusercontent.com/assets/12079825/19641364/03a2cb04-99fe-11e6-87d6-d09587fc6201.png">
      
      Author: Vinayak <vijoshi5@in.ibm.com>
      
      Closes #15410 from vijoshi/SAAS-608_master.
      a531fe1a
    • [SPARK-13331] AES support for over-the-wire encryption · 4f15d94c
      Junjie Chen authored
      ## What changes were proposed in this pull request?
      
      The DIGEST-MD5 mechanism is used for SASL authentication and secure communication. DIGEST-MD5 supports the 3DES, DES, and RC4 ciphers; however, these are relatively slow.

      AES provides better performance and security by design, and is a replacement for 3DES according to NIST. Apache Commons Crypto is a cryptographic library optimized with AES-NI; this patch employs it as the encryption/decryption backend for SASL authentication and the secure channel, to improve Spark RPC.

      ## How was this patch tested?
      
      Unit tests and Integration test.
      
      Author: Junjie Chen <junjie.j.chen@intel.com>
      
      Closes #15172 from cjjnjust/shuffle_rpc_encrypt.
      4f15d94c
  5. Nov 10, 2016
    • [SPARK-18401][SPARKR][ML] SparkR random forest should support output original label. · 5ddf6947
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      SparkR ```spark.randomForest``` classification prediction should output the original label rather than the indexed label. This issue is very similar to [SPARK-18291](https://issues.apache.org/jira/browse/SPARK-18291).
      
      ## How was this patch tested?
      Add unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15842 from yanboliang/spark-18401.
      5ddf6947
    • [SPARK-18185] Fix all forms of INSERT / OVERWRITE TABLE for Datasource tables · a3356343
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      As of 2.1, INSERT OVERWRITE with dynamic partitions against a Datasource table will overwrite the entire table instead of only the partitions matching the static keys, as in Hive. It also doesn't respect custom partition locations.

      This PR adds support for all these operations to Datasource tables managed by the Hive metastore. It is implemented as follows:
      - During planning time, the full set of partitions affected by an INSERT or OVERWRITE command is read from the Hive metastore.
      - The planner identifies any partitions with custom locations and includes this in the write task metadata.
      - FileFormatWriter tasks refer to this custom locations map when determining where to write for dynamic partition output.
      - When the write job finishes, the set of written partitions is compared against the initial set of matched partitions, and the Hive metastore is updated to reflect the newly added / removed partitions.
      
      It was necessary to introduce a method for staging files with absolute output paths to `FileCommitProtocol`. These files are not handled by the Hadoop output committer but are moved to their final locations when the job commits.
      
      The overwrite behavior of legacy Datasource tables is also changed: no longer will the entire table be overwritten if a partial partition spec is present.
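
      An illustrative example of the intended semantics, assuming a hypothetical partitioned table and staging table (not taken from the patch itself):
      ```scala
      // Datasource table partitioned by (ds, hr); all names here are made up.
      scala> sql("CREATE TABLE logs (msg STRING, ds STRING, hr STRING) USING parquet PARTITIONED BY (ds, hr)")

      // Static key ds='2016-11-10', dynamic partition column hr. With this change, only the hr
      // partitions under ds='2016-11-10' that appear in the query output are overwritten; the
      // rest of the table is left untouched, matching Hive's behavior.
      scala> sql("INSERT OVERWRITE TABLE logs PARTITION (ds = '2016-11-10', hr) SELECT msg, hr FROM staged_logs")
      ```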
      
      cc cloud-fan yhuai
      
      ## How was this patch tested?
      
      Unit tests, existing tests.
      
      Author: Eric Liang <ekl@databricks.com>
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15814 from ericl/sc-5027.
      a3356343
    • [SPARK-18403][SQL] Temporarily disable flaky ObjectHashAggregateSuite · e0deee1f
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      The randomized tests in `ObjectHashAggregateSuite` are flaky and break PR builds. This PR disables them temporarily to bring back the PR build.
      
      ## How was this patch tested?
      
      N/A
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #15845 from liancheng/ignore-flaky-object-hash-agg-suite.
      e0deee1f
    • [SPARK-17990][SPARK-18302][SQL] correct several partition related behaviours of ExternalCatalog · 2f7461f3
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      This PR corrects several partition related behaviors of `ExternalCatalog`:
      
      1. The default partition location should not always lowercase the partition column names in the path string (fixes `HiveExternalCatalog`).
      2. Renaming a partition should not always lowercase the partition column names in the updated partition path string (fixes `HiveExternalCatalog`).
      3. Renaming a partition should update the partition location only for managed tables (fixes `InMemoryCatalog`).
      4. Creating a partition when the directory already exists should be fine (fixes `InMemoryCatalog`).
      5. Creating a partition with a non-existing directory should create that directory (fixes `InMemoryCatalog`).
      6. Dropping a partition from an external table should not delete the directory (fixes `InMemoryCatalog`).
      
      ## How was this patch tested?
      
      new tests in `ExternalCatalogSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15797 from cloud-fan/partition.
      2f7461f3
    • [SPARK-17993][SQL] Fix Parquet log output redirection · b533fa2b
      Michael Allman authored
      (Link to Jira issue: https://issues.apache.org/jira/browse/SPARK-17993)
      ## What changes were proposed in this pull request?
      
      PR #14690 broke Parquet log output redirection for converted partitioned Hive tables. For example, when querying Parquet files written by Parquet-mr 1.6.0, Spark prints a torrent of (harmless) warning messages from the Parquet reader:
      
      ```
      Oct 18, 2016 7:42:18 PM WARNING: org.apache.parquet.CorruptStatistics: Ignoring statistics because created_by could not be parsed (see PARQUET-251): parquet-mr version 1.6.0
      org.apache.parquet.VersionParser$VersionParseException: Could not parse created_by: parquet-mr version 1.6.0 using format: (.+) version ((.*) )?\(build ?(.*)\)
          at org.apache.parquet.VersionParser.parse(VersionParser.java:112)
          at org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:60)
          at org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
          at org.apache.parquet.hadoop.ParquetFileReader$Chunk.readAllPages(ParquetFileReader.java:583)
          at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:513)
          at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:270)
          at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:225)
          at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:137)
          at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39)
          at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:102)
          at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:162)
          at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:102)
          at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
          at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
          at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
          at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:372)
          at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
          at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
          at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
          at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
          at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
          at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
          at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
          at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
          at org.apache.spark.scheduler.Task.run(Task.scala:99)
          at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
          at java.lang.Thread.run(Thread.java:745)
      ```
      
      This only happens during execution, not planning, and it doesn't matter what log level the `SparkContext` is set to. That's because Parquet (versions < 1.9) doesn't use slf4j for logging. Note, you can tell that log redirection is not working here because the log message format does not conform to the default Spark log message format.
      
      This is a regression I noted as something we needed to fix as a follow up.
      
      It appears that the problem arose because we removed the call to `inferSchema` during Hive table conversion. That call is what triggered the output redirection.
      
      ## How was this patch tested?
      
      I tested this manually in four ways:
      1. Executing `spark.sqlContext.range(10).selectExpr("id as a").write.mode("overwrite").parquet("test")`.
      2. Executing `spark.read.format("parquet").load(legacyParquetFile).show` for a Parquet file `legacyParquetFile` written using Parquet-mr 1.6.0.
      3. Executing `select * from legacy_parquet_table limit 1` for some unpartitioned Parquet-based Hive table written using Parquet-mr 1.6.0.
      4. Executing `select * from legacy_partitioned_parquet_table where partcol=x limit 1` for some partitioned Parquet-based Hive table written using Parquet-mr 1.6.0.
      
      I ran each test with a new instance of `spark-shell` or `spark-sql`.
      
      Incidentally, I found that test case 3 was not a regression—redirection was not occurring in the master codebase prior to #14690.
      
      I spent some time working on a unit test, but based on my experience working on this ticket I feel that automated testing here is far from feasible.
      
      cc ericl dongjoon-hyun
      
      Author: Michael Allman <michael@videoamp.com>
      
      Closes #15538 from mallman/spark-17993-fix_parquet_log_redirection.
      b533fa2b
    • [SPARK-18262][BUILD][SQL] JSON.org license is now CatX · 16eaad9d
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Try excluding org.json:json from hive-exec dep as it's Cat X now. It may be the case that it's not used by the part of Hive Spark uses anyway.
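
      For illustration, a hedged sbt-style sketch of the exclusion (Spark's real build expresses the equivalent as a Maven `<exclusion>` on the hive-exec dependency; the coordinates below are those of the Hive fork used by Spark 2.x):
      ```scala
      // build.sbt sketch: pull in hive-exec but drop the now-Category-X org.json:json artifact.
      libraryDependencies += ("org.spark-project.hive" % "hive-exec" % "1.2.1.spark2")
        .exclude("org.json", "json")
      ```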
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15798 from srowen/SPARK-18262.
      16eaad9d
    • [SPARK-14914][CORE] Fix Resource not closed after using, for unit tests and example · 22a9d064
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      
      This is a follow-up work of #15618.
      
      Close the file source.
      For any streaming context newly created outside `withContext`, explicitly close the context.
      
      ## How was this patch tested?
      
      Existing unit tests.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #15818 from wangmiao1981/rtest.
      22a9d064
    • [SPARK-18268][ML][MLLIB] ALS fail with better message if ratings is empty rdd · 96a59109
      Sandeep Singh authored
      ## What changes were proposed in this pull request?
      ALS.run now fails with a better message if the ratings RDD is empty.
      ALS.train and ALS.trainImplicit are also affected.
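
      A minimal sketch of the kind of guard described above (the exact message and placement in the patch may differ):
      ```scala
      import org.apache.spark.mllib.recommendation.Rating
      import org.apache.spark.rdd.RDD

      // Fail fast with a clear message instead of an opaque error deep inside the first aggregation.
      def validateRatings(ratings: RDD[Rating]): Unit = {
        require(!ratings.isEmpty(), "ratings RDD is empty; ALS cannot be trained on an empty dataset")
      }
      ```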
      
      ## How was this patch tested?
      added new tests
      
      Author: Sandeep Singh <sandeep@techaddict.me>
      
      Closes #15809 from techaddict/SPARK-18268.
      96a59109
    • [MINOR][PYSPARK] Improve error message when running PySpark with different minor versions · cc86fcd0
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      Currently the error message is correct but doesn't provide an additional hint to new users. It would be better to point users to the related configuration in the message.
      
      ## How was this patch tested?
      
      N/A because it only changes error message.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #15822 from viirya/minor-pyspark-worker-errmsg.
      cc86fcd0
  6. Nov 09, 2016
    • [SPARK-18147][SQL] do not fail for very complex aggregator result type · 6021c95a
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      ~In `TypedAggregateExpression.evaluateExpression`, we may create `ReferenceToExpressions` with `CreateStruct`, and `CreateStruct` may generate too many codes and split them into several methods.  `ReferenceToExpressions` will replace `BoundReference` in `CreateStruct` with `LambdaVariable`, which can only be used as local variables and doesn't work if we split the generated code.~
      
      It's already fixed by #15693; this PR adds a regression test.
      
      ## How was this patch tested?
      
      new test in `DatasetAggregatorSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15807 from cloud-fan/typed-agg.
      6021c95a
    • [SPARK-17829][SQL] Stable format for offset log · 3f62e1b5
      Tyson Condie authored
      ## What changes were proposed in this pull request?
      
      Currently we use Java serialization for the WAL that stores the offsets contained in each batch. This has two main issues:
      - It can break across Spark releases (though this is not the only thing preventing us from upgrading a running query).
      - It is unnecessarily opaque to the user.

      I'd propose we require offsets to provide a user-readable serialization and use that instead. JSON is probably a good option.
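
      A hedged sketch of the direction (the trait and method names here are illustrative, not necessarily the final API): each offset exposes a stable, human-readable JSON form that the log stores instead of Java-serialized bytes.
      ```scala
      // Illustrative only: a Kafka-style offset rendered as JSON, e.g. {"topicA-0":23,"topicA-1":345}
      trait JsonSerializableOffset {
        def json: String
      }

      case class KafkaLikeOffset(partitionOffsets: Map[String, Long]) extends JsonSerializableOffset {
        override def json: String =
          partitionOffsets.map { case (tp, off) => s""""$tp":$off""" }.mkString("{", ",", "}")
      }
      ```
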
      ## How was this patch tested?
      
      Tests were added for KafkaSourceOffset in [KafkaSourceOffsetSuite](external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSourceOffsetSuite.scala) and for LongOffset in [OffsetSuite](sql/core/src/test/scala/org/apache/spark/sql/streaming/OffsetSuite.scala)
      
      zsxwing marmbrus
      
      Author: Tyson Condie <tcondie@gmail.com>
      Author: Tyson Condie <tcondie@clash.local>
      
      Closes #15626 from tcondie/spark-8360.
      3f62e1b5
    • [SPARK-18191][CORE][FOLLOWUP] Call `setConf` if `OutputFormat` is `Configurable`. · 64fbdf1a
      jiangxingbo authored
      ## What changes were proposed in this pull request?
      
      We should call `setConf` if the `OutputFormat` is `Configurable`; this should be done before we create the `OutputCommitter` and `RecordWriter`.
      This is a follow-up of #15769; see the discussion [here](https://github.com/apache/spark/pull/15769/files#r87064229).
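
      A minimal sketch of the guard (assuming a Hadoop `Configuration` and the job's `OutputFormat` class are at hand; not the exact Spark code):
      ```scala
      import org.apache.hadoop.conf.{Configurable, Configuration}
      import org.apache.hadoop.mapreduce.OutputFormat

      def newConfiguredOutputFormat[K, V](
          outputFormatClass: Class[_ <: OutputFormat[K, V]],
          hadoopConf: Configuration): OutputFormat[K, V] = {
        val format: OutputFormat[K, V] = outputFormatClass.newInstance()
        format match {
          // Must happen before getOutputCommitter/getRecordWriter are called on it.
          case c: Configurable => c.setConf(hadoopConf)
          case _ => // not Configurable, nothing to configure
        }
        format
      }
      ```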
      
      ## How was this patch tested?
      
      Add test of this case in `PairRDDFunctionsSuite`.
      
      Author: jiangxingbo <jiangxb1987@gmail.com>
      
      Closes #15823 from jiangxb1987/config-format.
      64fbdf1a
    • [SPARK-18370][SQL] Add table information to InsertIntoHadoopFsRelationCommand · d8b81f77
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      `InsertIntoHadoopFsRelationCommand` does not keep track of whether it inserts into a table and which table it inserts into. This can make debugging these statements problematic. This PR adds table information to `InsertIntoHadoopFsRelationCommand`. Explaining the SQL command `insert into prq select * from range(0, 100000)` now yields the following executed plan:
      ```
      == Physical Plan ==
      ExecutedCommand
         +- InsertIntoHadoopFsRelationCommand file:/dev/assembly/spark-warehouse/prq, ParquetFormat, <function1>, Map(serialization.format -> 1, path -> file:/dev/assembly/spark-warehouse/prq), Append, CatalogTable(
      	Table: `default`.`prq`
      	Owner: hvanhovell
      	Created: Wed Nov 09 17:42:30 CET 2016
      	Last Access: Thu Jan 01 01:00:00 CET 1970
      	Type: MANAGED
      	Schema: [StructField(id,LongType,true)]
      	Provider: parquet
      	Properties: [transient_lastDdlTime=1478709750]
      	Storage(Location: file:/dev/assembly/spark-warehouse/prq, InputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat, OutputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat, Serde: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe, Properties: [serialization.format=1]))
               +- Project [id#7L]
                  +- Range (0, 100000, step=1, splits=None)
      ```
      
      ## How was this patch tested?
      Added extra checks to the `ParquetMetastoreSuite`
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15832 from hvanhovell/SPARK-18370.
      d8b81f77
    • [SPARK-18368][SQL] Fix regexp replace when serialized · d4028de9
      Ryan Blue authored
      ## What changes were proposed in this pull request?
      
      This makes the result value both transient and lazy, so that if the RegExpReplace object is initialized then serialized, `result: StringBuffer` will be correctly initialized.
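
      A minimal sketch of the pattern (the surrounding expression class is omitted; only the field declaration matters here):
      ```scala
      class RegExpReplaceLike extends Serializable {
        // Not written out during serialization, and re-created lazily on first access after
        // deserialization, so it is never null on the executor side.
        @transient private lazy val result: java.lang.StringBuffer = new java.lang.StringBuffer
      }
      ```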
      
      ## How was this patch tested?
      
      * Verified that this patch fixed the query that found the bug.
      * Added a test case that fails without the fix.
      
      Author: Ryan Blue <blue@apache.org>
      
      Closes #15834 from rdblue/SPARK-18368-fix-regexp-replace.
      d4028de9
    • Revert "[SPARK-18368] Fix regexp_replace with task serialization." · 47636618
      Yin Huai authored
      This reverts commit b9192bb3.
      47636618
    • [SPARK-16808][CORE] History Server main page does not honor APPLICATION_WEB_PROXY_BASE · 06a13ecc
      Vinayak authored
      ## What changes were proposed in this pull request?
      
      Application links generated on the History Server UI no longer (a regression from 1.6) contain the configured spark.ui.proxyBase. To address this, the uiRoot is made available globally to all JavaScript for the Web UI, and the mustache template (historypage-template.html) is updated to include the uiRoot when rendering links to the applications.

      The existing test was not sufficient to verify the scenario where an AJAX call is used to populate the application listing template, so a new Selenium test case was added to cover this scenario.
      
      ## How was this patch tested?
      
      Existing tests and a new unit test.
      No visual changes to the UI.
      
      Author: Vinayak <vijoshi5@in.ibm.com>
      
      Closes #15742 from vijoshi/SPARK-16808_master.
      06a13ecc
    • [SPARK-18338][SQL][TEST-MAVEN] Fix test case initialization order under Maven builds · 205e6d58
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      Test case initialization order under Maven and SBT is different. Maven always creates instances of all test cases and then runs them all together.

      This breaks `ObjectHashAggregateSuite` because the randomized test cases there register a temporary Hive function right before creating a test case, and that function can be cleared while other, later test cases are being initialized. In SBT this is fine, since the created test case is executed immediately after creating the temporary function.

      To fix this issue, we should put initialization/destruction code into `beforeAll()` and `afterAll()`.
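
      A hedged sketch of the pattern (suite and fixture names are illustrative): shared fixtures are created once per suite rather than at test-registration time, so Maven's create-all-then-run-all ordering cannot clear them before the tests run.
      ```scala
      import org.scalatest.{BeforeAndAfterAll, FunSuite}

      class RandomizedAggSuite extends FunSuite with BeforeAndAfterAll {
        override def beforeAll(): Unit = {
          super.beforeAll()
          // register the temporary Hive function used by the randomized tests here
        }

        override def afterAll(): Unit = {
          try {
            // drop the temporary Hive function here
          } finally {
            super.afterAll()
          }
        }

        test("randomized aggregation") {
          // uses the temporary function registered in beforeAll()
        }
      }
      ```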
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #15802 from liancheng/fix-flaky-object-hash-agg-suite.
      205e6d58
    • [SPARK-18292][SQL] LogicalPlanToSQLSuite should not use resource dependent... · 02c5325b
      Dongjoon Hyun authored
      [SPARK-18292][SQL] LogicalPlanToSQLSuite should not use resource dependent path for golden file generation
      
      ## What changes were proposed in this pull request?
      
      `LogicalPlanToSQLSuite` uses the following command to update the existing answer files.
      
      ```bash
      SPARK_GENERATE_GOLDEN_FILES=1 build/sbt "hive/test-only *LogicalPlanToSQLSuite"
      ```
      
      However, after introducing `getTestResourcePath`, it fails to update the previous golden answer files in the predefined directory. This issue aims to fix that.
      
      ## How was this patch tested?
      
      It's a testsuite update. Manual.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #15789 from dongjoon-hyun/SPARK-18292.
      02c5325b
    • [SPARK-17659][SQL] Partitioned View is Not Supported By SHOW CREATE TABLE · e256392a
      gatorsmile authored
      ### What changes were proposed in this pull request?
      
      `Partitioned View` is not supported by Spark SQL. For a Hive partitioned view, SHOW CREATE TABLE is unable to generate the right DDL, so SHOW CREATE TABLE should not support it, like the other Hive-only features. This PR issues an exception when it detects that the view is a partitioned view.
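
      An illustrative repro, assuming a partitioned view created directly in Hive (e.g. `CREATE VIEW v PARTITIONED ON (ds) AS SELECT key, ds FROM src`); the exception message shown is a placeholder, not the exact wording:
      ```scala
      scala> sql("SHOW CREATE TABLE v")
      // With this patch: throws an exception explaining that SHOW CREATE TABLE does not support
      // partitioned views, instead of silently emitting incomplete DDL.
      ```
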
      ### How was this patch tested?
      
      Added a test case
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15233 from gatorsmile/partitionedView.
      e256392a
    • [SPARK-18368] Fix regexp_replace with task serialization. · b9192bb3
      Ryan Blue authored
      ## What changes were proposed in this pull request?
      
      This makes the result value both transient and lazy, so that if the RegExpReplace object is initialized then serialized, `result: StringBuffer` will be correctly initialized.
      
      ## How was this patch tested?
      
      * Verified that this patch fixed the query that found the bug.
      * Added a test case that fails without the fix.
      
      Author: Ryan Blue <blue@apache.org>
      
      Closes #15816 from rdblue/SPARK-18368-fix-regexp-replace.
      b9192bb3
    • [SPARK-18333][SQL] Revert hacks in parquet and orc reader to support case insensitive resolution · 4afa39e2
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      These are no longer needed after https://issues.apache.org/jira/browse/SPARK-17183
      
      cc cloud-fan
      
      ## How was this patch tested?
      
      Existing parquet and orc tests.
      
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #15799 from ericl/sc-4929.
      4afa39e2