  1. Jan 04, 2017
    • [SPARK-18877][SQL][BACKPORT-2.1] `CSVInferSchema.inferField` on DecimalType... · 1ecf1a95
      Dongjoon Hyun authored
      [SPARK-18877][SQL][BACKPORT-2.1] `CSVInferSchema.inferField` on DecimalType should find a common type with `typeSoFar`
      
      ## What changes were proposed in this pull request?
      
      CSV type inference causes `IllegalArgumentException` on decimal numbers with heterogeneous precisions and scales because the current logic uses the last decimal type seen in a **partition**. Specifically, `inferRowType`, the **seqOp** of **aggregate**, returns the last decimal type. This PR fixes it to use `findTightestCommonType`.
      
      **decimal.csv**
      ```
      9.03E+12
      1.19E+11
      ```
      
      **BEFORE**
      ```scala
      scala> spark.read.format("csv").option("inferSchema", true).load("decimal.csv").printSchema
      root
       |-- _c0: decimal(3,-9) (nullable = true)
      
      scala> spark.read.format("csv").option("inferSchema", true).load("decimal.csv").show
      16/12/16 14:32:49 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 4)
      java.lang.IllegalArgumentException: requirement failed: Decimal precision 4 exceeds max precision 3
      ```
      
      **AFTER**
      ```scala
      scala> spark.read.format("csv").option("inferSchema", true).load("decimal.csv").printSchema
      root
       |-- _c0: decimal(4,-9) (nullable = true)
      
      scala> spark.read.format("csv").option("inferSchema", true).load("decimal.csv").show
      +---------+
      |      _c0|
      +---------+
      |9.030E+12|
      | 1.19E+11|
      +---------+
      ```
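
      For intuition, here is a minimal sketch of the widening rule (illustrative only, not Spark's exact code): the common type must keep the larger integer-digit range and the larger scale of the two inputs.

      ```scala
      import org.apache.spark.sql.types._

      // Illustrative stand-in for what findTightestCommonType does for decimals:
      // widen the two types instead of keeping the last one seen.
      def widenDecimal(t1: DecimalType, t2: DecimalType): DecimalType = {
        val scale = math.max(t1.scale, t2.scale)                               // finer scale wins
        val range = math.max(t1.precision - t1.scale, t2.precision - t2.scale) // integer digits
        DecimalType(range + scale, scale)
      }

      // 9.03E+12 infers as decimal(3,-10) and 1.19E+11 as decimal(3,-9);
      // widening them yields decimal(4,-9), matching the AFTER schema above.
      ```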
      
      ## How was this patch tested?
      
      Pass the newly added test case.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #16463 from dongjoon-hyun/SPARK-18877-BACKPORT-21.
  2. Jan 03, 2017
    • [SPARK-19048][SQL] Delete Partition Location when Dropping Managed Partitioned... · 77625506
      gatorsmile authored
      [SPARK-19048][SQL] Delete Partition Location when Dropping Managed Partitioned Tables in InMemoryCatalog
      
      ### What changes were proposed in this pull request?
      The data in a managed table should be deleted after the table is dropped. However, if a partition's location is not under the location of the partitioned table, it is not deleted as expected. Users can specify any location for a partition when adding it.
      
      This PR is to delete partition location when dropping managed partitioned tables stored in `InMemoryCatalog`.
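
      A sketch of the scenario (table name and paths are illustrative, not from this PR's tests):

      ```scala
      scala> spark.sql("CREATE TABLE events (msg STRING, day STRING) USING parquet PARTITIONED BY (day)")
      scala> spark.sql("ALTER TABLE events ADD PARTITION (day='2016-12-31') LOCATION '/elsewhere/day=2016-12-31'")
      scala> spark.sql("DROP TABLE events")  // before this fix, InMemoryCatalog left /elsewhere/day=2016-12-31 behind
      ```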
      
      ### How was this patch tested?
      Added test cases for both HiveExternalCatalog and InMemoryCatalog
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #16448 from gatorsmile/unsetSerdeProp.
      
      (cherry picked from commit b67b35f7)
      Signed-off-by: gatorsmile <gatorsmile@gmail.com>
  3. Dec 30, 2016
    • [SPARK-19016][SQL][DOC] Document scalable partition handling · 20ae1172
      Cheng Lian authored
      
      This PR documents the scalable partition handling feature in the body of the programming guide.
      
      Before this PR, we only mentioned it in the migration guide. It's not obvious that, since 2.1, external datasource tables require an extra `MSCK REPAIR TABLE` command to have their per-partition information persisted.
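
      For example (table name and path illustrative, in the spirit of the new docs):

      ```scala
      scala> spark.sql("CREATE TABLE logs (msg STRING, day STRING) USING parquet OPTIONS (path '/data/logs') PARTITIONED BY (day)")
      scala> spark.sql("MSCK REPAIR TABLE logs")  // discover partitions and persist per-partition metadata
      ```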
      
      N/A.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #16424 from liancheng/scalable-partition-handling-doc.
      
      (cherry picked from commit 871f6114)
      Signed-off-by: Cheng Lian <lian@databricks.com>
  4. Dec 29, 2016
    • [SPARK-19003][DOCS] Add Java example in Spark Streaming Guide, section Design... · 47ab4afe
      adesharatushar authored
      [SPARK-19003][DOCS] Add Java example in Spark Streaming Guide, section Design Patterns for using foreachRDD
      
      ## What changes were proposed in this pull request?
      
      Added the missing Java example under the section "Design Patterns for using foreachRDD". Now this section has examples in all 3 languages, improving the consistency of the documentation.
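
      The pattern that section documents, sketched here in Scala for reference (`Connection` and `createNewConnection` are placeholders for a real connection type and factory):

      ```scala
      import org.apache.spark.streaming.dstream.DStream

      trait Connection { def send(record: String): Unit; def close(): Unit }
      def createNewConnection(): Connection = ???  // placeholder factory

      def writeToExternalSystem(dstream: DStream[String]): Unit =
        dstream.foreachRDD { rdd =>
          rdd.foreachPartition { partitionOfRecords =>
            val connection = createNewConnection()  // created on the worker, once per partition
            partitionOfRecords.foreach(record => connection.send(record))
            connection.close()
          }
        }
      ```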
      
      ## How was this patch tested?
      
      Manual.
      Generated docs with `SKIP_API=1 jekyll build` and verified the generated HTML page manually.

      The syntax of the example has been tested for correctness against sample code on Java 1.7 and Spark 2.2.0-SNAPSHOT.
      
      Author: adesharatushar <tushar_adeshara@persistent.com>
      
      Closes #16408 from adesharatushar/streaming-doc-fix.
      
      (cherry picked from commit dba81e1d)
      Signed-off-by: Sean Owen <sowen@cloudera.com>
  5. Dec 23, 2016
    • [SPARK-18991][CORE] Change ContextCleaner.referenceBuffer to use... · 5bafdc45
      Shixiong Zhu authored
      [SPARK-18991][CORE] Change ContextCleaner.referenceBuffer to use ConcurrentHashMap to make it faster
      
      ## What changes were proposed in this pull request?
      
      The time complexity of `ConcurrentHashMap`'s `remove` is O(1), versus O(n) for `ConcurrentLinkedQueue`. Changing `ContextCleaner.referenceBuffer`'s type from `ConcurrentLinkedQueue` to `ConcurrentHashMap` will make the removal much faster.
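
      A standalone sketch of why the swap helps (`Ref` is a stand-in for Spark's internal reference type):

      ```scala
      import java.util.Collections
      import java.util.concurrent.{ConcurrentHashMap, ConcurrentLinkedQueue}

      case class Ref(id: Long)
      val ref = Ref(1L)

      val queue = new ConcurrentLinkedQueue[Ref]()
      queue.add(ref)
      queue.remove(ref)  // O(n): scans the queue looking for the element

      val set = Collections.newSetFromMap(new ConcurrentHashMap[Ref, java.lang.Boolean]())
      set.add(ref)
      set.remove(ref)    // O(1) expected: hash-based lookup
      ```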
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #16390 from zsxwing/SPARK-18991.
      
      (cherry picked from commit a848f0ba)
      Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
  6. Dec 22, 2016
    • [SPARK-18972][CORE] Fix the netty thread names for RPC · 1857acc7
      Shixiong Zhu authored
      
      ## What changes were proposed in this pull request?
      
      Right now the names of the threads created by Netty for Spark RPC are `shuffle-client-**` and `shuffle-server-**`, which is pretty confusing.
      
      This PR just uses the module name in TransportConf to set the thread name. In addition, it also includes the following minor fixes:
      
      - TransportChannelHandler.channelActive and channelInactive should call the corresponding super methods.
      - Make ShuffleBlockFetcherIterator throw NoSuchElementException when it has no more elements; otherwise, if the caller calls `next` without checking `hasNext`, it will just hang (see the sketch below).
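
      A minimal sketch of that second fix (not Spark's actual class): guard `next` so it fails fast instead of blocking on a results queue that will never be filled again.

      ```scala
      import java.util.concurrent.LinkedBlockingQueue

      class FetchIterator[T](results: LinkedBlockingQueue[T], private var remaining: Int)
          extends Iterator[T] {
        override def hasNext: Boolean = remaining > 0
        override def next(): T = {
          if (!hasNext) throw new NoSuchElementException("all fetch results consumed")
          remaining -= 1
          results.take()  // without the guard above, this would hang when exhausted
        }
      }
      ```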
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #16380 from zsxwing/SPARK-18972.
      
      (cherry picked from commit f252cb5d)
      Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
    • [SPARK-18985][SS] Add missing @InterfaceStability.Evolving for Structured Streaming APIs · 5e801034
      Shixiong Zhu authored
      
      ## What changes were proposed in this pull request?
      
      Add missing InterfaceStability.Evolving for Structured Streaming APIs
      
      ## How was this patch tested?
      
      Compiling the code.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #16385 from zsxwing/SPARK-18985.
      
      (cherry picked from commit 2246ce88)
      Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
    • [SPARK-17807][CORE] split test-tags into test-JAR · 132f2297
      Ryan Williams authored
      
      Remove spark-tags' compile-scope dependency (and, indirectly, spark-core's compile-scope transitive dependency) on scalatest by splitting test-oriented tags into spark-tags' test JAR.
      
      Alternative to #16303.
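
      Downstream, consuming the split artifact looks roughly like this in sbt (coordinates illustrative; a sketch, not taken from this PR):

      ```scala
      // Test-only tags now come from spark-tags' "tests" classifier in test scope,
      // so scalatest no longer leaks into compile scope.
      libraryDependencies += "org.apache.spark" %% "spark-tags" % "2.1.0" % "test" classifier "tests"
      ```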
      
      Author: Ryan Williams <ryan.blake.williams@gmail.com>
      
      Closes #16311 from ryan-williams/tt.
      
      (cherry picked from commit afd9bc1d)
      Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
    • [SPARK-18973][SQL] Remove SortPartitions and RedistributeData · f6853b3e
      Reynold Xin authored
      
      ## What changes were proposed in this pull request?
      SortPartitions and RedistributeData logical operators are not actually used and can be removed. Note that we do have a Sort operator (with global flag false) that subsumed SortPartitions.
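
      For reference, the surviving operator is what the public API exposes:

      ```scala
      val df = spark.range(100).toDF("id")
      df.sortWithinPartitions("id")  // Sort with global = false, covering what SortPartitions expressed
      df.sort("id")                  // Sort with global = true
      ```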
      
      ## How was this patch tested?
      Also updated test cases to reflect the removal.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #16381 from rxin/SPARK-18973.
      
      (cherry picked from commit 26151000)
      Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
    • [DOC] bucketing is applicable to all file-based data sources · ec0d6e21
      Reynold Xin authored
      
      ## What changes were proposed in this pull request?
      Starting with Spark 2.1.0, the bucketing feature is available for all file-based data sources. This patch fixes some function docs that haven't yet been updated to reflect that.
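
      For example (table and column names illustrative):

      ```scala
      import org.apache.spark.sql.DataFrame

      def writeBucketed(df: DataFrame): Unit =
        df.write
          .format("parquet")  // any file-based data source works in 2.1+
          .bucketBy(8, "user_id")
          .sortBy("user_id")
          .saveAsTable("users_bucketed")
      ```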
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #16349 from rxin/ds-doc.
      
      (cherry picked from commit 2e861df9)
      Signed-off-by: Reynold Xin <rxin@databricks.com>
    • [SQL] Minor readability improvement for partition handling code · def3690f
      Reynold Xin authored
      
      This patch includes minor changes to improve readability for partition handling code. I'm in the middle of implementing some new feature and found some naming / implicit type inference not as intuitive.
      
      This patch should have no semantic change and the changes should be covered by existing test cases.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #16378 from rxin/minor-fix.
      
      (cherry picked from commit 7c5b7b3a)
      Signed-off-by: Reynold Xin <rxin@databricks.com>
    • [SPARK-18908][SS] Creating StreamingQueryException should check if logicalPlan is created · 07e2a17d
      Shixiong Zhu authored
      
      ## What changes were proposed in this pull request?
      
      This PR audits places using `logicalPlan` in StreamExecution and ensures they all handles the case that `logicalPlan` cannot be created.
      
      In addition, this PR also fixes the following issues in `StreamingQueryException`:
      - `StreamingQueryException` and `StreamExecution` are cyclically dependent: `StreamingQueryException`'s constructor calls `StreamExecution`'s `toDebugString`, which in turn uses `StreamingQueryException`, so the error message ends up containing a `null` value.
      - The stack trace is duplicated when calling `Throwable.printStackTrace`, because `StreamingQueryException`'s `toString` already contains the stack trace.
      
      ## How was this patch tested?
      
      The updated `test("max files per trigger - incorrect values")`. I found this issue when I switched from `testStream` to real code to verify the failure in this test.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #16322 from zsxwing/SPARK-18907.
      
      (cherry picked from commit ff7d82a2)
      Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
  7. Dec 20, 2016
    • [SPARK-18900][FLAKY-TEST] StateStoreSuite.maintenance · 063a98e5
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      This test was pretty flaky until about 10 days ago.
      https://spark-tests.appspot.com/test-details?suite_name=org.apache.spark.sql.execution.streaming.state.StateStoreSuite&test_name=maintenance

      Since no code changes went into this code path that would explain the flakiness, I'm just increasing the timeouts so that load-related flakiness shouldn't be a problem. As you may see from the testing, I haven't been able to reproduce it.
      
      ## How was this patch tested?
      
      2000 retries 5 times
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #16314 from brkyvz/maint-flaky.
      
      (cherry picked from commit b2dd8ec6)
      Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
    • [SPARK-18927][SS] MemorySink for StructuredStreaming can't recover from... · 3857d5ba
      Burak Yavuz authored
      [SPARK-18927][SS] MemorySink for StructuredStreaming can't recover from checkpoint if location is provided in SessionConf
      
      ## What changes were proposed in this pull request?
      
      A checkpoint location can be defined for a Structured Streaming query on a per-query basis via the `DataStreamWriter` options, but it can also be provided through SparkSession configurations. With a MemorySink, the query should be able to recover in both cases when the OutputMode is Complete.
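
      A sketch of the two ways to supply the location (paths and query name illustrative):

      ```scala
      import org.apache.spark.sql.{DataFrame, SparkSession}

      def startQuery(spark: SparkSession, streamingDF: DataFrame) = {
        // Session-wide default via SparkSession configuration...
        spark.conf.set("spark.sql.streaming.checkpointLocation", "/tmp/checkpoints")
        streamingDF.writeStream
          .format("memory")
          .queryName("aggregates")
          .outputMode("complete")
          // ...or per query via the writer option:
          // .option("checkpointLocation", "/tmp/checkpoints/aggregates")
          .start()
      }
      ```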
      
      ## How was this patch tested?
      
      Unit tests
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #16342 from brkyvz/chk-rec.
      
      (cherry picked from commit caed8932)
      Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
    • [SPARK-18281] [SQL] [PYSPARK] Remove timeout for reading data through socket for local iterator · cd297c39
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      There is a timeout failure when using `rdd.toLocalIterator()` or `df.toLocalIterator()` for a PySpark RDD and DataFrame:
      
          df = spark.createDataFrame([[1],[2],[3]])
          it = df.toLocalIterator()
          row = next(it)
      
          df2 = df.repartition(1000)  # create many empty partitions which increase materialization time so causing timeout
          it2 = df2.toLocalIterator()
          row = next(it2)
      
      The cause of this issue is that we open a socket to serve the data from the JVM side, and we set a timeout for connecting and reading through that socket on the Python side. In Python we use a generator to read the data, so we only connect to the socket once we start to ask for data from it. If we don't consume it immediately, the connection times out.

      On the other side, the materialization time for RDD partitions is unpredictable. So we can't set a timeout for reading data through the socket; otherwise, it is very likely to fail.
      
      ## How was this patch tested?
      
      Added tests into PySpark.
      
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #16263 from viirya/fix-pyspark-localiterator.
      
      (cherry picked from commit 95c95b71)
      Signed-off-by: Davies Liu <davies.liu@gmail.com>
    • [SPARK-18761][CORE] Introduce "task reaper" to oversee task killing in executors · 2971ae56
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
      Spark's current task cancellation / task killing mechanism is "best effort" because some tasks may not be interruptible or may not respond to their "killed" flags being set. If a significant fraction of a cluster's task slots are occupied by tasks that have been marked as killed but remain running then this can lead to a situation where new jobs and tasks are starved of resources that are being used by these zombie tasks.
      
      This patch aims to address this problem by adding a "task reaper" mechanism to executors. At a high-level, task killing now launches a new thread which attempts to kill the task and then watches the task and periodically checks whether it has been killed. The TaskReaper will periodically re-attempt to call `TaskRunner.kill()` and will log warnings if the task keeps running. I modified TaskRunner to rename its thread at the start of the task, allowing TaskReaper to take a thread dump and filter it in order to log stacktraces from the exact task thread that we are waiting to finish. If the task has not stopped after a configurable timeout then the TaskReaper will throw an exception to trigger executor JVM death, thereby forcibly freeing any resources consumed by the zombie tasks.
      
      This feature is flagged off by default and is controlled by four new configurations under the `spark.task.reaper.*` namespace. See the updated `configuration.md` doc for details.
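
      Roughly, the knobs look like this (names per `configuration.md`; defaults and semantics should be verified against your Spark version):

      ```scala
      import org.apache.spark.SparkConf

      val conf = new SparkConf()
        .set("spark.task.reaper.enabled", "true")         // feature flag; off by default
        .set("spark.task.reaper.pollingInterval", "10s")  // how often the reaper re-checks and re-kills
        .set("spark.task.reaper.threadDump", "true")      // log thread dumps while waiting
        .set("spark.task.reaper.killTimeout", "-1")       // deadline before killing the JVM; -1 disables
      ```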
      
      ## How was this patch tested?
      
      Tested via a new test case in `JobCancellationSuite`, plus manual testing.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #16189 from JoshRosen/cancellation.