  1. Sep 16, 2016
    • Reynold Xin's avatar
      [SPARK-17558] Bump Hadoop 2.7 version from 2.7.2 to 2.7.3 · dca771be
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch bumps the Hadoop version in hadoop-2.7 profile from 2.7.2 to 2.7.3, which was recently released and contained a number of bug fixes.
      
      ## How was this patch tested?
      The change should be covered by existing tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #15115 from rxin/SPARK-17558.
      dca771be
    • Sean Zhong's avatar
      [SPARK-17426][SQL] Refactor `TreeNode.toJSON` to avoid OOM when converting unknown fields to JSON · a425a37a
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
This PR is a follow up of SPARK-17356. The current implementation of `TreeNode.toJSON` recursively converts all fields of a TreeNode to JSON, even if the field is of type `Seq` or `Map`. This may trigger an out of memory exception in cases like:

1. The `Seq` or `Map` can be very big. Converting it to JSON may consume a huge amount of memory and trigger an out of memory error.
2. Some user space input may also be propagated into the Plan. The user space input can be of arbitrary type and may even be self-referencing. Trying to convert such input to JSON may trigger an out of memory error or a stack overflow error.
      
      For a code example, please check the Jira description of SPARK-17426.
      
In this PR, we refactor `TreeNode.toJSON` so that a field is only converted to a JSON string if it is of a safe type.
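As a rough sketch of the idea (the predicate and helper names below are illustrative assumptions, not the actual Spark implementation), only fields whose runtime type is known to be cheap and bounded would be rendered:

```
// A hypothetical "safe type" predicate: only simple, bounded values are rendered.
object SafeJsonSketch {
  def isSafeToJson(value: Any): Boolean = value match {
    case null => true
    case _: String | _: Boolean | _: Byte | _: Short | _: Int | _: Long |
         _: Float | _: Double => true
    case _ => false // arbitrary Seq/Map contents or user-space objects are skipped
  }

  // Render a field only if its value is safe; otherwise omit it from the JSON.
  def fieldToJson(name: String, value: Any): Option[(String, String)] =
    if (isSafeToJson(value)) Some(name -> value.toString) else None
}
```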
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #14990 from clockfly/json_oom2.
      a425a37a
    • Adam Roberts's avatar
      [SPARK-17534][TESTS] Increase timeouts for DirectKafkaStreamSuite tests · fc1efb72
      Adam Roberts authored
## What changes were proposed in this pull request?
      There are two tests in this suite that are particularly flaky on the following hardware:
      
      2x Intel(R) Xeon(R) CPU E5-2697 v2  2.70GHz and 16 GB of RAM, 1 TB HDD
      
This simple PR increases the timeouts and the batch duration so the tests can pass reliably.
      
## How was this patch tested?
Existing unit tests, run on the two-core box where I was often seeing the failures.
      
      Author: Adam Roberts <aroberts@uk.ibm.com>
      
      Closes #15094 from a-roberts/patch-6.
      fc1efb72
    • Jagadeesan's avatar
      [SPARK-17543] Missing log4j config file for tests in common/network-… · b2e27262
      Jagadeesan authored
      ## What changes were proposed in this pull request?
      
      The Maven module `common/network-shuffle` does not have a log4j configuration file for its test cases. So, added `log4j.properties` in the directory `src/test/resources`.
      
      …shuffle]
      
      Author: Jagadeesan <as2@us.ibm.com>
      
      Closes #15108 from jagadeesanas2/SPARK-17543.
      b2e27262
  2. Sep 15, 2016
    • Andrew Ray's avatar
      [SPARK-17458][SQL] Alias specified for aggregates in a pivot are not honored · b72486f8
      Andrew Ray authored
      ## What changes were proposed in this pull request?
      
      This change preserves aliases that are given for pivot aggregations
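For example, an alias on the pivot aggregate is now kept instead of being discarded (a small illustrative DataFrame; column names are assumptions):

```
// Assumes a spark-shell / SparkSession context with implicits available.
import org.apache.spark.sql.functions.sum
import spark.implicits._

val df = Seq((2016, "java", 100), (2016, "scala", 200)).toDF("year", "course", "earnings")

// With this change, the alias supplied via .as("total") is preserved in the
// pivoted output instead of being silently dropped (the exact output column
// naming is not shown here and may differ).
df.groupBy("year").pivot("course").agg(sum($"earnings").as("total")).show()
```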
      
      ## How was this patch tested?
      
      New unit test
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #15111 from aray/SPARK-17458.
      b72486f8
    • Josh Rosen's avatar
      [SPARK-17484] Prevent invalid block locations from being reported after put() exceptions · 1202075c
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
If a BlockManager `put()` call failed after the BlockManagerMaster was notified of a block's availability then incomplete cleanup logic in a `finally` block would never send a second block status message to inform the master of the block's unavailability. This, in turn, leads to fetch failures and used to be capable of causing complete job failures before #15037 was fixed.
      
      This patch addresses this issue via multiple small changes:
      
      - The `finally` block now calls `removeBlockInternal` when cleaning up from a failed `put()`; in addition to removing the `BlockInfo` entry (which was _all_ that the old cleanup logic did), this code (redundantly) tries to remove the block from the memory and disk stores (as an added layer of defense against bugs lower down in the stack) and optionally notifies the master of block removal (which now happens during exception-triggered cleanup).
      - When a BlockManager receives a request for a block that it does not have it will now notify the master to update its block locations. This ensures that bad metadata pointing to non-existent blocks will eventually be fixed. Note that I could have implemented this logic in the block manager client (rather than in the remote server), but that would introduce the problem of distinguishing between transient and permanent failures; on the server, however, we know definitively that the block isn't present.
      - Catch `NonFatal` instead of `Exception` to avoid swallowing `InterruptedException`s thrown from synchronous block replication calls.
      
      This patch depends upon the refactorings in #15036, so that other patch will also have to be backported when backporting this fix.
      
      For more background on this issue, including example logs from a real production failure, see [SPARK-17484](https://issues.apache.org/jira/browse/SPARK-17484).
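As a rough illustration of the cleanup pattern described in the first bullet (names simplified; this is not the actual BlockManager code):

```
// Hypothetical sketch of the cleanup pattern (not the real BlockManager API).
object PutCleanupSketch {
  def removeBlockInternal(blockId: String, tellMaster: Boolean): Unit = {
    // In the real code this drops the BlockInfo entry, purges the memory and
    // disk stores, and optionally reports the removal to the BlockManagerMaster.
    println(s"cleaning up $blockId (tellMaster=$tellMaster)")
  }

  def doPut(blockId: String)(body: => Unit): Unit = {
    var succeeded = false
    try {
      body
      succeeded = true
    } finally {
      // On failure, clean up fully so no stale location is left on the master.
      if (!succeeded) removeBlockInternal(blockId, tellMaster = true)
    }
  }
}
```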
      
      ## How was this patch tested?
      
      Two new regression tests in BlockManagerSuite.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15085 from JoshRosen/SPARK-17484.
      1202075c
    • Sean Zhong's avatar
      [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a... · a6b81820
      Sean Zhong authored
      [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a decimal number token when parsing SQL string
      
      ## What changes were proposed in this pull request?
      
      The Antlr lexer we use to tokenize a SQL string may wrongly tokenize a fully qualified identifier as a decimal number token. For example, table identifier `default.123_table` is wrongly tokenized as
      ```
      default // Matches lexer rule IDENTIFIER
      .123 // Matches lexer rule DECIMAL_VALUE
      _TABLE // Matches lexer rule IDENTIFIER
      ```
      
      The correct tokenization for `default.123_table` should be:
      ```
      default // Matches lexer rule IDENTIFIER,
      . // Matches a single dot
      123_TABLE // Matches lexer rule IDENTIFIER
      ```
      
This PR fixes the Antlr grammar so that it tokenizes fully qualified identifiers correctly:
      1. Fully qualified table name can be parsed correctly. For example, `select * from database.123_suffix`.
      2. Fully qualified column name can be parsed correctly, for example `select a.123_suffix from a`.
      
      ### Before change
      
      #### Case 1: Failed to parse fully qualified column name
      
      ```
      scala> spark.sql("select a.123_column from a").show
      org.apache.spark.sql.catalyst.parser.ParseException:
      extraneous input '.123' expecting {<EOF>,
      ...
      , IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 8)
      == SQL ==
      select a.123_column from a
      --------^^^
      ```
      
      #### Case 2: Failed to parse fully qualified table name
      ```
      scala> spark.sql("select * from default.123_table")
      org.apache.spark.sql.catalyst.parser.ParseException:
      extraneous input '.123' expecting {<EOF>,
      ...
      IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 21)
      
      == SQL ==
      select * from default.123_table
      ---------------------^^^
      ```
      
      ### After Change
      
      #### Case 1: fully qualified column name, no ParseException thrown
      ```
      scala> spark.sql("select a.123_column from a").show
      ```
      
      #### Case 2: fully qualified table name, no ParseException thrown
      ```
      scala> spark.sql("select * from default.123_table")
      ```
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #15006 from clockfly/SPARK-17364.
      a6b81820
    • 岑玉海's avatar
      [SPARK-17429][SQL] use ImplicitCastInputTypes with function Length · fe767395
      岑玉海 authored
      ## What changes were proposed in this pull request?
`select length(11);` and `select length(2.0);` currently return errors in Spark SQL, while Hive handles them fine.
This PR supports casting the input types implicitly for the `length` function, so these queries return the correct results:
`select length(11)` returns 2
`select length(2.0)` returns 3
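A simplified analogue of the idea (deliberately not using the real catalyst classes): an expression declares the input type it expects, and the analyzer inserts a cast for numeric arguments:

```
// Simplified analogue (not the real catalyst classes): an expression declares
// the input type it expects, and other types are implicitly cast to it.
sealed trait SqlType
case object StrType extends SqlType
case object NumType extends SqlType

def implicitCast(value: Any, to: SqlType): String = to match {
  case StrType => value.toString // numeric input is cast to its string form
  case NumType => sys.error("not needed for this example")
}

// length() expects a string; with the implicit cast, numeric arguments work too.
def length(value: Any): Int = implicitCast(value, StrType).length

assert(length(11) == 2)   // "11"  -> 2
assert(length(2.0) == 3)  // "2.0" -> 3
```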
      
      Author: 岑玉海 <261810726@qq.com>
      Author: cenyuhai <cenyuhai@didichuxing.com>
      
      Closes #15014 from cenyuhai/SPARK-17429.
      fe767395
    • Herman van Hovell's avatar
      [SPARK-17114][SQL] Fix aggregates grouped by literals with empty input · d403562e
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
This PR fixes an issue with aggregates that have an empty input and use literals as their grouping keys. These aggregates are currently interpreted as aggregates **without** grouping keys, which triggers the ungrouped code path (which always returns a single row).
      
      This PR fixes the `RemoveLiteralFromGroupExpressions` optimizer rule, which changes the semantics of the Aggregate by eliminating all literal grouping keys.
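For example, grouping an empty input by a literal should yield zero rows, not the single row the ungrouped path produces (an illustrative query run in a SparkSession):

```
// Aggregate over an empty input, grouped by a literal key.
spark.range(0).createOrReplaceTempView("empty_tab")

// Correct result: no rows at all. Before the fix, the literal grouping key was
// optimized away, the query was planned as an ungrouped aggregate, and a single
// row was (wrongly) returned.
spark.sql("SELECT 'a', COUNT(*) FROM empty_tab GROUP BY 'a'").show()
```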
      
      ## How was this patch tested?
      Added tests to `SQLQueryTestSuite`.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15101 from hvanhovell/SPARK-17114-3.
      d403562e
    • Josh Rosen's avatar
      [SPARK-17547] Ensure temp shuffle data file is cleaned up after error · 5b8f7377
      Josh Rosen authored
      SPARK-8029 (#9610) modified shuffle writers to first stage their data to a temporary file in the same directory as the final destination file and then to atomically rename this temporary file at the end of the write job. However, this change introduced the potential for the temporary output file to be leaked if an exception occurs during the write because the shuffle writers' existing error cleanup code doesn't handle deletion of the temp file.
      
      This patch avoids this potential cause of disk-space leaks by adding `finally` blocks to ensure that temp files are always deleted if they haven't been renamed.
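A minimal sketch of the pattern (file handling simplified; not the actual shuffle writer code):

```
import java.io.File

// Hypothetical sketch: stage output in a temp file next to the destination and
// guarantee the temp file is removed if it was never renamed.
def writeAtomically(dest: File)(write: File => Unit): Unit = {
  val tmp = File.createTempFile(dest.getName, ".tmp", dest.getParentFile)
  try {
    write(tmp)
    if (!tmp.renameTo(dest)) {
      throw new java.io.IOException(s"failed to rename $tmp to $dest")
    }
  } finally {
    // If the rename succeeded the temp file is gone; otherwise delete the leftover.
    if (tmp.exists()) tmp.delete()
  }
}
```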
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15104 from JoshRosen/cleanup-tmp-data-file-in-shuffle-writer.
      5b8f7377
    • Adam Roberts's avatar
      [SPARK-17379][BUILD] Upgrade netty-all to 4.0.41 final for bug fixes · 0ad8eeb4
      Adam Roberts authored
      ## What changes were proposed in this pull request?
Upgrade netty-all to the latest release in the 4.0.x line, which is 4.0.41; the release notes mention several bug fixes and performance improvements we may find useful, see netty.io/news/2016/08/29/4-0-41-Final-4-1-5-Final.html. Initially tried to use 4.1.5 but noticed it's not backwards compatible.
      
      ## How was this patch tested?
      Existing unit tests against branch-1.6 and branch-2.0 using IBM Java 8 on Intel, Power and Z architectures
      
      Author: Adam Roberts <aroberts@uk.ibm.com>
      
      Closes #14961 from a-roberts/netty.
      0ad8eeb4
    • Tejas Patil's avatar
      [SPARK-17451][CORE] CoarseGrainedExecutorBackend should inform driver before self-kill · b4792781
      Tejas Patil authored
      ## What changes were proposed in this pull request?
      
      Jira : https://issues.apache.org/jira/browse/SPARK-17451
      
`CoarseGrainedExecutorBackend` in some failure cases exits the JVM. While this in itself is not an issue, the driver UI captures no specific reason for it. In this PR, I am adding functionality to `exitExecutor` to notify the driver that the executor is exiting.
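Roughly, the idea is to tell the driver why the executor is going away before calling `System.exit` (a hypothetical sketch; the helper and transport shown are assumptions, not the actual Spark RPC API):

```
import scala.util.control.NonFatal

// Hypothetical sketch of exiting only after telling the driver why.
def exitExecutor(code: Int, reason: String, notifyDriver: Boolean = true): Unit = {
  if (notifyDriver) {
    try {
      // In the real change this is an RPC to the driver, so the UI can show the
      // concrete reason instead of a generic "Remote RPC client disassociated".
      sendToDriver(s"Executor self-exiting due to: $reason")
    } catch {
      case NonFatal(e) =>
        // Best effort only: never let the notification block the exit.
        println(s"Unable to notify the driver: ${e.getMessage}")
    }
  }
  System.exit(code)
}

def sendToDriver(message: String): Unit = println(message) // placeholder transport
```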
      
      ## How was this patch tested?
      
Ran the change in a test environment and took down the shuffle service before the executor could register with it. In the driver logs, where the job failure reason is mentioned (i.e. `Job aborted due to stage ...`), it gives the correct reason:
      
      Before:
      `ExecutorLostFailure (executor ZZZZZZZZZ exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.`
      
      After:
      `ExecutorLostFailure (executor ZZZZZZZZZ exited caused by one of the running tasks) Reason: Unable to create executor due to java.util.concurrent.TimeoutException: Timeout waiting for task.`
      
      Author: Tejas Patil <tejasp@fb.com>
      
      Closes #15013 from tejasapatil/SPARK-17451_inform_driver.
      b4792781
    • Sean Owen's avatar
      [SPARK-17406][BUILD][HOTFIX] MiMa excludes fix · 2ad27695
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Following https://github.com/apache/spark/pull/14969 for some reason the MiMa excludes weren't complete, but still passed the PR builder. This adds 3 more excludes from https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.2/1749/consoleFull
      
      It also moves the excludes to their own Seq in the build, as they probably should have been.
      Even though this is merged to 2.1.x only / master, I left the exclude in for 2.0.x in case we back port. It's a private API so is always a false positive.
      
      ## How was this patch tested?
      
      Jenkins build
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15110 from srowen/SPARK-17406.2.
      2ad27695
    • John Muller's avatar
      [SPARK-17536][SQL] Minor performance improvement to JDBC batch inserts · 71a65825
      John Muller authored
      ## What changes were proposed in this pull request?
      
      Optimize a while loop during batch inserts
      
      ## How was this patch tested?
      
      Unit tests were done, specifically "mvn  test" for sql
      
      Author: John Muller <jmuller@us.imshealth.com>
      
      Closes #15098 from blue666man/SPARK-17536.
      71a65825
    • cenyuhai's avatar
      [SPARK-17406][WEB UI] limit timeline executor events · ad79fc0a
      cenyuhai authored
      ## What changes were proposed in this pull request?
The job page becomes too slow to open when there are thousands of executor events (added or removed). I found that in the ExecutorsTab file, `executorIdToData` never removes elements, so it keeps growing over time. Before this PR, it looks like [timeline1.png](https://issues.apache.org/jira/secure/attachment/12827112/timeline1.png). After this PR, it looks like [timeline2.png](https://issues.apache.org/jira/secure/attachment/12827113/timeline2.png) (we can set how many executor events will be displayed).
      
      Author: cenyuhai <cenyuhai@didichuxing.com>
      
      Closes #14969 from cenyuhai/SPARK-17406.
      ad79fc0a
    • codlife's avatar
      [SPARK-17521] Error when I use sparkContext.makeRDD(Seq()) · 647ee05e
      codlife authored
      ## What changes were proposed in this pull request?
      
When I use `sc.makeRDD` as below:
      ```
      val data3 = sc.makeRDD(Seq())
      println(data3.partitions.length)
      ```
      I got an error:
      Exception in thread "main" java.lang.IllegalArgumentException: Positive number of slices required
      
We can fix this bug by simply modifying the last line to take `seq.size` into account:
      ```
        def makeRDD[T: ClassTag](seq: Seq[(T, Seq[String])]): RDD[T] = withScope {
          assertNotStopped()
          val indexToPrefs = seq.zipWithIndex.map(t => (t._2, t._1._2)).toMap
          new ParallelCollectionRDD[T](this, seq.map(_._1), math.max(seq.size, defaultParallelism), indexToPrefs)
        }
      ```
      
      ## How was this patch tested?
      
       manual tests
      
      Author: codlife <1004910847@qq.com>
      Author: codlife <wangjianfei15@otcaix.iscas.ac.cn>
      
      Closes #15077 from codlife/master.
      647ee05e
    • Adam Roberts's avatar
      [SPARK-17524][TESTS] Use specified spark.buffer.pageSize · f893e262
      Adam Roberts authored
      ## What changes were proposed in this pull request?
      
This PR has the `appendRowUntilExceedingPageSize` test in `RowBasedKeyValueBatchSuite` use whatever `spark.buffer.pageSize` value a user has specified, to prevent a test failure for anyone testing Apache Spark on a box with a reduced page size. The test is currently hardcoded to use the default page size, which is 64 MB, so this minor PR is a test improvement.
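The gist (a sketch; the exact conf plumbing in the suite may differ) is to read the configured page size instead of hardcoding 64 MB:

```
import org.apache.spark.SparkConf

val conf = new SparkConf()
// Use the user-specified page size if one was set, falling back to the 64 MB default.
val pageSizeBytes = conf.getSizeAsBytes("spark.buffer.pageSize", "64m")
// ... the test then appends rows until pageSizeBytes is exceeded, rather than
// assuming the default 64 MB.
```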
      
      ## How was this patch tested?
      Existing unit tests with 1 MB page size and with 64 MB (the default) page size
      
      Author: Adam Roberts <aroberts@uk.ibm.com>
      
      Closes #15079 from a-roberts/patch-5.
      f893e262
    • WeichenXu's avatar
      [SPARK-17507][ML][MLLIB] check weight vector size in ANN · d15b4f90
      WeichenXu authored
      ## What changes were proposed in this pull request?
      
As the TODO described, check the weight vector size and throw an exception if it is wrong.
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #15060 from WeichenXu123/check_input_weight_size_of_ann.
      d15b4f90
    • gatorsmile's avatar
      [SPARK-17440][SPARK-17441] Fixed Multiple Bugs in ALTER TABLE · 6a6adb16
      gatorsmile authored
      ### What changes were proposed in this pull request?
      For the following `ALTER TABLE` DDL, we should issue an exception when the target table is a `VIEW`:
      ```SQL
       ALTER TABLE viewName SET LOCATION '/path/to/your/lovely/heart'
      
       ALTER TABLE viewName SET SERDE 'whatever'
      
       ALTER TABLE viewName SET SERDEPROPERTIES ('x' = 'y')
      
       ALTER TABLE viewName PARTITION (a=1, b=2) SET SERDEPROPERTIES ('x' = 'y')
      
       ALTER TABLE viewName ADD IF NOT EXISTS PARTITION (a='4', b='8')
      
       ALTER TABLE viewName DROP IF EXISTS PARTITION (a='2')
      
       ALTER TABLE viewName RECOVER PARTITIONS
      
       ALTER TABLE viewName PARTITION (a='1', b='q') RENAME TO PARTITION (a='100', b='p')
      ```
      
      In addition, `ALTER TABLE RENAME PARTITION` is unable to handle data source tables, just like the other `ALTER PARTITION` commands. We should issue an exception instead.
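A simplified sketch of the kind of guard involved (names and exception type are illustrative assumptions, not the actual command code):

```
// Hypothetical guard performed by the ALTER TABLE ... commands above.
def assertNotAView(tableType: String, tableName: String, command: String): Unit = {
  if (tableType == "VIEW") {
    throw new IllegalArgumentException(
      s"Cannot run '$command' on $tableName: operation is not allowed on a VIEW")
  }
}
```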
      
      ### How was this patch tested?
      Added a few test cases.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15004 from gatorsmile/altertable.
      6a6adb16
  3. Sep 14, 2016
    • Xing SHI's avatar
      [SPARK-17465][SPARK CORE] Inappropriate memory management in... · bb322943
      Xing SHI authored
      [SPARK-17465][SPARK CORE] Inappropriate memory management in `org.apache.spark.storage.MemoryStore` may lead to memory leak
      
The check `if (memoryMap(taskAttemptId) == 0) memoryMap.remove(taskAttemptId)` in the methods `releaseUnrollMemoryForThisTask` and `releasePendingUnrollMemoryForThisTask` should be performed after the memory release operation, whether `memoryToRelease` is > 0 or not.
      
      If the memory of a task has been set to 0 when calling a `releaseUnrollMemoryForThisTask` or a `releasePendingUnrollMemoryForThisTask` method, the key in the memory map corresponding to that task will never be removed from the hash map.
      
      See the details in [SPARK-17465](https://issues.apache.org/jira/browse/SPARK-17465).
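Roughly, the fix is to perform the release first and only then check whether the task's entry has dropped to zero (a simplified sketch using a plain map, not the actual MemoryStore code):

```
import scala.collection.mutable

// Simplified sketch of the per-task unroll memory bookkeeping.
val memoryMap = mutable.HashMap[Long, Long]()

def releaseUnrollMemoryForTask(taskAttemptId: Long, memoryToRelease: Long): Unit = {
  memoryMap.get(taskAttemptId).foreach { current =>
    if (memoryToRelease > 0) {
      memoryMap(taskAttemptId) = current - memoryToRelease
      // ... the real code also returns the memory to the memory manager here ...
    }
    // Check for zero after the release, regardless of memoryToRelease, so that
    // entries which reach (or already are at) 0 are always removed from the map.
    if (memoryMap(taskAttemptId) == 0) memoryMap.remove(taskAttemptId)
  }
}
```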
      
      Author: Xing SHI <shi-kou@indetail.co.jp>
      
      Closes #15022 from saturday-shi/SPARK-17465.
      bb322943
    • Eric Liang's avatar
      [SPARK-17472] [PYSPARK] Better error message for serialization failures of large objects in Python · dbfc7aa4
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      For large objects, pickle does not raise useful error messages. However, we can wrap them to be slightly more user friendly:
      
      Example 1:
      ```
      def run():
        import numpy.random as nr
        b = nr.bytes(8 * 1000000000)
        sc.parallelize(range(1000), 1000).map(lambda x: len(b)).count()
      
      run()
      ```
      
      Before:
      ```
      error: 'i' format requires -2147483648 <= number <= 2147483647
      ```
      
      After:
      ```
      pickle.PicklingError: Object too large to serialize: 'i' format requires -2147483648 <= number <= 2147483647
      ```
      
      Example 2:
      ```
      def run():
        import numpy.random as nr
        b = sc.broadcast(nr.bytes(8 * 1000000000))
        sc.parallelize(range(1000), 1000).map(lambda x: len(b.value)).count()
      
      run()
      ```
      
      Before:
      ```
      SystemError: error return without exception set
      ```
      
      After:
      ```
      cPickle.PicklingError: Could not serialize broadcast: SystemError: error return without exception set
      ```
      
      ## How was this patch tested?
      
      Manually tried out these cases
      
      cc davies
      
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #15026 from ericl/spark-17472.
      dbfc7aa4
    • Shixiong Zhu's avatar
      [SPARK-17463][CORE] Make CollectionAccumulator and SetAccumulator's value can be read thread-safely · e33bfaed
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
Make the values of CollectionAccumulator and SetAccumulator readable in a thread-safe way, to fix the ConcurrentModificationException reported in [JIRA](https://issues.apache.org/jira/browse/SPARK-17463).
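The essence of the change (a sketch; the real accumulator classes have more machinery) is to hand out a synchronized copy of the underlying collection rather than the live one:

```
import java.util.{ArrayList => JArrayList, List => JList}

// Sketch of the idea: readers get a synchronized copy, never the live collection.
class CollectionAccumulatorSketch[T] {
  private val list: JList[T] = new JArrayList[T]()

  def add(v: T): Unit = synchronized { list.add(v) }

  // Copy under the lock so a concurrent add() cannot trigger
  // a ConcurrentModificationException in the reader.
  def value: JList[T] = synchronized { new JArrayList[T](list) }
}
```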
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15063 from zsxwing/SPARK-17463.
      e33bfaed
    • Kishor Patil's avatar
      [SPARK-17511] Yarn Dynamic Allocation: Avoid marking released container as Failed · ff6e4cbd
      Kishor Patil authored
      ## What changes were proposed in this pull request?
      
Due to race conditions, the `assert(numExecutorsRunning <= targetNumExecutors)` can fail, causing an `AssertionError`. So the assertion was removed and the conditional check was instead moved before launching a new container:
      ```
      java.lang.AssertionError: assertion failed
              at scala.Predef$.assert(Predef.scala:156)
              at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1.org$apache$spark$deploy$yarn$YarnAllocator$$anonfun$$updateInternalState$1(YarnAllocator.scala:489)
              at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1$$anon$1.run(YarnAllocator.scala:519)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
              at java.lang.Thread.run(Thread.java:745)
      ```
      ## How was this patch tested?
      This was manually tested using a large ForkAndJoin job with Dynamic Allocation enabled to validate the failing job succeeds, without any such exception.
      
      Author: Kishor Patil <kpatil@yahoo-inc.com>
      
      Closes #15069 from kishorvpatil/SPARK-17511.
      ff6e4cbd
    • Xin Wu's avatar
      [SPARK-10747][SQL] Support NULLS FIRST|LAST clause in ORDER BY · 040e4697
      Xin Wu authored
      ## What changes were proposed in this pull request?
Currently, the ORDER BY clause returns null values according to the sorting order (ASC|DESC), treating a null value as always smaller than non-null values.
However, the SQL:2003 standard supports NULLS FIRST and NULLS LAST to let users specify whether null values should be returned first or last, regardless of the sorting order (ASC|DESC).
      
      This PR is to support this new feature.
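For example, with this feature a query can force nulls to the end of an ascending sort (an illustrative query against an assumed table `t`):

```
// Ascending sort, but nulls are returned last instead of in the default first position.
spark.sql("SELECT name, age FROM t ORDER BY age ASC NULLS LAST").show()

// Descending sort with nulls returned first.
spark.sql("SELECT name, age FROM t ORDER BY age DESC NULLS FIRST").show()
```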
      
      ## How was this patch tested?
      New test cases are added to test NULLS FIRST|LAST for regular select queries and windowing queries.
      
      Author: Xin Wu <xinwu@us.ibm.com>
      
      Closes #14842 from xwu0226/SPARK-10747.
      040e4697
    • hyukjinkwon's avatar
      [MINOR][SQL] Add missing functions for some options in SQLConf and use them where applicable · a79838bd
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      I first thought they are missing because they are kind of hidden options but it seems they are just missing.
      
For example, `spark.sql.parquet.mergeSchema` is documented in [sql-programming-guide.md](https://github.com/apache/spark/blob/master/docs/sql-programming-guide.md) but its accessor function is missing, whereas many options such as `spark.sql.join.preferSortMergeJoin` are not documented but do have their own functions.
      
So, this PR suggests making them consistent by adding the missing functions for some options in `SQLConf` and using them where applicable, in order to make the code more readable.
      
      ## How was this patch tested?
      
      Existing tests should cover this.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #14678 from HyukjinKwon/sqlconf-cleanup.
      a79838bd
    • Josh Rosen's avatar
      [SPARK-17514] df.take(1) and df.limit(1).collect() should perform the same in Python · 6d06ff6f
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
      In PySpark, `df.take(1)` runs a single-stage job which computes only one partition of the DataFrame, while `df.limit(1).collect()` computes all partitions and runs a two-stage job. This difference in performance is confusing.
      
      The reason why `limit(1).collect()` is so much slower is that `collect()` internally maps to `df.rdd.<some-pyspark-conversions>.toLocalIterator`, which causes Spark SQL to build a query where a global limit appears in the middle of the plan; this, in turn, ends up being executed inefficiently because limits in the middle of plans are now implemented by repartitioning to a single task rather than by running a `take()` job on the driver (this was done in #7334, a patch which was a prerequisite to allowing partition-local limits to be pushed beneath unions, etc.).
      
      In order to fix this performance problem I think that we should generalize the fix from SPARK-10731 / #8876 so that `DataFrame.collect()` also delegates to the Scala implementation and shares the same performance properties. This patch modifies `DataFrame.collect()` to first collect all results to the driver and then pass them to Python, allowing this query to be planned using Spark's `CollectLimit` optimizations.
      
      ## How was this patch tested?
      
      Added a regression test in `sql/tests.py` which asserts that the expected number of jobs, stages, and tasks are run for both queries.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15068 from JoshRosen/pyspark-collect-limit.
      6d06ff6f
    • gatorsmile's avatar
      [SPARK-17409][SQL] Do Not Optimize Query in CTAS More Than Once · 52738d4e
      gatorsmile authored
      ### What changes were proposed in this pull request?
      As explained in https://github.com/apache/spark/pull/14797:
>Some analyzer rules have assumptions on logical plans, and the optimizer may break these assumptions; we should not pass an optimized query plan into QueryExecution (it will be analyzed again), otherwise we may hit some weird bugs.
For example, we have a rule for decimal calculation to promote the precision before binary operations, using PromotePrecision as a placeholder to indicate that this rule should not apply twice. But an Optimizer rule will remove this placeholder, which breaks the assumption; the rule is then applied twice and causes a wrong result.
      
      We should not optimize the query in CTAS more than once. For example,
      ```Scala
      spark.range(99, 101).createOrReplaceTempView("tab1")
      val sqlStmt = "SELECT id, cast(id as long) * cast('1.0' as decimal(38, 18)) as num FROM tab1"
      sql(s"CREATE TABLE tab2 USING PARQUET AS $sqlStmt")
      checkAnswer(spark.table("tab2"), sql(sqlStmt))
      ```
      Before this PR, the results do not match
      ```
      == Results ==
      !== Correct Answer - 2 ==       == Spark Answer - 2 ==
      ![100,100.000000000000000000]   [100,null]
       [99,99.000000000000000000]     [99,99.000000000000000000]
      ```
      After this PR, the results match.
      ```
      +---+----------------------+
      |id |num                   |
      +---+----------------------+
      |99 |99.000000000000000000 |
      |100|100.000000000000000000|
      +---+----------------------+
      ```
      
      In this PR, we do not treat the `query` in CTAS as a child. Thus, the `query` will not be optimized when optimizing CTAS statement. However, we still need to analyze it for normalizing and verifying the CTAS in the Analyzer. Thus, we do it in the analyzer rule `PreprocessDDL`, because so far only this rule needs the analyzed plan of the `query`.
      
      ### How was this patch tested?
      Added a test
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15048 from gatorsmile/ctasOptimized.
      52738d4e
    • Sean Owen's avatar
      [SPARK-17445][DOCS] Reference an ASF page as the main place to find third-party packages · dc0a4c91
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Point references to spark-packages.org to https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects
      
      This will be accompanied by a parallel change to the spark-website repo, and additional changes to this wiki.
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15075 from srowen/SPARK-17445.
      dc0a4c91
    • Ergin Seyfe's avatar
      [SPARK-17480][SQL] Improve performance by removing or caching List.length which is O(n) · 4cea9da2
      Ergin Seyfe authored
      ## What changes were proposed in this pull request?
Scala's `List.length` method is O(N), and it makes the `gatherCompressibilityStats` function O(N^2). Eliminate the `List.length` calls by writing the code in a more idiomatic Scala way.
      
      https://github.com/scala/scala/blob/2.10.x/src/library/scala/collection/LinearSeqOptimized.scala#L36
      
As suggested, the fix was extended to the HiveInspectors and AggregationIterator classes as well.
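For illustration (a generic example, not the actual Spark code), repeatedly calling `length` on a `List` inside a loop turns a linear pass into a quadratic one; iterating directly avoids it:

```
// O(N^2): xs.length walks the whole list on every loop iteration.
def sumSlow(xs: List[Int]): Int = {
  var total = 0
  var i = 0
  while (i < xs.length) { // O(N) length call, executed N times
    total += xs(i)        // indexing a List is also O(i)
    i += 1
  }
  total
}

// O(N): a single traversal with no length calls.
def sumFast(xs: List[Int]): Int = {
  var total = 0
  var rest = xs
  while (rest.nonEmpty) {
    total += rest.head
    rest = rest.tail
  }
  total
}
```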
      
      ## How was this patch tested?
Profiled a Spark job and found that CompressibleColumnBuilder is using 39% of the CPU. Out of this 39%, CompressibleColumnBuilder->gatherCompressibilityStats is using 23% of it. 6.24% of the CPU is spent on List.length, which is called inside gatherCompressibilityStats.
      
      After this change we started to save 6.24% of the CPU.
      
      Author: Ergin Seyfe <eseyfe@fb.com>
      
      Closes #15032 from seyfe/gatherCompressibilityStats.
      4cea9da2
    • wm624@hotmail.com's avatar
      [CORE][DOC] remove redundant comment · 18b4f035
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
In the comment, there is a redundant `the estimated`.

This PR simply removes the redundant wording and adjusts the formatting.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #15091 from wangmiao1981/comment.
      18b4f035
    • Sami Jaktholm's avatar
      [SPARK-17525][PYTHON] Remove SparkContext.clearFiles() from the PySpark API as... · b5bfcddb
      Sami Jaktholm authored
      [SPARK-17525][PYTHON] Remove SparkContext.clearFiles() from the PySpark API as it was removed from the Scala API prior to Spark 2.0.0
      
      ## What changes were proposed in this pull request?
      
      This pull request removes the SparkContext.clearFiles() method from the PySpark API as the method was removed from the Scala API in 8ce645d4. Using that method in PySpark leads to an exception as PySpark tries to call the non-existent method on the JVM side.
      
      ## How was this patch tested?
      
      Existing tests (though none of them tested this particular method).
      
      Author: Sami Jaktholm <sjakthol@outlook.com>
      
      Closes #15081 from sjakthol/pyspark-sc-clearfiles.
      b5bfcddb
    • Jagadeesan's avatar
      [SPARK-17449][DOCUMENTATION] Relation between heartbeatInterval and… · def7c265
      Jagadeesan authored
      ## What changes were proposed in this pull request?
      
      The relation between spark.network.timeout and spark.executor.heartbeatInterval should be mentioned in the document.
      
      … network timeout]
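For instance, the timeout should comfortably exceed the heartbeat interval so that a single slow heartbeat does not drop an executor (the values below are illustrative, not recommendations):

```
import org.apache.spark.SparkConf

val conf = new SparkConf()
  // Executors send heartbeats to the driver at this interval...
  .set("spark.executor.heartbeatInterval", "10s")
  // ...so the network timeout must be significantly larger than the interval,
  // otherwise a missed heartbeat can cause the executor to be considered lost.
  .set("spark.network.timeout", "120s")
```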
      
      Author: Jagadeesan <as2@us.ibm.com>
      
      Closes #15042 from jagadeesanas2/SPARK-17449.
      def7c265
  4. Sep 13, 2016
    • junyangq's avatar
      [SPARK-17317][SPARKR] Add SparkR vignette · a454a4d8
      junyangq authored
      ## What changes were proposed in this pull request?
      
This PR tries to add a SparkR vignette, which serves as a friendly guide going through the functionality provided by SparkR.
      
      ## How was this patch tested?
      
      Manual test.
      
      Author: junyangq <qianjunyang@gmail.com>
      Author: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
      Author: Junyang Qian <junyangq@databricks.com>
      
      Closes #14980 from junyangq/SPARKR-vignette.
      a454a4d8
    • gatorsmile's avatar
      [SPARK-17530][SQL] Add Statistics into DESCRIBE FORMATTED · 37b93f54
      gatorsmile authored
      ### What changes were proposed in this pull request?
      Statistics is missing in the output of `DESCRIBE FORMATTED`. This PR is to add it. After the PR, the output will be like:
      ```
      +----------------------------+----------------------------------------------------------------------------------------------------------------------+-------+
      |col_name                    |data_type                                                                                                             |comment|
      +----------------------------+----------------------------------------------------------------------------------------------------------------------+-------+
      |key                         |string                                                                                                                |null   |
      |value                       |string                                                                                                                |null   |
      |                            |                                                                                                                      |       |
      |# Detailed Table Information|                                                                                                                      |       |
      |Database:                   |default                                                                                                               |       |
      |Owner:                      |xiaoli                                                                                                                |       |
      |Create Time:                |Tue Sep 13 14:36:57 PDT 2016                                                                                          |       |
      |Last Access Time:           |Wed Dec 31 16:00:00 PST 1969                                                                                          |       |
      |Location:                   |file:/private/var/folders/4b/sgmfldk15js406vk7lw5llzw0000gn/T/warehouse-9982e1db-df17-4376-a140-dbbee0203d83/texttable|       |
      |Table Type:                 |MANAGED                                                                                                               |       |
      |Statistics:                 |sizeInBytes=5812, rowCount=500, isBroadcastable=false                                                                 |       |
      |Table Parameters:           |                                                                                                                      |       |
      |  rawDataSize               |-1                                                                                                                    |       |
      |  numFiles                  |1                                                                                                                     |       |
      |  transient_lastDdlTime     |1473802620                                                                                                            |       |
      |  totalSize                 |5812                                                                                                                  |       |
      |  COLUMN_STATS_ACCURATE     |false                                                                                                                 |       |
      |  numRows                   |-1                                                                                                                    |       |
      |                            |                                                                                                                      |       |
      |# Storage Information       |                                                                                                                      |       |
      |SerDe Library:              |org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe                                                                    |       |
      |InputFormat:                |org.apache.hadoop.mapred.TextInputFormat                                                                              |       |
      |OutputFormat:               |org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat                                                            |       |
      |Compressed:                 |No                                                                                                                    |       |
      |Storage Desc Parameters:    |                                                                                                                      |       |
      |  serialization.format      |1                                                                                                                     |       |
      +----------------------------+----------------------------------------------------------------------------------------------------------------------+-------+
      ```
      
      Also improve the output of statistics in `DESCRIBE EXTENDED` by removing duplicate `Statistics`. Below is the example after the PR:
      
      ```
      +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
      |col_name                    |data_type                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |comment|
      +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
      |key                         |string                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |null   |
      |value                       |string                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |null   |
      |                            |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |       |
      |# Detailed Table Information|CatalogTable(
      	Table: `default`.`texttable`
      	Owner: xiaoli
      	Created: Tue Sep 13 14:38:43 PDT 2016
      	Last Access: Wed Dec 31 16:00:00 PST 1969
      	Type: MANAGED
      	Schema: [StructField(key,StringType,true), StructField(value,StringType,true)]
      	Provider: hive
      	Properties: [rawDataSize=-1, numFiles=1, transient_lastDdlTime=1473802726, totalSize=5812, COLUMN_STATS_ACCURATE=false, numRows=-1]
      	Statistics: sizeInBytes=5812, rowCount=500, isBroadcastable=false
      	Storage(Location: file:/private/var/folders/4b/sgmfldk15js406vk7lw5llzw0000gn/T/warehouse-8ea5c5a0-5680-4778-91cb-c6334cf8a708/texttable, InputFormat: org.apache.hadoop.mapred.TextInputFormat, OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, Serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Properties: [serialization.format=1]))|       |
      +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
      ```
      
      ### How was this patch tested?
      Manually tested.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15083 from gatorsmile/descFormattedStats.
      37b93f54
    • Burak Yavuz's avatar
      [SPARK-17531] Don't initialize Hive Listeners for the Execution Client · 72edc7e9
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
If a user provides listeners inside the Hive Conf, the configuration for these listeners is passed to the Hive Execution Client as well. This may cause issues for two reasons:
      1. The Execution Client will actually generate garbage
      2. The listener class needs to be both in the Spark Classpath and Hive Classpath
      
      This PR empties the listener configurations in `HiveUtils.newTemporaryConfiguration` so that the execution client will not contain the listener confs, but the metadata client will.
      
      ## How was this patch tested?
      
      Unit tests
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #15086 from brkyvz/null-listeners.
      72edc7e9
    • jiangxingbo's avatar
      [SPARK-17142][SQL] Complex query triggers binding error in HashAggregateExec · 4ba63b19
      jiangxingbo authored
      ## What changes were proposed in this pull request?
      
In the `ReorderAssociativeOperator` rule, we extract foldable expressions from Add/Multiply arithmetic and replace them with the evaluated literal. For example, `(a + 1) + (b + 2)` is optimized to `(a + b + 3)` by this rule.
For an aggregate operator, output expressions should be derived from groupingExpressions; the current implementation of the `ReorderAssociativeOperator` rule may break this promise. An instance could be:
      ```
      SELECT
        ((t1.a + 1) + (t2.a + 2)) AS out_col
      FROM
        testdata2 AS t1
      INNER JOIN
        testdata2 AS t2
      ON
        (t1.a = t2.a)
      GROUP BY (t1.a + 1), (t2.a + 2)
      ```
      `((t1.a + 1) + (t2.a + 2))` is optimized to `(t1.a + t2.a + 3)`, which could not be derived from `ExpressionSet((t1.a +1), (t2.a + 2))`.
      Maybe we should improve the rule of `ReorderAssociativeOperator` by adding a GroupingExpressionSet to keep Aggregate.groupingExpressions, and respect these expressions during the optimize stage.
      
      ## How was this patch tested?
      
      Add new test case in `ReorderAssociativeOperatorSuite`.
      
      Author: jiangxingbo <jiangxb1987@gmail.com>
      
      Closes #14917 from jiangxb1987/rao.
      4ba63b19
    • Josh Rosen's avatar
      [SPARK-17515] CollectLimit.execute() should perform per-partition limits · 3f6a2bb3
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
CollectLimit.execute() incorrectly omits per-partition limits, leading to performance regressions when this code path is hit (which should not happen in normal operation, but can occur in some cases; see #15068 for one example).
      
      ## How was this patch tested?
      
      Regression test in SQLQuerySuite that asserts the number of records scanned from the input RDD.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15070 from JoshRosen/SPARK-17515.
      3f6a2bb3
    • hyukjinkwon's avatar
      [BUILD] Closing some stale PRs and ones suggested to be closed by committer(s) · 46f5c201
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to close some stale PRs and ones suggested to be closed by committer(s)
      
      Closes #10052
      Closes #11079
      Closes #12661
      Closes #12772
      Closes #12958
      Closes #12990
      Closes #13409
      Closes #13779
      Closes #13811
      Closes #14577
      Closes #14714
      Closes #14875
      Closes #15020
      
      ## How was this patch tested?
      
      N/A
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15057 from HyukjinKwon/closing-stale-pr.
      46f5c201
  5. Sep 12, 2016
    • Davies Liu's avatar
      [SPARK-17474] [SQL] fix python udf in TakeOrderedAndProjectExec · a91ab705
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
When there is any Python UDF in the Project between Sort and Limit, it will be collected into TakeOrderedAndProjectExec; ExtractPythonUDFs fails to pull the Python UDFs out because QueryPlan.expressions does not include the expressions inside Option[Seq[Expression]].

Ideally, we should fix `QueryPlan.expressions`, but I tried that with no luck (it always runs into an infinite loop). In this PR, I changed TakeOrderedAndProjectExec to not use Option[Seq[Expression]] to work around it. cc JoshRosen
      
      ## How was this patch tested?
      
      Added regression test.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15030 from davies/all_expr.
      a91ab705
    • Josh Rosen's avatar
      [SPARK-17485] Prevent failed remote reads of cached blocks from failing entire job · f9c580f1
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
      In Spark's `RDD.getOrCompute` we first try to read a local copy of a cached RDD block, then a remote copy, and only fall back to recomputing the block if no cached copy (local or remote) can be read. This logic works correctly in the case where no remote copies of the block exist, but if there _are_ remote copies and reads of those copies fail (due to network issues or internal Spark bugs) then the BlockManager will throw a `BlockFetchException` that will fail the task (and which could possibly fail the whole job if the read failures keep occurring).
      
      In the cases of TorrentBroadcast and task result fetching we really do want to fail the entire job in case no remote blocks can be fetched, but this logic is inappropriate for reads of cached RDD blocks because those can/should be recomputed in case cached blocks are unavailable.
      
      Therefore, I think that the `BlockManager.getRemoteBytes()` method should never throw on remote fetch errors and, instead, should handle failures by returning `None`.
      
      ## How was this patch tested?
      
      Block manager changes should be covered by modified tests in `BlockManagerSuite`: the old tests expected exceptions to be thrown on failed remote reads, while the modified tests now expect `None` to be returned from the `getRemote*` method.
      
      I also manually inspected all usages of `BlockManager.getRemoteValues()`, `getRemoteBytes()`, and `get()` to verify that they correctly pattern-match on the result and handle `None`. Note that these `None` branches are already exercised because the old `getRemoteBytes` returned `None` when no remote locations for the block could be found (which could occur if an executor died and its block manager de-registered with the master).
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15037 from JoshRosen/SPARK-17485.
      f9c580f1