  1. Apr 13, 2017
    • Yash Sharma's avatar
      [SPARK-20189][DSTREAM] Fix spark kinesis testcases to remove deprecated... · ec68d8f8
      Yash Sharma authored
      [SPARK-20189][DSTREAM] Fix spark kinesis testcases to remove deprecated createStream and use Builders
      
      ## What changes were proposed in this pull request?
      
      The spark-kinesis testcases use KinesisUtils.createStream, which is now deprecated. This change modifies the testcases to use the recommended KinesisInputDStream.builder instead.
      It also enables the testcases to pick up session tokens automatically.
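
      For reference, a minimal sketch of the builder style the testcases move to; the values below are placeholders, not the suite's actual configuration:

      ```scala
      import org.apache.spark.SparkConf
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.kinesis.KinesisInputDStream
      import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream

      // Illustrative values only; these are not the ones the testcases use.
      val ssc = new StreamingContext(
        new SparkConf().setAppName("kinesis-example").setMaster("local[2]"), Seconds(1))

      // Builder-style construction replacing the deprecated KinesisUtils.createStream call.
      val stream = KinesisInputDStream.builder
        .streamingContext(ssc)
        .streamName("my-kinesis-stream")
        .endpointUrl("https://kinesis.us-west-2.amazonaws.com")
        .regionName("us-west-2")
        .initialPositionInStream(InitialPositionInStream.LATEST)
        .checkpointAppName("my-app")
        .checkpointInterval(Seconds(10))
        .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
        .build()
      ```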
      
      ## How was this patch tested?
      
      All the existing testcases work fine as expected with the changes.
      
      https://issues.apache.org/jira/browse/SPARK-20189
      
      Author: Yash Sharma <ysharma@atlassian.com>
      
      Closes #17506 from yssharma/ysharma/cleanup_kinesis_testcases.
      ec68d8f8
  2. Apr 12, 2017
    • Shixiong Zhu's avatar
      [SPARK-20131][CORE] Don't use `this` lock in StandaloneSchedulerBackend.stop · c5f1cc37
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      `o.a.s.streaming.StreamingContextSuite.SPARK-18560 Receiver data should be deserialized properly` is flaky because there is a potential deadlock in StandaloneSchedulerBackend which causes `await` to time out. Here is the related stack trace:
      ```
      "Thread-31" #211 daemon prio=5 os_prio=31 tid=0x00007fedd4808000 nid=0x16403 waiting on condition [0x00007000239b7000]
         java.lang.Thread.State: TIMED_WAITING (parking)
      	at sun.misc.Unsafe.park(Native Method)
      	- parking to wait for  <0x000000079b49ca10> (a scala.concurrent.impl.Promise$CompletionLatch)
      	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
      	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
      	at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
      	at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
      	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
      	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
      	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
      	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
      	at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
      	at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:76)
      	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stop(CoarseGrainedSchedulerBackend.scala:402)
      	at org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend.org$apache$spark$scheduler$cluster$StandaloneSchedulerBackend$$stop(StandaloneSchedulerBackend.scala:213)
      	- locked <0x00000007066fca38> (a org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend)
      	at org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend.stop(StandaloneSchedulerBackend.scala:116)
      	- locked <0x00000007066fca38> (a org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend)
      	at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:517)
      	at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1657)
      	at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1921)
      	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1302)
      	at org.apache.spark.SparkContext.stop(SparkContext.scala:1920)
      	at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:708)
      	at org.apache.spark.streaming.StreamingContextSuite$$anonfun$43$$anonfun$apply$mcV$sp$66$$anon$3.run(StreamingContextSuite.scala:827)
      
      "dispatcher-event-loop-3" #18 daemon prio=5 os_prio=31 tid=0x00007fedd603a000 nid=0x6203 waiting for monitor entry [0x0000700003be4000]
         java.lang.Thread.State: BLOCKED (on object monitor)
      	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.org$apache$spark$scheduler$cluster$CoarseGrainedSchedulerBackend$DriverEndpoint$$makeOffers(CoarseGrainedSchedulerBackend.scala:253)
      	- waiting to lock <0x00000007066fca38> (a org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend)
      	at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anonfun$receive$1.applyOrElse(CoarseGrainedSchedulerBackend.scala:124)
      	at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:117)
      	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
      	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
      	at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:213)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      	at java.lang.Thread.run(Thread.java:745)
      ```
      
      This PR removes `synchronized` and changes `stopping` to an AtomicBoolean so that `stop` is idempotent, fixing the deadlock.
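
      The idea, as a minimal sketch rather than the actual StandaloneSchedulerBackend code:

      ```scala
      import java.util.concurrent.atomic.AtomicBoolean

      class Backend {
        private val stopping = new AtomicBoolean(false)

        // No `synchronized` needed: compareAndSet makes the shutdown path idempotent,
        // so the dispatcher thread is never blocked waiting on the backend's monitor.
        def stop(): Unit = {
          if (stopping.compareAndSet(false, true)) {
            // ... ask the driver endpoint to stop, release resources, etc.
          }
        }
      }
      ```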
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #17610 from zsxwing/SPARK-20131.
      c5f1cc37
    • Wenchen Fan's avatar
      [SPARK-15354][FLAKY-TEST] TopologyAwareBlockReplicationPolicyBehavior.Peers in 2 racks · a7b430b5
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      `TopologyAwareBlockReplicationPolicyBehavior.Peers in 2 racks` is failing occasionally: https://spark-tests.appspot.com/test-details?suite_name=org.apache.spark.storage.TopologyAwareBlockReplicationPolicyBehavior&test_name=Peers+in+2+racks.
      
      This is because, when we generate 10 block manager ids to test, they may all belong to the same rack, as the rack is picked randomly. This PR fixes the problem by forcing each rack to be picked at least once.
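
      The idea, as a hypothetical sketch (names are illustrative, not the suite's actual code):

      ```scala
      import scala.util.Random

      // Hypothetical helper: choose a rack for each of `count` ids so that every rack
      // in `racks` appears at least once, instead of sampling purely at random.
      def pickRacks(racks: Seq[String], count: Int): Seq[String] = {
        require(count >= racks.size)
        Random.shuffle(racks ++ Seq.fill(count - racks.size)(racks(Random.nextInt(racks.size))))
      }
      ```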
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17624 from cloud-fan/test.
      a7b430b5
    • Burak Yavuz's avatar
      [SPARK-20301][FLAKY-TEST] Fix Hadoop Shell.runCommand flakiness in Structured Streaming tests · 924c4247
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      Some Structured Streaming tests show flakiness such as:
      ```
      [info] - prune results by current_date, complete mode - 696 *** FAILED *** (10 seconds, 937 milliseconds)
      [info]   Timed out while stopping and waiting for microbatchthread to terminate.: The code passed to failAfter did not complete within 10 seconds.
      ```
      
      This happens when we wait for the stream to stop, but it doesn't. The reason it doesn't stop is that we interrupt the microBatchThread, but Hadoop's `Shell.runCommand` swallows the interrupt exception, and the exception is not propagated upstream to the microBatchThread. Then this thread continues to run, only to start blocking on the `streamManualClock`.
      
      ## How was this patch tested?
      
      Thousand retries locally and [Jenkins](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75720/testReport) of the flaky tests
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #17613 from brkyvz/flaky-stream-agg.
      924c4247
    • Jeff Zhang's avatar
      [SPARK-19570][PYSPARK] Allow to disable hive in pyspark shell · 99a94731
      Jeff Zhang authored
      ## What changes were proposed in this pull request?
      
      SPARK-15236 did this for the Scala shell; this ticket is for the PySpark shell. It is useful not only for PySpark itself, but also benefits downstream projects like Livy, which use shell.py for their interactive sessions. For now, Livy has no control over whether Hive is enabled or not.
      
      ## How was this patch tested?
      
      I didn't find a way to add a test for it, so I tested it manually:
      Run `bin/pyspark --master local --conf spark.sql.catalogImplementation=in-memory` and verify that Hive is not enabled.
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #16906 from zjffdu/SPARK-19570.
      99a94731
    • Reynold Xin's avatar
      [SPARK-20304][SQL] AssertNotNull should not include path in string representation · 54085538
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      AssertNotNull's toString/simpleString dumps the entire walkedTypePath. walkedTypePath is used for error message reporting and shouldn't be part of the output.
      
      ## How was this patch tested?
      Manually tested.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17616 from rxin/SPARK-20304.
      54085538
    • Xiao Li's avatar
      [SPARK-20303][SQL] Rename createTempFunction to registerFunction · 504e62e2
      Xiao Li authored
      ### What changes were proposed in this pull request?
      The session catalog API `createTempFunction` is used by Hive built-in functions, persistent functions, and temporary functions, so the name is confusing. This PR renames it to `registerFunction`. We can also move construction of `FunctionBuilder` and `ExpressionInfo` into the new `registerFunction`, instead of duplicating that logic everywhere.
      
      In the next PRs, the remaining Function-related APIs also need cleanups.
      
      ### How was this patch tested?
      Existing test cases.
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #17615 from gatorsmile/cleanupCreateTempFunction.
      504e62e2
    • hyukjinkwon's avatar
      [SPARK-18692][BUILD][DOCS] Test Java 8 unidoc build on Jenkins · ceaf77ae
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to run Spark unidoc to test the Javadoc 8 build, as Javadoc 8 is easily broken again.
      
      There are several problems with it:
      
      - It adds a little extra time to run the tests. In my case, it took about 1.5 minutes more (`Elapsed :[94.8746569157]`). How it was tested is described in "How was this patch tested?".
      
      - > One problem that I noticed was that Unidoc appeared to be processing test sources: if we can find a way to exclude those from being processed in the first place then that might significantly speed things up.
      
        (see  joshrosen's [comment](https://issues.apache.org/jira/browse/SPARK-18692?focusedCommentId=15947627&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15947627))
      
      To complete this automated build, this PR also fixes existing Javadoc breaks, including ones introduced by test code as described above.
      
      These fixes are similar to instances fixed previously. Please refer to https://github.com/apache/spark/pull/15999 and https://github.com/apache/spark/pull/16013.
      
      Note that this only fixes **errors** not **warnings**. Please see my observation https://github.com/apache/spark/pull/17389#issuecomment-288438704 for spurious errors by warnings.
      
      ## How was this patch tested?
      
      Manually via `jekyll build` for building tests. Also, tested via running `./dev/run-tests`.
      
      This was tested via manually adding `time.time()` as below:
      
      ```diff
           profiles_and_goals = build_profiles + sbt_goals
      
           print("[info] Building Spark unidoc (w/Hive 1.2.1) using SBT with these arguments: ",
                 " ".join(profiles_and_goals))
      
      +    import time
      +    st = time.time()
           exec_sbt(profiles_and_goals)
      +    print("Elapsed :[%s]" % str(time.time() - st))
      ```
      
      produces
      
      ```
      ...
      ========================================================================
      Building Unidoc API Documentation
      ========================================================================
      ...
      [info] Main Java API documentation successful.
      ...
      Elapsed :[94.8746569157]
      ...
      ```
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17477 from HyukjinKwon/SPARK-18692.
      ceaf77ae
    • jtoka's avatar
      [SPARK-20296][TRIVIAL][DOCS] Count distinct error message for streaming · 2e1fd46e
      jtoka authored
      ## What changes were proposed in this pull request?
      Update count distinct error message for streaming datasets/dataframes to match current behavior. These aggregations are not yet supported, regardless of whether the dataset/dataframe is aggregated.
      
      Author: jtoka <jason.tokayer@gmail.com>
      
      Closes #17609 from jtoka/master.
      2e1fd46e
    • Reynold Xin's avatar
      [SPARK-20302][SQL] Short circuit cast when from and to types are structurally the same · ffc57b01
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      When we perform a cast expression and the from and to types are structurally the same (having the same structure but different field names), we should be able to skip the actual cast.
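
      As a simplified illustration of what "structurally the same" means here (a hypothetical helper, not the PR's actual code):

      ```scala
      import org.apache.spark.sql.types._

      // Two types are "structurally the same" if they only differ in field names,
      // in which case a Cast between them can be skipped.
      def sameStructure(from: DataType, to: DataType): Boolean = (from, to) match {
        case (f: StructType, t: StructType) =>
          f.length == t.length &&
            f.fields.zip(t.fields).forall { case (a, b) => sameStructure(a.dataType, b.dataType) }
        case (f: ArrayType, t: ArrayType) =>
          f.containsNull == t.containsNull && sameStructure(f.elementType, t.elementType)
        case _ => from == to
      }
      ```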
      
      ## How was this patch tested?
      Added unit tests for the newly introduced functions.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17614 from rxin/SPARK-20302.
      ffc57b01
    • Brendan Dwyer's avatar
      [SPARK-20298][SPARKR][MINOR] fixed spelling mistake "charactor" · 044f7ecb
      Brendan Dwyer authored
      ## What changes were proposed in this pull request?
      
      Fixed spelling of "charactor"
      
      ## How was this patch tested?
      
      Spelling change only
      
      Author: Brendan Dwyer <brendan.dwyer@ibm.com>
      
      Closes #17611 from bdwyer2/SPARK-20298.
      044f7ecb
    • hyukjinkwon's avatar
      [MINOR][DOCS] JSON APIs related documentation fixes · bca4259f
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes corrections related to JSON APIs as below:
      
      - Rendering links in Python documentation
      - Replacing `RDD` with `Dataset` in the programming guide
      - Adding the missing description about JSON Lines consistently to `DataFrameReader.json` in the Python API
      - De-duplicating a little bit of `DataFrameReader.json` in the Scala/Java API
      
      ## How was this patch tested?
      
      Manually built the documentation via `jekyll build`. Corresponding screenshots will be left as comments on the code.
      
      Note that currently there are Javadoc8 breaks in several places. These are proposed to be handled in https://github.com/apache/spark/pull/17477. So, this PR does not fix those.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17602 from HyukjinKwon/minor-json-documentation.
      bca4259f
    • Lee Dongjin's avatar
      [MINOR][DOCS] Fix spacings in Structured Streaming Programming Guide · b9384382
      Lee Dongjin authored
      ## What changes were proposed in this pull request?
      
      1. Omitted space between the sentences: `... on static data.The Spark SQL engine will ...` -> `... on static data. The Spark SQL engine will ...`
      2. Omitted colon in Output Model section.
      
      ## How was this patch tested?
      
      None.
      
      Author: Lee Dongjin <dongjin@apache.org>
      
      Closes #17564 from dongjinleekr/feature/fix-programming-guide.
      b9384382
  3. Apr 11, 2017
    • Dilip Biswal's avatar
      [SPARK-19993][SQL] Caching logical plans containing subquery expressions does not work. · b14bfc3f
      Dilip Biswal authored
      ## What changes were proposed in this pull request?
      The sameResult() method does not work when the logical plan contains subquery expressions.
      
      **Before the fix**
      ```SQL
      scala> val ds = spark.sql("select * from s1 where s1.c1 in (select s2.c1 from s2 where s1.c1 = s2.c1)")
      ds: org.apache.spark.sql.DataFrame = [c1: int]
      
      scala> ds.cache
      res13: ds.type = [c1: int]
      
      scala> spark.sql("select * from s1 where s1.c1 in (select s2.c1 from s2 where s1.c1 = s2.c1)").explain(true)
      == Analyzed Logical Plan ==
      c1: int
      Project [c1#86]
      +- Filter c1#86 IN (list#78 [c1#86])
         :  +- Project [c1#87]
         :     +- Filter (outer(c1#86) = c1#87)
         :        +- SubqueryAlias s2
         :           +- Relation[c1#87] parquet
         +- SubqueryAlias s1
            +- Relation[c1#86] parquet
      
      == Optimized Logical Plan ==
      Join LeftSemi, ((c1#86 = c1#87) && (c1#86 = c1#87))
      :- Relation[c1#86] parquet
      +- Relation[c1#87] parquet
      ```
      **Plan after fix**
      ```SQL
      == Analyzed Logical Plan ==
      c1: int
      Project [c1#22]
      +- Filter c1#22 IN (list#14 [c1#22])
         :  +- Project [c1#23]
         :     +- Filter (outer(c1#22) = c1#23)
         :        +- SubqueryAlias s2
         :           +- Relation[c1#23] parquet
         +- SubqueryAlias s1
            +- Relation[c1#22] parquet
      
      == Optimized Logical Plan ==
      InMemoryRelation [c1#22], true, 10000, StorageLevel(disk, memory, deserialized, 1 replicas)
         +- *BroadcastHashJoin [c1#1, c1#1], [c1#2, c1#2], LeftSemi, BuildRight
            :- *FileScan parquet default.s1[c1#1] Batched: true, Format: Parquet, Location: InMemoryFileIndex[file:/Users/dbiswal/mygit/apache/spark/bin/spark-warehouse/s1], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<c1:int>
            +- BroadcastExchange HashedRelationBroadcastMode(List((shiftleft(cast(input[0, int, true] as bigint), 32) | (cast(input[0, int, true] as bigint) & 4294967295))))
               +- *FileScan parquet default.s2[c1#2] Batched: true, Format: Parquet, Location: InMemoryFileIndex[file:/Users/dbiswal/mygit/apache/spark/bin/spark-warehouse/s2], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<c1:int>
      ```
      ## How was this patch tested?
      New tests are added to CachedTableSuite.
      
      Author: Dilip Biswal <dbiswal@us.ibm.com>
      
      Closes #17330 from dilipbiswal/subquery_cache_final.
      b14bfc3f
    • DB Tsai's avatar
      [SPARK-20291][SQL] NaNvl(FloatType, NullType) should not be cast to NaNvl(DoubleType, DoubleType) · 8ad63ee1
      DB Tsai authored
      ## What changes were proposed in this pull request?
      
      `NaNvl(float value, null)` will be converted into `NaNvl(float value, Cast(null, DoubleType))` and finally `NaNvl(Cast(float value, DoubleType), Cast(null, DoubleType))`.
      
      This causes a mismatch in the output type when the input type is float.
      
      Adding an extra rule in TypeCoercion resolves this issue.
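
      The shape of such a coercion, as a hedged sketch (the actual rule lives in `TypeCoercion` and may differ in detail):

      ```scala
      import org.apache.spark.sql.catalyst.expressions.{Cast, Expression, NaNvl}
      import org.apache.spark.sql.types.{DoubleType, FloatType, NullType}

      // Sketch: when the second argument is a null literal, cast it to the first
      // argument's type instead of letting both sides get widened to double.
      val coerceNaNvl: PartialFunction[Expression, Expression] = {
        case NaNvl(l, r) if l.dataType == DoubleType && r.dataType == NullType =>
          NaNvl(l, Cast(r, DoubleType))
        case NaNvl(l, r) if l.dataType == FloatType && r.dataType == NullType =>
          NaNvl(l, Cast(r, FloatType))
      }
      ```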
      
      ## How was this patch tested?
      
      Unit tests.
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: DB Tsai <dbt@netflix.com>
      
      Closes #17606 from dbtsai/fixNaNvl.
      8ad63ee1
    • Dongjoon Hyun's avatar
      [MINOR][DOCS] Update supported versions for Hive Metastore · cde9e328
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Since SPARK-18112 and SPARK-13446, Apache Spark supports reading Hive metastore 2.0 ~ 2.1.1. This updates the docs accordingly.
      
      ## How was this patch tested?
      
      N/A
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #17612 from dongjoon-hyun/metastore.
      cde9e328
    • David Gingrich's avatar
      [SPARK-19505][PYTHON] AttributeError on Exception.message in Python3 · 6297697f
      David Gingrich authored
      ## What changes were proposed in this pull request?
      
      Added `util._message_exception` helper to use `str(e)` when `e.message` is unavailable (Python3).  Grepped for all occurrences of `.message` in `pyspark/` and these were the only occurrences.
      
      ## How was this patch tested?
      
      - Doctests for helper function
      
      ## Legal
      
      This is my original work and I license the work to the project under the project’s open source license.
      
      Author: David Gingrich <david@textio.com>
      
      Closes #16845 from dgingrich/topic-spark-19505-py3-exceptions.
      6297697f
    • Reynold Xin's avatar
      [SPARK-20289][SQL] Use StaticInvoke to box primitive types · 123b4fbb
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      Dataset typed API currently uses NewInstance to box primitive types (i.e. calling the constructor). Instead, it'd be slightly more idiomatic in Java to use PrimitiveType.valueOf, which can be invoked using StaticInvoke expression.
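
      In plain terms, the boxing difference being described (illustrative only):

      ```scala
      // NewInstance-style boxing: always allocates a new wrapper object.
      val boxed1: java.lang.Integer = new java.lang.Integer(42)

      // StaticInvoke-style boxing: delegates to the JDK's valueOf, which can reuse
      // cached instances for small values.
      val boxed2: java.lang.Integer = java.lang.Integer.valueOf(42)
      ```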
      
      ## How was this patch tested?
      The change should be covered by existing tests for Dataset encoders.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17604 from rxin/SPARK-20289.
      123b4fbb
    • Liang-Chi Hsieh's avatar
      [SPARK-20175][SQL] Exists should not be evaluated in Join operator · cd91f967
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      Similar to `ListQuery`, `Exists` should not be evaluated in `Join` operator too.
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #17491 from viirya/dont-push-exists-to-join.
      cd91f967
    • Wenchen Fan's avatar
      [SPARK-20274][SQL] support compatible array element type in encoder · c8706980
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      This is a regression caused by SPARK-19716.
      
      Before SPARK-19716, we would cast an array field to the expected array type. After SPARK-19716, that cast is removed, but we forgot to push the cast down to the element level.
      
      ## How was this patch tested?
      
      new regression tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17587 from cloud-fan/array.
      c8706980
    • MirrorZ's avatar
      Document Master URL format in high availability set up · d11ef3d7
      MirrorZ authored
      ## What changes were proposed in this pull request?
      
      Add documentation for specifying the master URL in multi-host, multi-port format for a standalone cluster with ZooKeeper-based high availability, as shown in the sketch below.
      Refer to the documentation [Standby Masters with ZooKeeper](http://spark.apache.org/docs/latest/spark-standalone.html#standby-masters-with-zookeeper)
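
      For example (host names and ports below are placeholders), a driver can list all masters so it can fail over when the active master changes:

      ```scala
      import org.apache.spark.SparkConf

      // Illustrative: list every master in the ZooKeeper-managed ensemble, comma-separated.
      val conf = new SparkConf()
        .setMaster("spark://master1:7077,master2:7077")
        .setAppName("ha-example")
      ```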
      
      ## How was this patch tested?
      
      Documenting the functionality already present.
      
      Author: MirrorZ <chandrika3437@gmail.com>
      
      Closes #17584 from MirrorZ/master.
      d11ef3d7
    • Benjamin Fradet's avatar
      [SPARK-20097][ML] Fix visibility discrepancy with numInstances and degreesOfFreedom in LR and GLR · 0d2b7964
      Benjamin Fradet authored
      ## What changes were proposed in this pull request?
      
      - made `numInstances` public in GLR
      - made `degreesOfFreedom` public in LR
      
      ## How was this patch tested?
      
      reran the concerned test suites
      
      Author: Benjamin Fradet <benjamin.fradet@gmail.com>
      
      Closes #17431 from BenFradet/SPARK-20097.
      0d2b7964
  4. Apr 10, 2017
    • Shixiong Zhu's avatar
      [SPARK-17564][TESTS] Fix flaky RequestTimeoutIntegrationSuite.furtherRequestsDelay · 734dfbfc
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      This PR fixes the following failure:
      ```
      sbt.ForkMain$ForkError: java.lang.AssertionError: null
      	at org.junit.Assert.fail(Assert.java:86)
      	at org.junit.Assert.assertTrue(Assert.java:41)
      	at org.junit.Assert.assertTrue(Assert.java:52)
      	at org.apache.spark.network.RequestTimeoutIntegrationSuite.furtherRequestsDelay(RequestTimeoutIntegrationSuite.java:230)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:497)
      	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
      	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
      	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
      	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
      	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
      	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
      	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
      	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
      	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
      	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
      	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
      	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
      	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
      	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
      	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
      	at org.junit.runners.Suite.runChild(Suite.java:128)
      	at org.junit.runners.Suite.runChild(Suite.java:27)
      	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
      	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
      	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
      	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
      	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
      	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
      	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
      	at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
      	at com.novocode.junit.JUnitRunner$1.execute(JUnitRunner.java:132)
      	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
      	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      	at java.lang.Thread.run(Thread.java:745)
      ```
      
      It happens several times per month on [Jenkins](http://spark-tests.appspot.com/test-details?suite_name=org.apache.spark.network.RequestTimeoutIntegrationSuite&test_name=furtherRequestsDelay). The failure is because `callback1` may not be called before `assertTrue(callback1.failure instanceof IOException);`. It's pretty easy to reproduce this error by adding a sleep before this line: https://github.com/apache/spark/blob/379b0b0bbdbba2278ce3bcf471bd75f6ffd9cf0d/common/network-common/src/test/java/org/apache/spark/network/RequestTimeoutIntegrationSuite.java#L267
      
      The fix is straightforward: just use the latch to wait until `callback1` is called.
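
      Roughly the pattern involved, as an illustrative sketch rather than the suite's exact code:

      ```scala
      import java.util.concurrent.{CountDownLatch, TimeUnit}

      val latch = new CountDownLatch(1)

      // Inside the callback's success/failure handler:
      //   latch.countDown()

      // In the test body, wait for the callback before asserting on its result,
      // instead of racing with it:
      assert(latch.await(60, TimeUnit.SECONDS), "callback was not invoked in time")
      ```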
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #17599 from zsxwing/SPARK-17564.
      734dfbfc
    • Reynold Xin's avatar
      [SPARK-20283][SQL] Add preOptimizationBatches · 379b0b0b
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      We currently have postHocOptimizationBatches, but not preOptimizationBatches. This patch adds preOptimizationBatches so the optimizer debugging extensions are symmetric.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17595 from rxin/SPARK-20283.
      379b0b0b
    • Shixiong Zhu's avatar
      [SPARK-20282][SS][TESTS] Write the commit log first to fix a race condition in tests · a35b9d97
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      This PR fixes the following failure:
      ```
      sbt.ForkMain$ForkError: org.scalatest.exceptions.TestFailedException:
      Assert on query failed:
      
      == Progress ==
         AssertOnQuery(<condition>, )
         StopStream
         AddData to MemoryStream[value#30891]: 1,2
         StartStream(OneTimeTrigger,org.apache.spark.util.SystemClock35cdc93a,Map())
         CheckAnswer: [6],[3]
         StopStream
      => AssertOnQuery(<condition>, )
         AssertOnQuery(<condition>, )
         StartStream(OneTimeTrigger,org.apache.spark.util.SystemClockcdb247d,Map())
         CheckAnswer: [6],[3]
         StopStream
         AddData to MemoryStream[value#30891]: 3
         StartStream(OneTimeTrigger,org.apache.spark.util.SystemClock55394e4d,Map())
         CheckLastBatch: [2]
         StopStream
         AddData to MemoryStream[value#30891]: 0
         StartStream(OneTimeTrigger,org.apache.spark.util.SystemClock749aa997,Map())
         ExpectFailure[org.apache.spark.SparkException, isFatalError: false]
         AssertOnQuery(<condition>, )
         AssertOnQuery(<condition>, incorrect start offset or end offset on exception)
      
      == Stream ==
      Output Mode: Append
      Stream state: not started
      Thread state: dead
      
      == Sink ==
      0: [6] [3]
      
      == Plan ==
      
      	at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:495)
      	at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
      	at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
      	at org.scalatest.FunSuite.fail(FunSuite.scala:1555)
      	at org.apache.spark.sql.streaming.StreamTest$class.failTest$1(StreamTest.scala:347)
      	at org.apache.spark.sql.streaming.StreamTest$class.verify$1(StreamTest.scala:318)
      	at org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:483)
      	at org.apache.spark.sql.streaming.StreamTest$$anonfun$liftedTree1$1$1.apply(StreamTest.scala:357)
      	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
      	at org.apache.spark.sql.streaming.StreamTest$class.liftedTree1$1(StreamTest.scala:357)
      	at org.apache.spark.sql.streaming.StreamTest$class.testStream(StreamTest.scala:356)
      	at org.apache.spark.sql.streaming.StreamingQuerySuite.testStream(StreamingQuerySuite.scala:41)
      	at org.apache.spark.sql.streaming.StreamingQuerySuite$$anonfun$6.apply$mcV$sp(StreamingQuerySuite.scala:166)
      	at org.apache.spark.sql.streaming.StreamingQuerySuite$$anonfun$6.apply(StreamingQuerySuite.scala:161)
      	at org.apache.spark.sql.streaming.StreamingQuerySuite$$anonfun$6.apply(StreamingQuerySuite.scala:161)
      	at org.apache.spark.sql.catalyst.util.package$.quietly(package.scala:42)
      	at org.apache.spark.sql.test.SQLTestUtils$$anonfun$testQuietly$1.apply$mcV$sp(SQLTestUtils.scala:268)
      	at org.apache.spark.sql.test.SQLTestUtils$$anonfun$testQuietly$1.apply(SQLTestUtils.scala:268)
      	at org.apache.spark.sql.test.SQLTestUtils$$anonfun$testQuietly$1.apply(SQLTestUtils.scala:268)
      	at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
      	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
      	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
      	at org.scalatest.Transformer.apply(Transformer.scala:22)
      	at org.scalatest.Transformer.apply(Transformer.scala:20)
      	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
      	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:68)
      	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
      	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
      	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
      	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
      	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
      	at org.apache.spark.sql.streaming.StreamingQuerySuite.org$scalatest$BeforeAndAfterEach$$super$runTest(StreamingQuerySuite.scala:41)
      	at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:255)
      	at org.apache.spark.sql.streaming.StreamingQuerySuite.org$scalatest$BeforeAndAfter$$super$runTest(StreamingQuerySuite.scala:41)
      	at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:200)
      	at org.apache.spark.sql.streaming.StreamingQuerySuite.runTest(StreamingQuerySuite.scala:41)
      	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
      	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
      	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
      	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
      	at scala.collection.immutable.List.foreach(List.scala:381)
      	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
      	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
      	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
      	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
      	at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
      	at org.scalatest.Suite$class.run(Suite.scala:1424)
      	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
      	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
      	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
      	at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
      	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
      	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:31)
      	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
      	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
      	at org.apache.spark.sql.streaming.StreamingQuerySuite.org$scalatest$BeforeAndAfter$$super$run(StreamingQuerySuite.scala:41)
      	at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:241)
      	at org.apache.spark.sql.streaming.StreamingQuerySuite.run(StreamingQuerySuite.scala:41)
      	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:357)
      	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:502)
      	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
      	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      	at java.lang.Thread.run(Thread.java:745)
      ```
      
      The failure is because `CheckAnswer` will run once `committedOffsets` is updated. Then writing the commit log may be interrupted by the following `StopStream`.
      
      This PR simply changes the order so that the commit log is written first.
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #17594 from zsxwing/SPARK-20282.
      a35b9d97
    • Shixiong Zhu's avatar
      [SPARK-20285][TESTS] Increase the pyspark streaming test timeout to 30 seconds · f9a50ba2
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Saw the following failure locally:
      
      ```
      Traceback (most recent call last):
        File "/home/jenkins/workspace/python/pyspark/streaming/tests.py", line 351, in test_cogroup
          self._test_func(input, func, expected, sort=True, input2=input2)
        File "/home/jenkins/workspace/python/pyspark/streaming/tests.py", line 162, in _test_func
          self.assertEqual(expected, result)
      AssertionError: Lists differ: [[(1, ([1], [2])), (2, ([1], [... != []
      
      First list contains 3 additional elements.
      First extra element 0:
      [(1, ([1], [2])), (2, ([1], [])), (3, ([1], []))]
      
      + []
      - [[(1, ([1], [2])), (2, ([1], [])), (3, ([1], []))],
      -  [(1, ([1, 1, 1], [])), (2, ([1], [])), (4, ([], [1]))],
      -  [('', ([1, 1], [1, 2])), ('a', ([1, 1], [1, 1])), ('b', ([1], [1]))]]
      ```
      
      It also happened on Jenkins: http://spark-tests.appspot.com/builds/spark-branch-2.1-test-sbt-hadoop-2.7/120
      
      It's because when the machine is overloaded, the timeout is not enough. This PR just increases the timeout to 30 seconds.
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #17597 from zsxwing/SPARK-20285.
      f9a50ba2
    • Bogdan Raducanu's avatar
      [SPARK-20280][CORE] FileStatusCache Weigher integer overflow · f6dd8e0e
      Bogdan Raducanu authored
      ## What changes were proposed in this pull request?
      
      Weigher.weigh needs to return an Int, but it is possible for an Array[FileStatus] to have an estimated size > Int.MaxValue. To avoid this, the size is scaled down by a factor of 32. The maximumWeight of the cache is also scaled down by the same factor.
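
      Conceptually the change looks like the following hedged sketch; the real cache's key/value types and size estimate differ:

      ```scala
      import com.google.common.cache.Weigher

      // The cache's maximumWeight must be scaled down by the same factor.
      val weightScale = 32

      // Placeholder key/value types; the real cache maps a path to an Array[FileStatus].
      val weigher = new Weigher[String, Array[String]] {
        override def weigh(key: String, value: Array[String]): Int = {
          // Stand-in for a real per-entry size estimate in bytes.
          val estimatedBytes = value.length.toLong * 100L
          // Scale down and clamp so the returned weight always fits in an Int.
          math.min(estimatedBytes / weightScale, Int.MaxValue.toLong).toInt
        }
      }
      ```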
      
      ## How was this patch tested?
      New test in FileIndexSuite
      
      Author: Bogdan Raducanu <bogdan@databricks.com>
      
      Closes #17591 from bogdanrdc/SPARK-20280.
      f6dd8e0e
    • Sean Owen's avatar
      [SPARK-20156][CORE][SQL][STREAMING][MLLIB] Java String toLowerCase "Turkish... · a26e3ed5
      Sean Owen authored
      [SPARK-20156][CORE][SQL][STREAMING][MLLIB] Java String toLowerCase "Turkish locale bug" causes Spark problems
      
      ## What changes were proposed in this pull request?
      
      Add Locale.ROOT to internal calls to String `toLowerCase`, `toUpperCase`, to avoid inadvertent locale-sensitive variation in behavior (aka the "Turkish locale problem").
      
      The change looks large but it is just adding `Locale.ROOT` (the locale with no country or language specified) to every call to these methods.
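
      A small illustration of the locale pitfall being avoided:

      ```scala
      import java.util.Locale

      // Under a Turkish default locale, "I".toLowerCase yields the dotless "ı",
      // which breaks case-insensitive comparisons against ASCII identifiers.
      // Pinning Locale.ROOT keeps the result locale-independent.
      "SPARK_IGNORE".toLowerCase(Locale.ROOT)   // always "spark_ignore"
      ```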
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #17527 from srowen/SPARK-20156.
      a26e3ed5
    • Xiao Li's avatar
      [SPARK-20273][SQL] Disallow Non-deterministic Filter push-down into Join Conditions · fd711ea1
      Xiao Li authored
      ## What changes were proposed in this pull request?
      ```
      sql("SELECT t1.b, rand(0) as r FROM cachedData, cachedData t1 GROUP BY t1.b having r > 0.5").show()
      ```
      We will get the following error:
      ```
      Job aborted due to stage failure: Task 1 in stage 4.0 failed 1 times, most recent failure: Lost task 1.0 in stage 4.0 (TID 8, localhost, executor driver): java.lang.NullPointerException
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificPredicate.eval(Unknown Source)
      	at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoinExec$$anonfun$org$apache$spark$sql$execution$joins$BroadcastNestedLoopJoinExec$$boundCondition$1.apply(BroadcastNestedLoopJoinExec.scala:87)
      	at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoinExec$$anonfun$org$apache$spark$sql$execution$joins$BroadcastNestedLoopJoinExec$$boundCondition$1.apply(BroadcastNestedLoopJoinExec.scala:87)
      	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:463)
      ```
      Filters can be pushed down into join conditions by the optimizer rule `PushPredicateThroughJoin`. However, the Analyzer [blocks users from adding non-deterministic conditions](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala#L386-L395) (for details, see the PR https://github.com/apache/spark/pull/7535).
      
      We should not push down non-deterministic conditions; otherwise, we would need to explicitly initialize the non-deterministic expressions. This PR simply blocks such push-down.
      
      ### How was this patch tested?
      Added a test case
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #17585 from gatorsmile/joinRandCondition.
      fd711ea1
    • hyukjinkwon's avatar
      [SPARK-19518][SQL] IGNORE NULLS in first / last in SQL · 5acaf8c0
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to add `IGNORE NULLS` keyword in `first`/`last` in Spark's parser likewise http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions057.htm.  This simply maps the keywords to existing `ignoreNullsExpr`.
      
      **Before**
      
      ```scala
      scala> sql("select first('a' IGNORE NULLS)").show()
      ```
      
      ```
      org.apache.spark.sql.catalyst.parser.ParseException:
      extraneous input 'NULLS' expecting {')', ','}(line 1, pos 24)
      
      == SQL ==
      select first('a' IGNORE NULLS)
      ------------------------^^^
      
        at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:210)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:112)
        at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:46)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:66)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:622)
        ... 48 elided
      ```
      
      **After**
      
      ```scala
      scala> sql("select first('a' IGNORE NULLS)").show()
      ```
      
      ```
      +--------------+
      |first(a, true)|
      +--------------+
      |             a|
      +--------------+
      ```
      
      ## How was this patch tested?
      
      Unit tests in `ExpressionParserSuite`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17566 from HyukjinKwon/SPARK-19518.
      5acaf8c0
    • Bogdan Raducanu's avatar
      [SPARK-20243][TESTS] DebugFilesystem.assertNoOpenStreams thread race · 4f7d49b9
      Bogdan Raducanu authored
      ## What changes were proposed in this pull request?
      
      Synchronize access to the openStreams map.
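
      The gist, as a hypothetical sketch (the actual DebugFilesystem fields and call sites may differ):

      ```scala
      import scala.collection.mutable

      object DebugFs {
        // Guard every read and write of the shared map with the same lock so that a
        // stream registered on one thread is visible to the assertion on another.
        private val openStreams = mutable.Map.empty[Long, Throwable]

        def addOpenStream(id: Long, origin: Throwable): Unit = openStreams.synchronized {
          openStreams(id) = origin
        }

        def removeOpenStream(id: Long): Unit = openStreams.synchronized {
          openStreams.remove(id)
        }

        def assertNoOpenStreams(): Unit = openStreams.synchronized {
          assert(openStreams.isEmpty, s"There are ${openStreams.size} possibly leaked streams.")
        }
      }
      ```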
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Bogdan Raducanu <bogdan@databricks.com>
      
      Closes #17592 from bogdanrdc/SPARK-20243.
      4f7d49b9
    • Wenchen Fan's avatar
      [SPARK-20229][SQL] add semanticHash to QueryPlan · 3d7f201f
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Like `Expression`, `QueryPlan` should also have a `semanticHash` method, so that we can put plans in a hash map and look them up quickly. This PR refactors `QueryPlan` to follow `Expression` and puts all the normalization logic in `QueryPlan.canonicalized`, so that implementing `semanticHash` becomes natural.
      
      follow-up: improve `CacheManager` to leverage this `semanticHash` and speed up plan lookup, instead of iterating all cached plans.
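
      The resulting shape, sketched in simplified form (the real `QueryPlan` is considerably more involved):

      ```scala
      // Simplified sketch: once `canonicalized` strips cosmetic differences
      // (expression ids, aliases, ...), semantic comparison and hashing are trivial.
      abstract class Plan {
        /** A normalized copy of this plan with cosmetic variations removed. */
        lazy val canonicalized: Plan = this

        def sameResult(other: Plan): Boolean = canonicalized == other.canonicalized

        def semanticHash(): Int = canonicalized.hashCode()
      }
      ```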
      
      ## How was this patch tested?
      
      Existing tests. Note that we don't need to test the `semanticHash` method directly: once the existing tests prove `sameResult` is correct, we are good.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17541 from cloud-fan/plan-semantic.
      3d7f201f
    • DB Tsai's avatar
      [SPARK-20270][SQL] na.fill should not change the values in long or integer... · 1a0bc416
      DB Tsai authored
      [SPARK-20270][SQL] na.fill should not change the values in long or integer when the default value is in double
      
      ## What changes were proposed in this pull request?
      
      This bug was partially addressed in SPARK-18555 (https://github.com/apache/spark/pull/15994), but the root cause wasn't completely solved. The bug is pretty critical since it changes a member id stored as a Long in our application whenever the id is too big to be represented losslessly by a Double.
      
      Here is an example how this happens, with
      ```
            Seq[(java.lang.Long, java.lang.Double)]((null, 3.14), (9123146099426677101L, null),
              (9123146560113991650L, 1.6), (null, null)).toDF("a", "b").na.fill(0.2),
      ```
      the logical plan will be
      ```
      == Analyzed Logical Plan ==
      a: bigint, b: double
      Project [cast(coalesce(cast(a#232L as double), cast(0.2 as double)) as bigint) AS a#240L, cast(coalesce(nanvl(b#233, cast(null as double)), 0.2) as double) AS b#241]
      +- Project [_1#229L AS a#232L, _2#230 AS b#233]
         +- LocalRelation [_1#229L, _2#230]
      ```
      
      Note that even when the value is not null, Spark will cast the Long to Double first. Then, if it's not null, Spark will cast it back to Long, which loses precision.
      
      The original value should not be changed when it is not null, but Spark changes it, which is wrong.
      
      With the PR, the logical plan will be
      ```
      == Analyzed Logical Plan ==
      a: bigint, b: double
      Project [coalesce(a#232L, cast(0.2 as bigint)) AS a#240L, coalesce(nanvl(b#233, cast(null as double)), cast(0.2 as double)) AS b#241]
      +- Project [_1#229L AS a#232L, _2#230 AS b#233]
         +- LocalRelation [_1#229L, _2#230]
      ```
      which behaves correctly without changing the original Long values and also avoids extra cost of unnecessary casting.
      
      ## How was this patch tested?
      
      unit test added.
      
      +cc srowen rxin cloud-fan gatorsmile
      
      Thanks.
      
      Author: DB Tsai <dbt@netflix.com>
      
      Closes #17577 from dbtsai/fixnafill.
      1a0bc416
  5. Apr 09, 2017
    • Reynold Xin's avatar
      [SPARK-20264][SQL] asm should be non-test dependency in sql/core · 7bfa05e0
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      The sql/core module currently declares asm as a test-scope dependency. Transitively it should actually be a normal dependency, since the core module itself defines it. This occasionally confuses IntelliJ.
      
      ## How was this patch tested?
      N/A - This is a build change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17574 from rxin/SPARK-20264.
      7bfa05e0
    • Kazuaki Ishizaki's avatar
      [SPARK-20253][SQL] Remove unnecessary nullchecks of a return value from Spark... · 7a63f5e8
      Kazuaki Ishizaki authored
      [SPARK-20253][SQL] Remove unnecessary nullchecks of a return value from Spark runtime routines in generated Java code
      
      ## What changes were proposed in this pull request?
      
      This PR eliminates unnecessary null checks of return values from known Spark runtime routines. We know whether a given Spark runtime routine returns ``null`` or not (e.g. ``ArrayData.toDoubleArray()`` never returns ``null``). Thus, we can eliminate the null check for the return value from such a routine.
      
      When we run the following example program, we currently get the Java code shown under "Without this PR". In that code, since we know ``ArrayData.toDoubleArray()`` never returns ``null``, we can eliminate the null checks at lines 90-92 and 97.
      
      ```scala
      val ds = sparkContext.parallelize(Seq(Array(1.1, 2.2)), 1).toDS.cache
      ds.count
      ds.map(e => e).show
      ```
      
      Without this PR
      ```java
      /* 050 */   protected void processNext() throws java.io.IOException {
      /* 051 */     while (inputadapter_input.hasNext() && !stopEarly()) {
      /* 052 */       InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
      /* 053 */       boolean inputadapter_isNull = inputadapter_row.isNullAt(0);
      /* 054 */       ArrayData inputadapter_value = inputadapter_isNull ? null : (inputadapter_row.getArray(0));
      /* 055 */
      /* 056 */       ArrayData deserializetoobject_value1 = null;
      /* 057 */
      /* 058 */       if (!inputadapter_isNull) {
      /* 059 */         int deserializetoobject_dataLength = inputadapter_value.numElements();
      /* 060 */
      /* 061 */         Double[] deserializetoobject_convertedArray = null;
      /* 062 */         deserializetoobject_convertedArray = new Double[deserializetoobject_dataLength];
      /* 063 */
      /* 064 */         int deserializetoobject_loopIndex = 0;
      /* 065 */         while (deserializetoobject_loopIndex < deserializetoobject_dataLength) {
      /* 066 */           MapObjects_loopValue2 = (double) (inputadapter_value.getDouble(deserializetoobject_loopIndex));
      /* 067 */           MapObjects_loopIsNull2 = inputadapter_value.isNullAt(deserializetoobject_loopIndex);
      /* 068 */
      /* 069 */           if (MapObjects_loopIsNull2) {
      /* 070 */             throw new RuntimeException(((java.lang.String) references[0]));
      /* 071 */           }
      /* 072 */           if (false) {
      /* 073 */             deserializetoobject_convertedArray[deserializetoobject_loopIndex] = null;
      /* 074 */           } else {
      /* 075 */             deserializetoobject_convertedArray[deserializetoobject_loopIndex] = MapObjects_loopValue2;
      /* 076 */           }
      /* 077 */
      /* 078 */           deserializetoobject_loopIndex += 1;
      /* 079 */         }
      /* 080 */
      /* 081 */         deserializetoobject_value1 = new org.apache.spark.sql.catalyst.util.GenericArrayData(deserializetoobject_convertedArray); /*###*/
      /* 082 */       }
      /* 083 */       boolean deserializetoobject_isNull = true;
      /* 084 */       double[] deserializetoobject_value = null;
      /* 085 */       if (!inputadapter_isNull) {
      /* 086 */         deserializetoobject_isNull = false;
      /* 087 */         if (!deserializetoobject_isNull) {
      /* 088 */           Object deserializetoobject_funcResult = null;
      /* 089 */           deserializetoobject_funcResult = deserializetoobject_value1.toDoubleArray();
      /* 090 */           if (deserializetoobject_funcResult == null) {
      /* 091 */             deserializetoobject_isNull = true;
      /* 092 */           } else {
      /* 093 */             deserializetoobject_value = (double[]) deserializetoobject_funcResult;
      /* 094 */           }
      /* 095 */
      /* 096 */         }
      /* 097 */         deserializetoobject_isNull = deserializetoobject_value == null;
      /* 098 */       }
      /* 099 */
      /* 100 */       boolean mapelements_isNull = true;
      /* 101 */       double[] mapelements_value = null;
      /* 102 */       if (!false) {
      /* 103 */         mapelements_resultIsNull = false;
      /* 104 */
      /* 105 */         if (!mapelements_resultIsNull) {
      /* 106 */           mapelements_resultIsNull = deserializetoobject_isNull;
      /* 107 */           mapelements_argValue = deserializetoobject_value;
      /* 108 */         }
      /* 109 */
      /* 110 */         mapelements_isNull = mapelements_resultIsNull;
      /* 111 */         if (!mapelements_isNull) {
      /* 112 */           Object mapelements_funcResult = null;
      /* 113 */           mapelements_funcResult = ((scala.Function1) references[1]).apply(mapelements_argValue);
      /* 114 */           if (mapelements_funcResult == null) {
      /* 115 */             mapelements_isNull = true;
      /* 116 */           } else {
      /* 117 */             mapelements_value = (double[]) mapelements_funcResult;
      /* 118 */           }
      /* 119 */
      /* 120 */         }
      /* 121 */         mapelements_isNull = mapelements_value == null;
      /* 122 */       }
      /* 123 */
      /* 124 */       serializefromobject_resultIsNull = false;
      /* 125 */
      /* 126 */       if (!serializefromobject_resultIsNull) {
      /* 127 */         serializefromobject_resultIsNull = mapelements_isNull;
      /* 128 */         serializefromobject_argValue = mapelements_value;
      /* 129 */       }
      /* 130 */
      /* 131 */       boolean serializefromobject_isNull = serializefromobject_resultIsNull;
      /* 132 */       final ArrayData serializefromobject_value = serializefromobject_resultIsNull ? null : org.apache.spark.sql.catalyst.expressions.UnsafeArrayData.fromPrimitiveArray(serializefromobject_argValue);
      /* 133 */       serializefromobject_isNull = serializefromobject_value == null;
      /* 134 */       serializefromobject_holder.reset();
      /* 135 */
      /* 136 */       serializefromobject_rowWriter.zeroOutNullBytes();
      /* 137 */
      /* 138 */       if (serializefromobject_isNull) {
      /* 139 */         serializefromobject_rowWriter.setNullAt(0);
      /* 140 */       } else {
      /* 141 */         // Remember the current cursor so that we can calculate how many bytes are
      /* 142 */         // written later.
      /* 143 */         final int serializefromobject_tmpCursor = serializefromobject_holder.cursor;
      /* 144 */
      /* 145 */         if (serializefromobject_value instanceof UnsafeArrayData) {
      /* 146 */           final int serializefromobject_sizeInBytes = ((UnsafeArrayData) serializefromobject_value).getSizeInBytes();
      /* 147 */           // grow the global buffer before writing data.
      /* 148 */           serializefromobject_holder.grow(serializefromobject_sizeInBytes);
      /* 149 */           ((UnsafeArrayData) serializefromobject_value).writeToMemory(serializefromobject_holder.buffer, serializefromobject_holder.cursor);
      /* 150 */           serializefromobject_holder.cursor += serializefromobject_sizeInBytes;
      /* 151 */
      /* 152 */         } else {
      /* 153 */           final int serializefromobject_numElements = serializefromobject_value.numElements();
      /* 154 */           serializefromobject_arrayWriter.initialize(serializefromobject_holder, serializefromobject_numElements, 8);
      /* 155 */
      /* 156 */           for (int serializefromobject_index = 0; serializefromobject_index < serializefromobject_numElements; serializefromobject_index++) {
      /* 157 */             if (serializefromobject_value.isNullAt(serializefromobject_index)) {
      /* 158 */               serializefromobject_arrayWriter.setNullDouble(serializefromobject_index);
      /* 159 */             } else {
      /* 160 */               final double serializefromobject_element = serializefromobject_value.getDouble(serializefromobject_index);
      /* 161 */               serializefromobject_arrayWriter.write(serializefromobject_index, serializefromobject_element);
      /* 162 */             }
      /* 163 */           }
      /* 164 */         }
      /* 165 */
      /* 166 */         serializefromobject_rowWriter.setOffsetAndSize(0, serializefromobject_tmpCursor, serializefromobject_holder.cursor - serializefromobject_tmpCursor);
      /* 167 */       }
      /* 168 */       serializefromobject_result.setTotalSize(serializefromobject_holder.totalSize());
      /* 169 */       append(serializefromobject_result);
      /* 170 */       if (shouldStop()) return;
      /* 171 */     }
      /* 172 */   }
      ```
      
      With this PR (removed most of lines 90-97 in the above code)
      ```java
      /* 050 */   protected void processNext() throws java.io.IOException {
      /* 051 */     while (inputadapter_input.hasNext() && !stopEarly()) {
      /* 052 */       InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
      /* 053 */       boolean inputadapter_isNull = inputadapter_row.isNullAt(0);
      /* 054 */       ArrayData inputadapter_value = inputadapter_isNull ? null : (inputadapter_row.getArray(0));
      /* 055 */
      /* 056 */       ArrayData deserializetoobject_value1 = null;
      /* 057 */
      /* 058 */       if (!inputadapter_isNull) {
      /* 059 */         int deserializetoobject_dataLength = inputadapter_value.numElements();
      /* 060 */
      /* 061 */         Double[] deserializetoobject_convertedArray = null;
      /* 062 */         deserializetoobject_convertedArray = new Double[deserializetoobject_dataLength];
      /* 063 */
      /* 064 */         int deserializetoobject_loopIndex = 0;
      /* 065 */         while (deserializetoobject_loopIndex < deserializetoobject_dataLength) {
      /* 066 */           MapObjects_loopValue2 = (double) (inputadapter_value.getDouble(deserializetoobject_loopIndex));
      /* 067 */           MapObjects_loopIsNull2 = inputadapter_value.isNullAt(deserializetoobject_loopIndex);
      /* 068 */
      /* 069 */           if (MapObjects_loopIsNull2) {
      /* 070 */             throw new RuntimeException(((java.lang.String) references[0]));
      /* 071 */           }
      /* 072 */           if (false) {
      /* 073 */             deserializetoobject_convertedArray[deserializetoobject_loopIndex] = null;
      /* 074 */           } else {
      /* 075 */             deserializetoobject_convertedArray[deserializetoobject_loopIndex] = MapObjects_loopValue2;
      /* 076 */           }
      /* 077 */
      /* 078 */           deserializetoobject_loopIndex += 1;
      /* 079 */         }
      /* 080 */
      /* 081 */         deserializetoobject_value1 = new org.apache.spark.sql.catalyst.util.GenericArrayData(deserializetoobject_convertedArray); /*###*/
      /* 082 */       }
      /* 083 */       boolean deserializetoobject_isNull = true;
      /* 084 */       double[] deserializetoobject_value = null;
      /* 085 */       if (!inputadapter_isNull) {
      /* 086 */         deserializetoobject_isNull = false;
      /* 087 */         if (!deserializetoobject_isNull) {
      /* 088 */           Object deserializetoobject_funcResult = null;
      /* 089 */           deserializetoobject_funcResult = deserializetoobject_value1.toDoubleArray();
      /* 090 */           deserializetoobject_value = (double[]) deserializetoobject_funcResult;
      /* 091 */
      /* 092 */         }
      /* 093 */
      /* 094 */       }
      /* 095 */
      /* 096 */       boolean mapelements_isNull = true;
      /* 097 */       double[] mapelements_value = null;
      /* 098 */       if (!false) {
      /* 099 */         mapelements_resultIsNull = false;
      /* 100 */
      /* 101 */         if (!mapelements_resultIsNull) {
      /* 102 */           mapelements_resultIsNull = deserializetoobject_isNull;
      /* 103 */           mapelements_argValue = deserializetoobject_value;
      /* 104 */         }
      /* 105 */
      /* 106 */         mapelements_isNull = mapelements_resultIsNull;
      /* 107 */         if (!mapelements_isNull) {
      /* 108 */           Object mapelements_funcResult = null;
      /* 109 */           mapelements_funcResult = ((scala.Function1) references[1]).apply(mapelements_argValue);
      /* 110 */           if (mapelements_funcResult == null) {
      /* 111 */             mapelements_isNull = true;
      /* 112 */           } else {
      /* 113 */             mapelements_value = (double[]) mapelements_funcResult;
      /* 114 */           }
      /* 115 */
      /* 116 */         }
      /* 117 */         mapelements_isNull = mapelements_value == null;
      /* 118 */       }
      /* 119 */
      /* 120 */       serializefromobject_resultIsNull = false;
      /* 121 */
      /* 122 */       if (!serializefromobject_resultIsNull) {
      /* 123 */         serializefromobject_resultIsNull = mapelements_isNull;
      /* 124 */         serializefromobject_argValue = mapelements_value;
      /* 125 */       }
      /* 126 */
      /* 127 */       boolean serializefromobject_isNull = serializefromobject_resultIsNull;
      /* 128 */       final ArrayData serializefromobject_value = serializefromobject_resultIsNull ? null : org.apache.spark.sql.catalyst.expressions.UnsafeArrayData.fromPrimitiveArray(serializefromobject_argValue);
      /* 129 */       serializefromobject_isNull = serializefromobject_value == null;
      /* 130 */       serializefromobject_holder.reset();
      /* 131 */
      /* 132 */       serializefromobject_rowWriter.zeroOutNullBytes();
      /* 133 */
      /* 134 */       if (serializefromobject_isNull) {
      /* 135 */         serializefromobject_rowWriter.setNullAt(0);
      /* 136 */       } else {
      /* 137 */         // Remember the current cursor so that we can calculate how many bytes are
      /* 138 */         // written later.
      /* 139 */         final int serializefromobject_tmpCursor = serializefromobject_holder.cursor;
      /* 140 */
      /* 141 */         if (serializefromobject_value instanceof UnsafeArrayData) {
      /* 142 */           final int serializefromobject_sizeInBytes = ((UnsafeArrayData) serializefromobject_value).getSizeInBytes();
      /* 143 */           // grow the global buffer before writing data.
      /* 144 */           serializefromobject_holder.grow(serializefromobject_sizeInBytes);
      /* 145 */           ((UnsafeArrayData) serializefromobject_value).writeToMemory(serializefromobject_holder.buffer, serializefromobject_holder.cursor);
      /* 146 */           serializefromobject_holder.cursor += serializefromobject_sizeInBytes;
      /* 147 */
      /* 148 */         } else {
      /* 149 */           final int serializefromobject_numElements = serializefromobject_value.numElements();
      /* 150 */           serializefromobject_arrayWriter.initialize(serializefromobject_holder, serializefromobject_numElements, 8);
      /* 151 */
      /* 152 */           for (int serializefromobject_index = 0; serializefromobject_index < serializefromobject_numElements; serializefromobject_index++) {
      /* 153 */             if (serializefromobject_value.isNullAt(serializefromobject_index)) {
      /* 154 */               serializefromobject_arrayWriter.setNullDouble(serializefromobject_index);
      /* 155 */             } else {
      /* 156 */               final double serializefromobject_element = serializefromobject_value.getDouble(serializefromobject_index);
      /* 157 */               serializefromobject_arrayWriter.write(serializefromobject_index, serializefromobject_element);
      /* 158 */             }
      /* 159 */           }
      /* 160 */         }
      /* 161 */
      /* 162 */         serializefromobject_rowWriter.setOffsetAndSize(0, serializefromobject_tmpCursor, serializefromobject_holder.cursor - serializefromobject_tmpCursor);
      /* 163 */       }
      /* 164 */       serializefromobject_result.setTotalSize(serializefromobject_holder.totalSize());
      /* 165 */       append(serializefromobject_result);
      /* 166 */       if (shouldStop()) return;
      /* 167 */     }
      /* 168 */   }
      ```
      
      ## How was this patch tested?
      
      Add test suites to `DatasetPrimitiveSuite`.
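
      As a rough illustration only (the actual tests live in `DatasetPrimitiveSuite`; the object name and values below are made up), the code path exercised by the generated code above is a round trip over a Dataset of primitive arrays:

      ```scala
      import org.apache.spark.sql.SparkSession

      object ArrayDatasetExample {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[2]").appName("array-ds").getOrCreate()
          import spark.implicits._

          // A Dataset of primitive double arrays; mapping over it exercises the
          // deserialize -> map -> serialize pipeline shown in the generated code above.
          val ds = Seq(Array(1.1, 2.2), Array(3.3)).toDS()
          val doubled = ds.map(arr => arr.map(_ * 2.0))

          doubled.collect().foreach(a => println(a.mkString(", ")))  // 2.2, 4.4 and 6.6
          spark.stop()
        }
      }
      ```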
      
      Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
      
      Closes #17569 from kiszk/SPARK-20253.
      7a63f5e8
    • Vijay Ramesh's avatar
      [SPARK-20260][MLLIB] String interpolation required for error message · 261eaf51
      Vijay Ramesh authored
      ## What changes were proposed in this pull request?
      This error message isn't interpolated because it is missing the `s` string-interpolator prefix.  Currently the error looks like:
      
      ```
      Caused by: java.lang.IllegalArgumentException: requirement failed: indices should be one-based and in ascending order; found current=$current, previous=$previous; line="$line"
      ```
      (note the literal `$current` instead of the interpolated value)
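
      As a minimal sketch of the fix (hypothetical values, not the actual MLlib source), prefixing the message with the `s` interpolator makes Scala substitute the variables into the `requirement failed` message:

      ```scala
      object InterpolationExample extends App {
        val current = 3
        val previous = 5
        val line = "1:2.0 3:4.0"

        // Without the `s` prefix the message contains the literal text "$current";
        // with it, the values are substituted. This require fails and throws
        // IllegalArgumentException("requirement failed: indices should be ...").
        require(current > previous,
          s"indices should be one-based and in ascending order; " +
            s"found current=$current, previous=$previous; line=\"$line\"")
      }
      ```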
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Vijay Ramesh <vramesh@demandbase.com>
      
      Closes #17572 from vijaykramesh/master.
      261eaf51
    • Sean Owen's avatar
      [SPARK-19991][CORE][YARN] FileSegmentManagedBuffer performance improvement · 1f0de3c1
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Avoid `NoSuchElementException` every time `ConfigProvider.get(val, default)` falls back to the default. This apparently causes non-trivial overhead in at least one path, and can easily be avoided.
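
      A sketch of the pattern only (the class below is hypothetical, not the actual `ConfigProvider` code): the exception-driven fallback constructs and unwinds an exception on every miss, while a presence check does not:

      ```scala
      // Hypothetical stand-in for a config provider; not the Spark implementation.
      class SimpleConfigProvider(settings: Map[String, String]) {

        def get(name: String): String =
          settings.getOrElse(name, throw new NoSuchElementException(name))

        // Exception-driven fallback: every miss pays for exception construction.
        def getSlow(name: String, default: String): String =
          try get(name) catch { case _: NoSuchElementException => default }

        // Cheaper fallback: check presence instead of catching the exception.
        def getFast(name: String, default: String): String =
          settings.getOrElse(name, default)
      }
      ```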
      
      See https://github.com/apache/spark/pull/17329
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #17567 from srowen/SPARK-19991.
      1f0de3c1
    • asmith26's avatar
      [MINOR] Issue: Change "slice" vs "partition" in exception messages (and code?) · 34fc48fb
      asmith26 authored
      ## What changes were proposed in this pull request?
      
      I came across the term "slice" when running some Spark Scala code. A Google search indicated that "slices" and "partitions" refer to the same thing; see:
      
      - [This issue](https://issues.apache.org/jira/browse/SPARK-1701)
      - [This pull request](https://github.com/apache/spark/pull/2305)
      - [This StackOverflow answer](http://stackoverflow.com/questions/23436640/what-is-the-difference-between-an-rdd-partition-and-a-slice) and [this one](http://stackoverflow.com/questions/24269495/what-are-the-differences-between-slices-and-partitions-of-rdds)
      
      Thus this pull request fixes the occurrence of "slice" I came across. Nonetheless, [it would appear](https://github.com/apache/spark/search?utf8=%E2%9C%93&q=slice&type=) there are still many references to "slice"/"slices", so I thought I'd raise this pull request to address the issue (sorry if this is the wrong place; I'm not too familiar with raising Apache issues).
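
      For context (an illustration, not part of this PR), the term still surfaces in the public API, e.g. the `numSlices` parameter of `SparkContext.parallelize`, where each slice becomes one partition:

      ```scala
      import org.apache.spark.{SparkConf, SparkContext}

      object SliceVsPartition {
        def main(args: Array[String]): Unit = {
          val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("slices"))
          // The parameter is still called `numSlices`, but each slice becomes one partition.
          val rdd = sc.parallelize(1 to 100, numSlices = 4)
          println(rdd.getNumPartitions) // 4
          sc.stop()
        }
      }
      ```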
      
      ## How was this patch tested?
      
      (Not tested locally - only a minor exception message change.)
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: asmith26 <asmith26@users.noreply.github.com>
      
      Closes #17565 from asmith26/master.
      34fc48fb
  6. Apr 07, 2017
    • Reynold Xin's avatar
      [SPARK-20262][SQL] AssertNotNull should throw NullPointerException · e1afc4dc
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      `AssertNotNull` currently throws `RuntimeException`. It should throw `NullPointerException`, which is more specific.
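
      A minimal sketch of the behavioural change (the helper below is hypothetical, not the Catalyst `AssertNotNull` expression itself):

      ```scala
      object AssertNotNullExample {
        // Previously a generic RuntimeException was thrown here; the change is to
        // throw the more specific NullPointerException instead.
        def assertNotNull[T <: AnyRef](value: T, fieldPath: String): T = {
          if (value == null) {
            throw new NullPointerException(
              s"Null value appeared in non-nullable field: $fieldPath")
          }
          value
        }

        def main(args: Array[String]): Unit = {
          println(assertNotNull("ok", "top.field"))    // prints "ok"
          // assertNotNull(null: String, "top.field")  // would throw NullPointerException
        }
      }
      ```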
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17573 from rxin/SPARK-20262.
      e1afc4dc
    • Wenchen Fan's avatar
      [SPARK-20246][SQL] should not push predicate down through aggregate with... · 7577e9c3
      Wenchen Fan authored
      [SPARK-20246][SQL] should not push predicate down through aggregate with non-deterministic expressions
      
      ## What changes were proposed in this pull request?
      
      Similar to `Project`, when `Aggregate` has non-deterministic expressions, we should not push a predicate down through it, as doing so would change the number of input rows and thus change the evaluation result of the non-deterministic expressions in the `Aggregate`.
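
      A hedged illustration (column names and data are made up, and this is not the regression test itself): the filter below is on a grouping column and could otherwise be pushed beneath the aggregate, but because `first(rand())` is non-deterministic, pushing it would change the rows that expression is evaluated over:

      ```scala
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions.{first, rand, sum}

      object NonDeterministicAggExample {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().master("local[2]").appName("agg-pushdown").getOrCreate()
          import spark.implicits._

          val df = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("key", "value")

          // The aggregate contains a non-deterministic expression (rand()), so the
          // filter on `key` must stay above the Aggregate rather than be pushed below it.
          val result = df.groupBy($"key")
            .agg(sum($"value").as("total"), first(rand()).as("noise"))
            .filter($"key" === "a")

          result.show()
          spark.stop()
        }
      }
      ```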
      
      ## How was this patch tested?
      
      new regression test
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17562 from cloud-fan/filter.
      7577e9c3