  1. Nov 26, 2015
    • Shixiong Zhu's avatar
      [SPARK-11996][CORE] Make the executor thread dump work again · 0c1e72e7
      Shixiong Zhu authored
      In the previous implementation, the driver needs to know the executor listening address to send the thread dump request. However, in Netty RPC, the executor doesn't listen to any port, so the executor thread dump feature is broken.
      
      This patch makes the driver use the endpointRef stored in BlockManagerMasterEndpoint to send the thread dump request to fix it.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #9976 from zsxwing/executor-thread-dump.
      0c1e72e7
    • muxator's avatar
      doc typo: "classificaion" -> "classification" · 4376b5be
      muxator authored
      Author: muxator <muxator@users.noreply.github.com>
      
      Closes #10008 from muxator/patch-1.
      4376b5be
    • Reynold Xin's avatar
      [SPARK-11973][SQL] Improve optimizer code readability. · de28e4d4
      Reynold Xin authored
      This is a followup for https://github.com/apache/spark/pull/9959.
      
      I added more documentation and rewrote some monadic code into simpler ifs.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #9995 from rxin/SPARK-11973.
      de28e4d4
    • Yin Huai's avatar
      [SPARK-11998][SQL][TEST-HADOOP2.0] When downloading Hadoop artifacts from... · ad765623
      Yin Huai authored
      [SPARK-11998][SQL][TEST-HADOOP2.0] When downloading Hadoop artifacts from maven, we need to try to download the version that is used by Spark
      
      If we need to download Hive/Hadoop artifacts, try to download a Hadoop version that matches the one used by Spark. If the Hadoop artifact cannot be resolved (e.g. the Hadoop version is a vendor-specific one like 2.0.0-cdh4.1.1), we will use Hadoop 2.4.0 (the version we used to hard-code as the Hadoop to download from Maven) and we will not share Hadoop classes.
      
      I tested this matching logic on my laptop with the following configurations (these are the ones used by our builds). All tests pass.
      ```
      build/sbt -Phadoop-1 -Dhadoop.version=1.2.1 -Pkinesis-asl -Phive-thriftserver -Phive
      build/sbt -Phadoop-1 -Dhadoop.version=2.0.0-mr1-cdh4.1.1 -Pkinesis-asl -Phive-thriftserver -Phive
      build/sbt -Pyarn -Phadoop-2.2 -Pkinesis-asl -Phive-thriftserver -Phive
      build/sbt -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -Pkinesis-asl -Phive-thriftserver -Phive
      ```
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #9979 from yhuai/versionsSuite.
      ad765623
    • Dilip Biswal's avatar
      [SPARK-11863][SQL] Unable to resolve order by if it contains mixture of aliases and real columns · bc16a675
      Dilip Biswal authored
      This is based on https://github.com/apache/spark/pull/9844, with some bug fixes and cleanup.
      
      The problem is that a normal operator should be resolved based on its child, but the `Sort` operator can also be resolved based on its grandchild. So we have 3 rules that can resolve `Sort`: `ResolveReferences`, `ResolveSortReferences` (if the grandchild is `Project`) and `ResolveAggregateFunctions` (if the grandchild is `Aggregate`).
      For example, in `select c1 as a , c2 as b from tab group by c1, c2 order by a, c2`, we need to resolve `a` and `c2` for `Sort`. First, `a` is resolved in `ResolveReferences` based on the child; then, when we reach `ResolveAggregateFunctions`, we try to resolve both `a` and `c2` based on the grandchild, but fail because `a` is not a legal aggregate expression.
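
      A minimal reproduction sketch of the failing case (assuming a `SQLContext` named `sqlContext` and a registered table `tab` with columns `c1` and `c2`; the names are illustrative):
      ```scala
      // The ORDER BY mixes an alias (`a`) with a real column (`c2`); before this fix
      // the analyzer could not resolve both at once for the Sort above the Aggregate.
      val resolved = sqlContext.sql(
        "SELECT c1 AS a, c2 AS b FROM tab GROUP BY c1, c2 ORDER BY a, c2")
      resolved.show()
      ```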
      
      Whoever merges this PR, please give the credit to dilipbiswal
      
      Author: Dilip Biswal <dbiswal@us.ibm.com>
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #9961 from cloud-fan/sort.
      bc16a675
    • Marcelo Vanzin's avatar
      [SPARK-12005][SQL] Work around VerifyError in HyperLogLogPlusPlus. · 001f0528
      Marcelo Vanzin authored
      Just move the code around a bit; that seems to make the JVM happy.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #9985 from vanzin/SPARK-12005.
      001f0528
    • Davies Liu's avatar
      [SPARK-11973] [SQL] push filter through aggregation with alias and literals · 27d69a05
      Davies Liu authored
      Currently, filters can't be pushed through an aggregation with aliases or literals; this patch fixes that.
      
      After this patch, the running time of TPC-DS query 4 goes down from 141 seconds to 13 seconds (a 10x improvement).
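
      As a hedged illustration (assumes a DataFrame `sales` with columns `year` and `amount`, plus `import sqlContext.implicits._` for the `$` syntax; all names are made up), a filter on an aliased grouping column can now be pushed below the aggregate:
      ```scala
      import org.apache.spark.sql.functions.sum

      // Sketch only: after this patch the predicate `y = 2015` can be rewritten as
      // `year = 2015` and pushed beneath the Aggregate, shrinking the grouped input.
      val aggregated = sales
        .groupBy($"year".as("y"))
        .agg(sum($"amount").as("total"))
        .where($"y" === 2015)
      aggregated.explain(true)
      ```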
      
      cc nongli  yhuai
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #9959 from davies/push_filter2.
      27d69a05
    • Shixiong Zhu's avatar
      [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task · d3ef6933
      Shixiong Zhu authored
      In the previous code, `newDaemonCachedThreadPool` used a `SynchronousQueue`, which is wrong: a `SynchronousQueue` has no capacity and cannot cache any task. This patch switches to a `LinkedBlockingQueue`, along with other fixes, to make sure `newDaemonCachedThreadPool` uses at most `maxThreadNumber` threads and, beyond that, queues tasks in the `LinkedBlockingQueue`.
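
      A minimal sketch of the queuing behaviour this fix is after, using only `java.util.concurrent` (the parameter names are illustrative, and the daemon thread factory Spark installs is omitted):
      ```scala
      import java.util.concurrent.{LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}

      def cachedPool(maxThreadNumber: Int, keepAliveSeconds: Long = 60L): ThreadPoolExecutor = {
        val pool = new ThreadPoolExecutor(
          maxThreadNumber,                    // core size = max, so extra tasks are queued, not rejected
          maxThreadNumber,                    // maximum size (effectively unused with an unbounded queue)
          keepAliveSeconds, TimeUnit.SECONDS,
          new LinkedBlockingQueue[Runnable])  // unlike SynchronousQueue, this actually buffers tasks
        pool.allowCoreThreadTimeOut(true)     // let idle threads die so it still behaves like a cached pool
        pool
      }
      ```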
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #9978 from zsxwing/cached-threadpool.
      d3ef6933
    • gatorsmile's avatar
      [SPARK-11980][SPARK-10621][SQL] Fix json_tuple and add test cases for · 068b6438
      gatorsmile authored
      Added Python test cases for the functions `isnan`, `isnull`, `nanvl` and `json_tuple`.
      
      Fixed a bug in the function `json_tuple`.
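
      For reference, a hedged Scala usage sketch of `json_tuple` (assumes a DataFrame `df` with a JSON string column `payload` and `import sqlContext.implicits._` for the `$` syntax; the column names are illustrative):
      ```scala
      import org.apache.spark.sql.functions.json_tuple

      // Extract the "name" and "age" fields from each JSON string in `payload`.
      val extracted = df.select(json_tuple($"payload", "name", "age"))
      extracted.show()
      ```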
      
      rxin, could you help me review my changes? Please let me know if anything is missing.
      
      Thank you! Have a good Thanksgiving day!
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #9977 from gatorsmile/json_tuple.
      068b6438
  2. Nov 25, 2015
    • Davies Liu's avatar
      [SPARK-12003] [SQL] remove the prefix for name after expanded star · d1930ec0
      Davies Liu authored
      Right now, the expanded star will include the name of the expression as a prefix for the column; that's no better than not expanding it, so we should not have the prefix.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #9984 from davies/expand_star.
      d1930ec0
    • Carson Wang's avatar
      [SPARK-11206] Support SQL UI on the history server · cc243a07
      Carson Wang authored
      On the live web UI, there is a SQL tab which provides valuable information for the SQL query. But once the workload is finished, we won't see the SQL tab on the history server. It would be helpful to support the SQL UI on the history server so we can analyze a query even after its execution.
      
      To support SQL UI on the history server:
      1. I added an `onOtherEvent` method to the `SparkListener` trait and posted all SQL-related events to the same event bus (a minimal listener sketch follows this list).
      2. Two SQL events `SparkListenerSQLExecutionStart` and `SparkListenerSQLExecutionEnd` are defined in the sql module.
      3. The new SQL events are written to event log using Jackson.
      4. A new trait `SparkHistoryListenerFactory` is added to allow the history server to feed events to the SQL history listener. The SQL implementation is loaded at runtime using `java.util.ServiceLoader`.
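
      A minimal sketch of the new extension point (assumes a Spark version where `onOtherEvent` exists; the class name is illustrative):
      ```scala
      import org.apache.spark.scheduler.{SparkListener, SparkListenerEvent}

      // Receives events that have no dedicated SparkListener callback, which is how the
      // SQL history listener can pick up SparkListenerSQLExecutionStart/End from the log.
      class OtherEventLogger extends SparkListener {
        override def onOtherEvent(event: SparkListenerEvent): Unit = {
          println(s"Received non-core event: ${event.getClass.getName}")
        }
      }
      ```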
      
      Author: Carson Wang <carson.wang@intel.com>
      
      Closes #9297 from carsonwang/SqlHistoryUI.
      cc243a07
    • Daoyuan Wang's avatar
      [SPARK-11983][SQL] remove all unused codegen fallback trait · 21e56064
      Daoyuan Wang authored
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #9966 from adrian-wang/removeFallback.
      21e56064
    • Marcelo Vanzin's avatar
      [SPARK-11866][NETWORK][CORE] Make sure timed out RPCs are cleaned up. · 4e81783e
      Marcelo Vanzin authored
      This change does a couple of different things to make sure that the RpcEnv-level
      code and the network library agree about the status of outstanding RPCs.
      
      For RPCs that do not expect a reply ("RpcEnv.send"), support for one way
      messages (hello CORBA!) was added to the network layer. This is a
      "fire and forget" message that does not require any state to be kept
      by the TransportClient; as a result, the RpcEnv 'Ack' message is not needed
      anymore.
      
      For RPCs that do expect a reply ("RpcEnv.ask"), the network library now
      returns the internal RPC id; if the RpcEnv layer decides to time out the
      RPC before the network layer does, it now asks the TransportClient to
      forget about the RPC, so that if the network-level timeout occurs, the
      client is not killed.
      
      As part of implementing the above, I cleaned up some of the code in the
      netty rpc backend, removing types that were not necessary and factoring
      out some common code. Of interest is a slight change in the exceptions
      when posting messages to a stopped RpcEnv; that's mostly to avoid nasty
      error messages from the local-cluster backend when shutting down, which
      pollutes the terminal output.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #9917 from vanzin/SPARK-11866.
      4e81783e
    • Shixiong Zhu's avatar
      [SPARK-11935][PYSPARK] Send the Python exceptions in TransformFunction and... · d29e2ef4
      Shixiong Zhu authored
      [SPARK-11935][PYSPARK] Send the Python exceptions in TransformFunction and TransformFunctionSerializer to Java
      
      The Python exception traceback in TransformFunction and TransformFunctionSerializer is not sent back to Java. Py4j just throws a very general exception, which is hard to debug.
      
      This PR adds a `getFailure` method to get the failure message on the Java side.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #9922 from zsxwing/SPARK-11935.
      d29e2ef4
    • jerryshao's avatar
      [SPARK-10558][CORE] Fix wrong executor state in Master · 88875d94
      jerryshao authored
      `ExecutorAdded` should only be sent to `AppClient` when the worker reports back the executor state as `LOADING`; otherwise, because of a concurrency issue, `AppClient` may receive `ExecutorAdded` first and then `ExecutorStateUpdated` with the `LOADING` state.
      
      Also, the Master will change the executor state from `LAUNCHING` to `RUNNING` (when `AppClient` reports the state as `RUNNING`) and then to `LOADING` (when the worker reports the state as `LOADING`); it should be `LAUNCHING` -> `LOADING` -> `RUNNING`.
      
      The state is also shown incorrectly in the Master UI; the executor state should be `RUNNING` rather than `LOADING`:
      
      ![screen shot 2015-09-11 at 2 30 28 pm](https://cloud.githubusercontent.com/assets/850797/9809254/3155d840-5899-11e5-8cdf-ad06fef75762.png)
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #8714 from jerryshao/SPARK-10558.
      88875d94
    • wangt's avatar
      [SPARK-11880][WINDOWS][SPARK SUBMIT] bin/load-spark-env.cmd loads... · 9f3e59a1
      wangt authored
      [SPARK-11880][WINDOWS][SPARK SUBMIT] bin/load-spark-env.cmd loads spark-env.cmd from wrong directory
      
      * On Windows, `bin/load-spark-env.cmd` tries to load `spark-env.cmd` from `%~dp0..\..\conf`, but `~dp0` points to `bin` and `conf` is only one level up.
      * Updated `bin/load-spark-env.cmd` to load `spark-env.cmd` from `%~dp0..\conf` instead of `%~dp0..\..\conf`.
      
      Author: wangt <wangtao.upc@gmail.com>
      
      Closes #9863 from toddwan/master.
      9f3e59a1
    • Alex Bozarth's avatar
      [SPARK-10864][WEB UI] app name is hidden if window is resized · 83653ac5
      Alex Bozarth authored
      Currently the Web UI navbar has a minimum width of 1200px, so if a window is resized smaller than that, the app name goes off screen. The 1200px width seems to have been chosen since it fits the longest example app name without wrapping.
      
      To work with smaller window widths I made the tabs wrap since it looked better than wrapping the app name. This is a distinct change in how the navbar looks and I'm not sure if it's what we actually want to do.
      
      Other notes:
      - min-width set to 600px to keep the tabs from wrapping individually (will need to be adjusted if tabs are added)
      - app name will also wrap (making three levels) if a really really long app name is used
      
      Author: Alex Bozarth <ajbozart@us.ibm.com>
      
      Closes #9874 from ajbozarth/spark10864.
      83653ac5
    • Jeff Zhang's avatar
      [DOCUMENTATION] Fix minor doc error · 67b67320
      Jeff Zhang authored
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #9956 from zjffdu/dev_typo.
      67b67320
    • Yu ISHIKAWA's avatar
      [MINOR] Remove unnecessary spaces in `include_example.rb` · 0dee44a6
      Yu ISHIKAWA authored
      Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
      
      Closes #9960 from yu-iskw/minor-remove-spaces.
      0dee44a6
    • Davies Liu's avatar
      [SPARK-11969] [SQL] [PYSPARK] visualization of SQL query for pyspark · dc1d324f
      Davies Liu authored
      Currently, we do not have visualization of SQL queries from Python; this PR fixes that.
      
      cc zsxwing
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #9949 from davies/pyspark_sql_ui.
      dc1d324f
    • Zhongshuai Pei's avatar
      [SPARK-11974][CORE] Not all the temp dirs had been deleted when the JVM exits · 6b781576
      Zhongshuai Pei authored
      Deleting the temp dirs like this:
      
      ```
      
      scala> import scala.collection.mutable
      import scala.collection.mutable
      
      scala> val a = mutable.Set(1,2,3,4,7,0,8,98,9)
      a: scala.collection.mutable.Set[Int] = Set(0, 9, 1, 2, 3, 7, 4, 8, 98)
      
      scala> a.foreach(x => {a.remove(x) })
      
      scala> a.foreach(println(_))
      98
      ```
      
      You may not modify a collection while traversing or iterating over it. This cannot delete all elements of the collection.
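
      A minimal sketch of the safer pattern (illustrative only, not the exact fix): iterate over a snapshot of the collection so the removal does not race with the traversal.
      ```scala
      import scala.collection.mutable

      val dirs = mutable.Set("dir1", "dir2", "dir3")
      // Copy to an array first, then mutate the original set while iterating the copy.
      dirs.toArray.foreach(d => dirs.remove(d))
      assert(dirs.isEmpty)
      ```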
      
      Author: Zhongshuai Pei <peizhongshuai@huawei.com>
      
      Closes #9951 from DoingDone9/Bug_RemainDir.
      6b781576
    • felixcheung's avatar
      [SPARK-11984][SQL][PYTHON] Fix typos in doc for pivot for scala and python · faabdfa2
      felixcheung authored
      Author: felixcheung <felixcheung_m@hotmail.com>
      
      Closes #9967 from felixcheung/pypivotdoc.
      faabdfa2
    • Marcelo Vanzin's avatar
      [SPARK-11956][CORE] Fix a few bugs in network lib-based file transfer. · c1f85fc7
      Marcelo Vanzin authored
      - NettyRpcEnv::openStream() now correctly propagates errors to
        the read side of the pipe.
      - NettyStreamManager now throws if the file being transferred does
        not exist.
      - The network library now correctly handles zero-sized streams.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #9941 from vanzin/SPARK-11956.
      c1f85fc7
    • Mark Hamstra's avatar
      [SPARK-10666][SPARK-6880][CORE] Use properties from ActiveJob associated with a Stage · 0a5aef75
      Mark Hamstra authored
      This issue was addressed in https://github.com/apache/spark/pull/5494, but the fix in that PR, while safe in the sense that it will prevent the SparkContext from shutting down, misses the actual bug.  The intent of `submitMissingTasks` should be understood as "submit the Tasks that are missing for the Stage, and run them as part of the ActiveJob identified by jobId".  Because of a long-standing bug, the `jobId` parameter was never being used.  Instead, we were trying to use the jobId with which the Stage was created -- which may no longer exist as an ActiveJob, hence the crash reported in SPARK-6880.
      
      The correct fix is to use the ActiveJob specified by the supplied jobId parameter, which is guaranteed to exist at the call sites of submitMissingTasks.
      
      This fix should be applied to all maintenance branches, since the bug has existed since 1.0.
      
      kayousterhout pankajarora12
      
      Author: Mark Hamstra <markhamstra@gmail.com>
      Author: Imran Rashid <irashid@cloudera.com>
      
      Closes #6291 from markhamstra/SPARK-6880.
      0a5aef75
    • Jeff Zhang's avatar
      [SPARK-11860][PYSPARK][DOCUMENTATION] Invalid argument specification … · b9b6fbe8
      Jeff Zhang authored
      …for registerFunction [Python]
      
      Straightforward change to the Python doc.
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #9901 from zjffdu/SPARK-11860.
      b9b6fbe8
    • Ashwin Swaroop's avatar
      [SPARK-11686][CORE] Issue WARN when dynamic allocation is disabled due to... · 63850026
      Ashwin Swaroop authored
      [SPARK-11686][CORE] Issue WARN when dynamic allocation is disabled due to spark.dynamicAllocation.enabled and spark.executor.instances both set
      
      Changed the log type to a 'warning' instead of 'info' as required.
      
      Author: Ashwin Swaroop <Ashwin Swaroop>
      
      Closes #9926 from ashwinswaroop/master.
      63850026
    • Reynold Xin's avatar
      [SPARK-11981][SQL] Move implementations of methods back to DataFrame from Queryable · a0f1a118
      Reynold Xin authored
      Also added show methods to Dataset.
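
      A hedged usage sketch of the added methods (assumes a Dataset `ds` built elsewhere, e.g. via `sqlContext.createDataset(...)`):
      ```scala
      ds.show()   // print the first 20 rows in tabular form
      ds.show(5)  // print only the first 5 rows
      ```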
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #9964 from rxin/SPARK-11981.
      a0f1a118
    • gatorsmile's avatar
      [SPARK-11970][SQL] Adding JoinType into JoinWith and support Sample in Dataset API · 2610e061
      gatorsmile authored
      Besides inner join, the other join types may also be useful when users use the joinWith function. Thus, this adds a joinType parameter to the existing joinWith call in the Dataset APIs.
      
      It also provides another joinWith interface for cartesian-join-like functionality.
      
      Please provide your opinions. marmbrus rxin cloud-fan Thank you!
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #9921 from gatorsmile/joinWith.
      2610e061
    • Tathagata Das's avatar
      [SPARK-11979][STREAMING] Empty TrackStateRDD cannot be checkpointed and... · 21698868
      Tathagata Das authored
      [SPARK-11979][STREAMING] Empty TrackStateRDD cannot be checkpointed and recovered from checkpoint file
      
      This solves the following exception, caused when an empty state RDD is checkpointed and recovered. The root cause is that an empty OpenHashMapBasedStateMap cannot be deserialized because its initialCapacity is set to zero.
      ```
      Job aborted due to stage failure: Task 0 in stage 6.0 failed 1 times, most recent failure: Lost task 0.0 in stage 6.0 (TID 20, localhost): java.lang.IllegalArgumentException: requirement failed: Invalid initial capacity
      	at scala.Predef$.require(Predef.scala:233)
      	at org.apache.spark.streaming.util.OpenHashMapBasedStateMap.<init>(StateMap.scala:96)
      	at org.apache.spark.streaming.util.OpenHashMapBasedStateMap.<init>(StateMap.scala:86)
      	at org.apache.spark.streaming.util.OpenHashMapBasedStateMap.readObject(StateMap.scala:291)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:606)
      	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
      	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
      	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
      	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
      	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
      	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
      	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
      	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
      	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
      	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
      	at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:181)
      	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
      	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
      	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
      	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
      	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
      	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
      	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
      	at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
      	at scala.collection.AbstractIterator.to(Iterator.scala:1157)
      	at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
      	at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
      	at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
      	at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
      	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:921)
      	at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:921)
      	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
      	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
      	at org.apache.spark.scheduler.Task.run(Task.scala:88)
      	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      	at java.lang.Thread.run(Thread.java:744)
      ```
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #9958 from tdas/SPARK-11979.
      21698868
  3. Nov 24, 2015
    • Reynold Xin's avatar
      [SPARK-10621][SQL] Consistent naming for functions in SQL, Python, Scala · 151d7c2b
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #9948 from rxin/SPARK-10621.
      151d7c2b
    • Burak Yavuz's avatar
      [STREAMING][FLAKY-TEST] Catch execution context race condition in `FileBasedWriteAheadLog.close()` · a5d98876
      Burak Yavuz authored
      There is a race condition in `FileBasedWriteAheadLog.close()`: if deletes of old log files are in progress, the write-ahead log may close and result in a `RejectedExecutionException`. This is okay and should be handled gracefully.
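
      A minimal, self-contained sketch of handling that kind of race (plain `java.util.concurrent`, not the Spark code itself): if the pool backing the log has already been shut down when a delete is scheduled, the rejection is benign and can be swallowed.
      ```scala
      import java.util.concurrent.{Executors, RejectedExecutionException}

      val pool = Executors.newSingleThreadExecutor()
      pool.shutdown()  // simulates close() having already stopped the executor
      try {
        pool.submit(new Runnable { override def run(): Unit = () })
      } catch {
        case _: RejectedExecutionException => // expected after shutdown; safe to ignore here
      }
      ```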
      
      Example test failures:
      https://amplab.cs.berkeley.edu/jenkins/job/Spark-1.6-SBT/AMPLAB_JENKINS_BUILD_PROFILE=hadoop1.0,label=spark-test/95/testReport/junit/org.apache.spark.streaming.util/BatchedWriteAheadLogWithCloseFileAfterWriteSuite/BatchedWriteAheadLog___clean_old_logs/
      
      The reason the test fails is that `writeAheadLog.close` is called in `afterEach` while there may still be async deletes in flight.
      
      tdas zsxwing
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #9953 from brkyvz/flaky-ss.
      a5d98876
    • Reynold Xin's avatar
      [SPARK-11947][SQL] Mark deprecated methods with "This will be removed in Spark 2.0." · 4d6bbbc0
      Reynold Xin authored
      Also fixed some documentation issues as I came across them.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #9930 from rxin/SPARK-11947.
      4d6bbbc0
    • Reynold Xin's avatar
      [SPARK-11967][SQL] Consistent use of varargs for multiple paths in DataFrameReader · 25bbd3c1
      Reynold Xin authored
      This patch makes it consistent to use varargs in all DataFrameReader methods, including Parquet, JSON, text, and the generic load function.
      
      Also added a few more API tests for the Java API.
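
      A hedged usage sketch (assumes a `SQLContext` named `sqlContext`; the paths are made up):
      ```scala
      // Each reader now takes multiple paths as varargs.
      val parquetDF = sqlContext.read.parquet("/data/events/part1", "/data/events/part2")
      val jsonDF    = sqlContext.read.json("/data/a.json", "/data/b.json")
      val textDF    = sqlContext.read.text("/data/x.txt", "/data/y.txt")
      ```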
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #9945 from rxin/SPARK-11967.
      25bbd3c1
    • gatorsmile's avatar
      [SPARK-11914][SQL] Support coalesce and repartition in Dataset APIs · 238ae51b
      gatorsmile authored
      This PR is to provide two common `coalesce` and `repartition` in Dataset APIs.
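
      A hedged usage sketch (assumes a Dataset `ds` built elsewhere):
      ```scala
      val fewer = ds.coalesce(4)      // shrink to at most 4 partitions without a full shuffle
      val more  = ds.repartition(16)  // shuffle the data into exactly 16 partitions
      ```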
      
      After reading the comments of SPARK-9999, I am unclear about the plan for supporting re-partitioning in the Dataset APIs. Currently, both the RDD and DataFrame APIs give users the flexibility to control the number of partitions.
      
      In most traditional RDBMS, they expose the number of partitions, the partitioning columns, the table partitioning methods to DBAs for performance tuning and storage planning. Normally, these parameters could largely affect the query performance. Since the actual performance depends on the workload types, I think it is almost impossible to automate the discovery of the best partitioning strategy for all the scenarios.
      
      I am wondering whether the Dataset APIs plan to hide these from users. Feel free to reject my PR if it does not match the plan.
      
      Thank you for your answers. marmbrus rxin cloud-fan
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #9899 from gatorsmile/coalesce.
      238ae51b
    • Cheng Lian's avatar
      [SPARK-11783][SQL] Fixes execution Hive client when using remote Hive metastore · c7f95df5
      Cheng Lian authored
      When using remote Hive metastore, `hive.metastore.uris` is set to the metastore URI.  However, it overrides `javax.jdo.option.ConnectionURL` unexpectedly, thus the execution Hive client connects to the actual remote Hive metastore instead of the Derby metastore created in the temporary directory.  Cleaning this configuration for the execution Hive client fixes this issue.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #9895 from liancheng/spark-11783.clean-remote-metastore-config.
      c7f95df5
    • Davies Liu's avatar
      [SPARK-11805] free the array in UnsafeExternalSorter during spilling · 58d9b260
      Davies Liu authored
      After calling spill() on SortedIterator, the array inside InMemorySorter is no longer needed; it should be freed during spilling. This could help join multiple tables with limited memory.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #9793 from davies/free_array.
      58d9b260
    • Marcelo Vanzin's avatar
      [SPARK-11929][CORE] Make the repl log4j configuration override the root logger. · e6dd2374
      Marcelo Vanzin authored
      In the default Spark distribution, there are currently two separate
      log4j config files, with different default values for the root logger,
      so that when running the shell you have a different default log level.
      This makes the shell more usable, since the logs don't overwhelm the
      output.
      
      But if you install a custom log4j.properties, you lose that, because
      then it's going to be used no matter whether you're running a regular
      app or the shell.
      
      With this change, the overriding of the log level is done differently: the
      log level of the repl's main class (org.apache.spark.repl.Main) is used
      to define the root logger's level when running the shell, defaulting
      to WARN if it's not set explicitly.
      
      On a somewhat related change, the shell output about the "sc" variable
      was changed a bit to contain a little more useful information about
      the application, since when the root logger's log level is WARN, that
      information is never shown to the user.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #9816 from vanzin/shell-logging.
      e6dd2374
    • Reynold Xin's avatar
      [SPARK-11946][SQL] Audit pivot API for 1.6. · f3152722
      Reynold Xin authored
      Currently pivot's signature looks like
      
      ```scala
      @scala.annotation.varargs
      def pivot(pivotColumn: Column, values: Column*): GroupedData
      
      @scala.annotation.varargs
      def pivot(pivotColumn: String, values: Any*): GroupedData
      ```
      
      I think we can remove the one that takes "Column" types, since callers should always be passing in literals. It'd also be more clear if the values are not varargs, but rather Seq or java.util.List.
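
      A hedged sketch of what a call would look like after the change (assumes a DataFrame `df` with columns "year", "course" and "earnings"; the values become an explicit Seq instead of varargs):
      ```scala
      val pivoted = df.groupBy("year")
        .pivot("course", Seq("dotNET", "Java"))
        .sum("earnings")
      ```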
      
      I also made similar changes for Python.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #9929 from rxin/SPARK-11946.
      f3152722