  1. Nov 30, 2015
  2. Nov 29, 2015
  3. Nov 28, 2015
• [SPARK-9319][SPARKR] Add support for setting column names, types · c793d2d9
      felixcheung authored
Add support for colnames, colnames<-, coltypes<-.
Also added tests for names and names<-, which had no tests previously.

I merged with PR 8984 (coltypes), clicked the wrong thing, and screwed up the PR. Recreated it here. Was #9218
      
      shivaram sun-rui
      
      Author: felixcheung <felixcheung_m@hotmail.com>
      
      Closes #9654 from felixcheung/colnamescoltypes.
• [SPARK-12029][SPARKR] Improve column functions signature, param check, tests, fix doc and add examples · 28e46ab4
felixcheung authored
      
      shivaram sun-rui
      
      Author: felixcheung <felixcheung_m@hotmail.com>
      
      Closes #10019 from felixcheung/rfunctionsdoc.
• [SPARK-12028] [SQL] get_json_object returns an incorrect result when the value is null literals · 149cd692
      gatorsmile authored
      When calling `get_json_object` for the following two cases, both results are `"null"`:
      
```scala
    // case 1: the JSON value is a null literal
    val tuple: Seq[(String, String)] = ("5", """{"f1": null}""") :: Nil
    val df: DataFrame = tuple.toDF("key", "jstring")
    val res = df.select(functions.get_json_object($"jstring", "$.f1")).collect()
```
```scala
    // case 2: the JSON value is the string "null"
    val tuple2: Seq[(String, String)] = ("5", """{"f1": "null"}""") :: Nil
    val df2: DataFrame = tuple2.toDF("key", "jstring")
    val res3 = df2.select(functions.get_json_object($"jstring", "$.f1")).collect()
```
      
      Fixed the problem and also added a test case.
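After the fix, the two cases should be distinguishable. A minimal spark-shell sketch of the expected behavior (keys and values here are assumptions, with implicits in scope):

```scala
import org.apache.spark.sql.functions.get_json_object

val df = Seq(
  ("5", """{"f1": null}"""),   // JSON null literal
  ("6", """{"f1": "null"}""")  // the string "null"
).toDF("key", "jstring")

df.select(get_json_object($"jstring", "$.f1")).collect()
// expected: Array(Row(null), Row("null")) -- SQL NULL vs. the literal string
```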
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #10018 from gatorsmile/get_json_object.
  4. Nov 27, 2015
  5. Nov 26, 2015
• [SPARK-11997] [SQL] NPE when save a DataFrame as parquet and partitioned by long column · a374e20b
      Dilip Biswal authored
Check for partition column nullability while building the partition spec.
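A hedged sketch of the shape of the failing case from the title (the path, column names, and the `sqlContext` in scope are all illustrative):

```scala
// Before this fix, saving a DataFrame partitioned by a long column
// could hit an NPE while the partition spec was being built.
sqlContext.range(0, 10)
  .selectExpr("id", "id % 3 as part")  // `part` is a LongType partition column
  .write.partitionBy("part").parquet("/tmp/spark-11997-example")
```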
      
      Author: Dilip Biswal <dbiswal@us.ibm.com>
      
      Closes #10001 from dilipbiswal/spark-11997.
• Fix style violation for b63938a8 · 10e315c2
      Reynold Xin authored
• [SPARK-11991] fixes · 5eaed4e4
      Jeremy Derr authored
If `--private-ips` is required but not provided, spark_ec2.py may behave inappropriately, including attempting to ssh to localhost in an attempt to verify ssh connectivity to the cluster.
      
      This fixes that behavior by raising a `UsageError` exception if `get_dns_name` is unable to determine a hostname as a result.
      
      Author: Jeremy Derr <jcderr@radius.com>
      
      Closes #9975 from jcderr/SPARK-11991/ec_spark.py_hostname_check.
• [SPARK-11778][SQL] add regression test · 4d4cbc03
      Huaxin Gao authored
      Fix regression test for SPARK-11778.
       marmbrus
      Could you please take a look?
      Thank you very much!!
      
      Author: Huaxin Gao <huaxing@oc0558782468.ibm.com>
      
      Closes #9890 from huaxingao/spark-11778-regression-test.
• [SPARK-11917][PYSPARK] Add SQLContext#dropTempTable to PySpark · d8220885
      Jeff Zhang authored
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #9903 from zjffdu/SPARK-11917.
• [SPARK-11881][SQL] Fix for postgresql fetchsize > 0 · b63938a8
      mariusvniekerk authored
      Reference: https://jdbc.postgresql.org/documentation/head/query.html#query-with-cursor
      In order for PostgreSQL to honor the fetchSize non-zero setting, its Connection.autoCommit needs to be set to false. Otherwise, it will just quietly ignore the fetchSize setting.
      
This adds a new side-effecting, dialect-specific beforeFetch method that fires before a select query is run.
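For reference, the underlying PostgreSQL JDBC behavior can be shown with plain JDBC (the URL, credentials, and table name below are placeholders):

```scala
import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:postgresql://localhost/db", "user", "pass")
conn.setAutoCommit(false)   // required: otherwise PostgreSQL silently ignores fetchSize
val stmt = conn.createStatement()
stmt.setFetchSize(50)       // rows now stream through a server-side cursor, 50 at a time
val rs = stmt.executeQuery("SELECT * FROM big_table")
```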
      
      Author: mariusvniekerk <marius.v.niekerk@gmail.com>
      
      Closes #9861 from mariusvniekerk/SPARK-11881.
• [SPARK-12011][SQL] Stddev/Variance etc should support columnName as arguments · 6f6bb0e8
      Yanbo Liang authored
      Spark SQL aggregate function:
```
      stddev
      stddev_pop
      stddev_samp
      variance
      var_pop
      var_samp
      skewness
      kurtosis
      collect_list
      collect_set
      ```
should support `columnName` as an argument, like the other aggregate functions (max/min/count/sum).
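A sketch of the two call forms, assuming a DataFrame `df` with a numeric column `c` and spark-shell implicits in scope:

```scala
import org.apache.spark.sql.functions._

df.agg(stddev($"c"), variance($"c"))  // Column arguments (already supported)
df.agg(stddev("c"), variance("c"))    // columnName arguments added by this change
```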
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #9994 from yanboliang/SPARK-12011.
• [SPARK-11996][CORE] Make the executor thread dump work again · 0c1e72e7
      Shixiong Zhu authored
      In the previous implementation, the driver needs to know the executor listening address to send the thread dump request. However, in Netty RPC, the executor doesn't listen to any port, so the executor thread dump feature is broken.
      
      This patch makes the driver use the endpointRef stored in BlockManagerMasterEndpoint to send the thread dump request to fix it.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #9976 from zsxwing/executor-thread-dump.
• doc typo: "classificaion" -> "classification" · 4376b5be
      muxator authored
      Author: muxator <muxator@users.noreply.github.com>
      
      Closes #10008 from muxator/patch-1.
• [SPARK-11973][SQL] Improve optimizer code readability. · de28e4d4
      Reynold Xin authored
      This is a followup for https://github.com/apache/spark/pull/9959.
      
      I added more documentation and rewrote some monadic code into simpler ifs.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #9995 from rxin/SPARK-11973.
• [SPARK-11998][SQL][TEST-HADOOP2.0] When downloading Hadoop artifacts from maven, we need to try to download the version that is used by Spark · ad765623
Yin Huai authored
      
If we need to download Hive/Hadoop artifacts, try to download a Hadoop version that matches the Hadoop used by Spark. If the Hadoop artifact cannot be resolved (e.g. the Hadoop version is a vendor-specific version like 2.0.0-cdh4.1.1), we fall back to Hadoop 2.4.0 (the version we used to hard-code as the one downloaded from maven) and we will not share Hadoop classes.
      
I tested this change on my laptop with the following confs (these confs are used by our builds). All tests passed.
      ```
      build/sbt -Phadoop-1 -Dhadoop.version=1.2.1 -Pkinesis-asl -Phive-thriftserver -Phive
      build/sbt -Phadoop-1 -Dhadoop.version=2.0.0-mr1-cdh4.1.1 -Pkinesis-asl -Phive-thriftserver -Phive
      build/sbt -Pyarn -Phadoop-2.2 -Pkinesis-asl -Phive-thriftserver -Phive
      build/sbt -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -Pkinesis-asl -Phive-thriftserver -Phive
      ```
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #9979 from yhuai/versionsSuite.
• [SPARK-11863][SQL] Unable to resolve order by if it contains mixture of aliases and real columns · bc16a675
      Dilip Biswal authored
This is based on https://github.com/apache/spark/pull/9844, with some bug fixes and cleanup.
      
The problem is that a normal operator should be resolved based on its child, but the `Sort` operator can also be resolved based on its grandchild. So we have 3 rules that can resolve `Sort`: `ResolveReferences`, `ResolveSortReferences` (if the grandchild is `Project`) and `ResolveAggregateFunctions` (if the grandchild is `Aggregate`).
For example, in `select c1 as a, c2 as b from tab group by c1, c2 order by a, c2`, we need to resolve `a` and `c2` for `Sort`. First, `a` is resolved in `ResolveReferences` based on its child; then, when we reach `ResolveAggregateFunctions`, we try to resolve both `a` and `c2` based on the grandchild, but fail because `a` is not a legal aggregate expression.
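The failing shape above as a runnable sketch (table and column names are assumptions, registered here only for illustration, implicits in scope):

```scala
Seq((1, 2), (3, 4)).toDF("c1", "c2").registerTempTable("tab")
// mixes an alias (`a`) and a real column (`c2`) in ORDER BY:
sqlContext.sql("select c1 as a, c2 as b from tab group by c1, c2 order by a, c2")
```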
      
Whoever merges this PR, please give the credit to dilipbiswal.
      
      Author: Dilip Biswal <dbiswal@us.ibm.com>
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #9961 from cloud-fan/sort.
• [SPARK-12005][SQL] Work around VerifyError in HyperLogLogPlusPlus. · 001f0528
      Marcelo Vanzin authored
      Just move the code around a bit; that seems to make the JVM happy.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #9985 from vanzin/SPARK-12005.
• [SPARK-11973] [SQL] push filter through aggregation with alias and literals · 27d69a05
      Davies Liu authored
Currently, a filter can't be pushed through an aggregation with aliases or literals; this patch fixes that.

After this patch, TPC-DS query 4 goes down to 13 seconds from 141 seconds (a 10x improvement).
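A sketch of a now-pushable pattern, assuming a DataFrame `df` with columns `key` and `value` (names are illustrative):

```scala
import org.apache.spark.sql.functions._

val agg = df.groupBy($"key").agg(sum($"value").as("total"), lit(1).as("one"))
agg.filter($"key" === 1)  // the predicate can now be pushed below the aggregation
```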
      
      cc nongli  yhuai
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #9959 from davies/push_filter2.
• [SPARK-11999][CORE] Fix the issue that ThreadUtils.newDaemonCachedThreadPool doesn't cache any task · d3ef6933
      Shixiong Zhu authored
In the previous code, `newDaemonCachedThreadPool` used `SynchronousQueue`, which is wrong: `SynchronousQueue` is a zero-capacity queue that cannot hold any task. This patch uses `LinkedBlockingQueue` to fix it, along with other fixes to make sure `newDaemonCachedThreadPool` uses at most `maxThreadNumber` threads and, beyond that, queues tasks in the `LinkedBlockingQueue`.
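A minimal sketch of that queue swap (illustrative, not Spark's exact code): with an unbounded `LinkedBlockingQueue`, a `ThreadPoolExecutor` never grows past its core size, so core and max are set equal and idle threads are allowed to time out.

```scala
import java.util.concurrent.{LinkedBlockingQueue, ThreadPoolExecutor, TimeUnit}

def newDaemonCachedThreadPool(maxThreadNumber: Int): ThreadPoolExecutor = {
  val pool = new ThreadPoolExecutor(
    maxThreadNumber, maxThreadNumber,     // core == max, since the queue is unbounded
    60L, TimeUnit.SECONDS,
    new LinkedBlockingQueue[Runnable]())  // queues excess tasks instead of rejecting them
  pool.allowCoreThreadTimeOut(true)       // let idle "cached" threads die off
  pool
}
```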
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #9978 from zsxwing/cached-threadpool.
• [SPARK-11980][SPARK-10621][SQL] Fix json_tuple and add test cases for · 068b6438
      gatorsmile authored
Added Python test cases for the functions `isnan`, `isnull`, `nanvl` and `json_tuple`.

Fixed a bug in the function `json_tuple`.

rxin, could you help me review my changes? Please let me know if anything is missing.
      
      Thank you! Have a good Thanksgiving day!
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #9977 from gatorsmile/json_tuple.
  6. Nov 25, 2015
• [SPARK-12003] [SQL] remove the prefix for name after expanded star · d1930ec0
      Davies Liu authored
Right now, the expanded star includes the name of the expression as a prefix for each column, which is no better than not expanding at all; we should not have the prefix.
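A sketch of the behavior change when expanding a star over a named struct (names assumed, implicits in scope):

```scala
import org.apache.spark.sql.functions.struct

val nested = df.select(struct($"a", $"b").as("s"))
nested.select($"s.*").columns
// reportedly before: Array("s.a", "s.b") -- prefixed with the expression name
// after the fix:     Array("a", "b")
```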
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #9984 from davies/expand_star.
• [SPARK-11206] Support SQL UI on the history server · cc243a07
      Carson Wang authored
On the live web UI, there is a SQL tab which provides valuable information for the SQL query. But once the workload is finished, we won't see the SQL tab on the history server. It would be helpful to support the SQL UI on the history server so we can analyze a workload even after its execution.
      
      To support SQL UI on the history server:
1. I added an `onOtherEvent` method to the `SparkListener` trait and post all SQL related events to the same event bus (see the sketch after this list).
2. Two SQL events `SparkListenerSQLExecutionStart` and `SparkListenerSQLExecutionEnd` are defined in the sql module.
3. The new SQL events are written to event log using Jackson.
4. A new trait `SparkHistoryListenerFactory` is added to allow the history server to feed events to the SQL history listener. The SQL implementation is loaded at runtime using `java.util.ServiceLoader`.
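A sketch of the `onOtherEvent` hook from step 1 (the pattern only, not Spark's actual SQL history listener; the class name is an assumption):

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerEvent}

class SqlEventLogger extends SparkListener {
  override def onOtherEvent(event: SparkListenerEvent): Unit = event match {
    case e if e.getClass.getSimpleName.startsWith("SparkListenerSQLExecution") =>
      println(s"SQL event: $e")  // a SQL execution start/end event
    case _ => // not a SQL event; ignore
  }
}
```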
      
      Author: Carson Wang <carson.wang@intel.com>
      
      Closes #9297 from carsonwang/SqlHistoryUI.
• [SPARK-11983][SQL] remove all unused codegen fallback trait · 21e56064
      Daoyuan Wang authored
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #9966 from adrian-wang/removeFallback.
• [SPARK-11866][NETWORK][CORE] Make sure timed out RPCs are cleaned up. · 4e81783e
      Marcelo Vanzin authored
      This change does a couple of different things to make sure that the RpcEnv-level
      code and the network library agree about the status of outstanding RPCs.
      
      For RPCs that do not expect a reply ("RpcEnv.send"), support for one way
      messages (hello CORBA!) was added to the network layer. This is a
      "fire and forget" message that does not require any state to be kept
      by the TransportClient; as a result, the RpcEnv 'Ack' message is not needed
      anymore.
      
      For RPCs that do expect a reply ("RpcEnv.ask"), the network library now
      returns the internal RPC id; if the RpcEnv layer decides to time out the
      RPC before the network layer does, it now asks the TransportClient to
      forget about the RPC, so that if the network-level timeout occurs, the
      client is not killed.
      
As part of implementing the above, I cleaned up some of the code in the
netty rpc backend, removing types that were not necessary and factoring
out some common code. Of interest is a slight change in the exceptions
when posting messages to a stopped RpcEnv; that's mostly to avoid nasty
error messages from the local-cluster backend when shutting down, which
pollute the terminal output.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #9917 from vanzin/SPARK-11866.
• [SPARK-11935][PYSPARK] Send the Python exceptions in TransformFunction and TransformFunctionSerializer to Java · d29e2ef4
Shixiong Zhu authored
      
The Python exception traceback in TransformFunction and TransformFunctionSerializer is not sent back to Java. Py4j just throws a very general exception, which is hard to debug.

This PR adds a `getFailure` method to get the failure message on the Java side.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #9922 from zsxwing/SPARK-11935.
• [SPARK-10558][CORE] Fix wrong executor state in Master · 88875d94
      jerryshao authored
`ExecutorAdded` should only be sent to `AppClient` when the worker reports back the executor state as `LOADING`; otherwise, because of a concurrency issue, `AppClient` may receive `ExecutorAdded` first, and then `ExecutorStateUpdated` with the `LOADING` state.

Also, Master changes the executor state from `LAUNCHING` to `RUNNING` (when `AppClient` reports back the state as `RUNNING`), then to `LOADING` (when the worker reports back the state as `LOADING`); it should be `LAUNCHING` -> `LOADING` -> `RUNNING`.

It is also shown wrongly in the master UI; the state of the executor should be `RUNNING` rather than `LOADING`:
      
      ![screen shot 2015-09-11 at 2 30 28 pm](https://cloud.githubusercontent.com/assets/850797/9809254/3155d840-5899-11e5-8cdf-ad06fef75762.png)
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #8714 from jerryshao/SPARK-10558.
• [SPARK-11880][WINDOWS][SPARK SUBMIT] bin/load-spark-env.cmd loads spark-env.cmd from wrong directory · 9f3e59a1
wangt authored
      
* On Windows, `bin/load-spark-env.cmd` tries to load `spark-env.cmd` from `%~dp0..\..\conf`, but `~dp0` points to `bin`, so `conf` is only one level up.
* Updated `bin/load-spark-env.cmd` to load `spark-env.cmd` from `%~dp0..\conf` instead of `%~dp0..\..\conf`.
      
      Author: wangt <wangtao.upc@gmail.com>
      
      Closes #9863 from toddwan/master.
• [SPARK-10864][WEB UI] app name is hidden if window is resized · 83653ac5
      Alex Bozarth authored
Currently the Web UI navbar has a minimum width of 1200px, so if a window is resized narrower than that, the app name goes off screen. The 1200px width seems to have been chosen since it fits the longest example app name without wrapping.
      
      To work with smaller window widths I made the tabs wrap since it looked better than wrapping the app name. This is a distinct change in how the navbar looks and I'm not sure if it's what we actually want to do.
      
      Other notes:
      - min-width set to 600px to keep the tabs from wrapping individually (will need to be adjusted if tabs are added)
      - app name will also wrap (making three levels) if a really really long app name is used
      
      Author: Alex Bozarth <ajbozart@us.ibm.com>
      
      Closes #9874 from ajbozarth/spark10864.
• [DOCUMENTATION] Fix minor doc error · 67b67320
      Jeff Zhang authored
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #9956 from zjffdu/dev_typo.
• [MINOR] Remove unnecessary spaces in `include_example.rb` · 0dee44a6
      Yu ISHIKAWA authored
      Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
      
      Closes #9960 from yu-iskw/minor-remove-spaces.
• [SPARK-11969] [SQL] [PYSPARK] visualization of SQL query for pyspark · dc1d324f
      Davies Liu authored
Currently, we do not have visualization for SQL queries from Python; this PR fixes that.
      
      cc zsxwing
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #9949 from davies/pyspark_sql_ui.
• [SPARK-11974][CORE] Not all the temp dirs had been deleted when the JVM exits · 6b781576
      Zhongshuai Pei authored
The temp dirs were deleted like this:

```
      scala> import scala.collection.mutable
      import scala.collection.mutable
      
      scala> val a = mutable.Set(1,2,3,4,7,0,8,98,9)
      a: scala.collection.mutable.Set[Int] = Set(0, 9, 1, 2, 3, 7, 4, 8, 98)
      
      scala> a.foreach(x => {a.remove(x) })
      
      scala> a.foreach(println(_))
      98
      ```
      
You may not modify a collection while traversing or iterating over it; doing so cannot delete all elements of the collection.
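A sketch of safe alternatives: iterate over an immutable snapshot, or just clear the set.

```scala
import scala.collection.mutable

val a = mutable.Set(1, 2, 3, 4, 7, 0, 8, 98, 9)
a.toList.foreach(a.remove)  // snapshot first, then mutate the original set
// or, when everything should go: a.clear()
```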
      
      Author: Zhongshuai Pei <peizhongshuai@huawei.com>
      
      Closes #9951 from DoingDone9/Bug_RemainDir.
• [SPARK-11984][SQL][PYTHON] Fix typos in doc for pivot for scala and python · faabdfa2
      felixcheung authored
      Author: felixcheung <felixcheung_m@hotmail.com>
      
      Closes #9967 from felixcheung/pypivotdoc.