  1. Jul 08, 2017
      [SPARK-21307][REVERT][SQL] Remove SQLConf parameters from the parser-related classes · c3712b77
      Xiao Li authored
      ## What changes were proposed in this pull request?
Since we do not set active sessions when parsing the plan, we are unable to correctly use SQLConf.get to find the correct active session. Because https://github.com/apache/spark/pull/18531 breaks the build, I plan to revert it first.
      
      ## How was this patch tested?
      The existing test cases
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #18568 from gatorsmile/revert18531.
      [SPARK-21343] Refine the document for spark.reducer.maxReqSizeShuffleToMem. · 062c336d
      jinxing authored
      ## What changes were proposed in this pull request?
      
In the current code, the reducer can break the old shuffle service when `spark.reducer.maxReqSizeShuffleToMem` is enabled. Let's refine the documentation.
      
      Author: jinxing <jinxing6042@126.com>
      
      Closes #18566 from jinxing64/SPARK-21343.
      [SPARK-20342][CORE] Update task accumulators before sending task end event. · 9131bdb7
      Marcelo Vanzin authored
This makes sure that listeners get updated task information; otherwise it's
possible to write incomplete task information into event logs, for example,
making the information in a replayed UI inconsistent with the original
application.
      
Added a new unit test to try to detect the problem, though it's not guaranteed
to fail since it's a race; it fails pretty reliably for me without the
scheduler changes.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #18393 from vanzin/SPARK-20342.try2.
      [SPARK-21083][SQL] Store zero size and row count when analyzing empty table · 9fccc362
      Zhenhua Wang authored
      ## What changes were proposed in this pull request?
      
We should be able to store zero size and row count after analyzing an empty table.

This PR also enhances the test cases for re-analyzing tables.
      
      ## How was this patch tested?
      
      Added a new test case and enhanced some test cases.
      
      Author: Zhenhua Wang <wangzhenhua@huawei.com>
      
      Closes #18292 from wzhfy/analyzeNewColumn.
      [SPARK-21345][SQL][TEST][TEST-MAVEN] SparkSessionBuilderSuite should clean up stopped sessions. · 0b8dd2d0
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
`SparkSessionBuilderSuite` should clean up stopped sessions. Otherwise, it leaves behind stopped `SparkContext`s interfering with other test suites that use `SharedSQLContext`.
      
Recently, the master branch has been failing consecutively.
      - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
      
      ## How was this patch tested?
      
Pass Jenkins with an updated suite.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #18567 from dongjoon-hyun/SPARK-SESSION.
      [SPARK-20609][MLLIB][TEST] manually cleared 'spark.local.dir' before/after a... · 330bf5c9
      caoxuewen authored
      [SPARK-20609][MLLIB][TEST] manually cleared 'spark.local.dir' before/after a test in ALSCleanerSuite
      
      ## What changes were proposed in this pull request?
      
      This PR is similar to #17869.
Once `spark.local.dir` is set, it could return the same directory even if this property is configured differently, unless it is manually cleared before/after a test.
This PR adds such before/after clearing to each test in ALSCleanerSuite likewise.
      
      ## How was this patch tested?
      existing test.
      
      Author: caoxuewen <cao.xuewen@zte.com.cn>
      
      Closes #18537 from heary-cao/ALSCleanerSuite.
      Mesos doc fixes · 01f183e8
      Joachim Hereth authored
      ## What changes were proposed in this pull request?
      
      Some link fixes for the documentation [Running Spark on Mesos](https://spark.apache.org/docs/latest/running-on-mesos.html):
      
      * Updated Link to Mesos Frameworks (Projects built on top of Mesos)
      * Update Link to Mesos binaries from Mesosphere (former link was redirected to dcos install page)
      
      ## How was this patch tested?
      
      Documentation was built and changed page manually/visually inspected.
      
      No code was changed, hence no dev tests.
      
      Since these changes are rather trivial I did not open a new JIRA ticket.
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Joachim Hereth <joachim.hereth@numberfour.eu>
      
      Closes #18564 from daten-kieker/mesos_doc_fixes.
      [SPARK-20456][DOCS] Add examples for functions collection for pyspark · f5f02d21
      Michael Patterson authored
      ## What changes were proposed in this pull request?
      
      This adds documentation to many functions in pyspark.sql.functions.py:
      `upper`, `lower`, `reverse`, `unix_timestamp`, `from_unixtime`, `rand`, `randn`, `collect_list`, `collect_set`, `lit`
Adds units to the trigonometry functions.
Renames columns in datetime examples to be more informative.
Adds links between some functions.
      
      ## How was this patch tested?
      
      `./dev/lint-python`
      `python python/pyspark/sql/functions.py`
      `./python/run-tests.py --module pyspark-sql`
      
      Author: Michael Patterson <map222@gmail.com>
      
      Closes #17865 from map222/spark-20456.
      [SPARK-20307][SPARKR] SparkR: pass on setHandleInvalid to spark.mllib... · a7b46c62
      wangmiao1981 authored
      [SPARK-20307][SPARKR] SparkR: pass on setHandleInvalid to spark.mllib functions that use StringIndexer
      
      ## What changes were proposed in this pull request?
      
For the randomForest classifier, if test data contains unseen labels, it will throw an error. The StringIndexer already has the handleInvalid logic. This patch adds a new method to set the underlying StringIndexer's handleInvalid logic.

This patch should also apply to other classifiers. This PR focuses on the main logic and the randomForest classifier. I will do follow-up PRs for the other classifiers.
      
      ## How was this patch tested?
      
      Add a new unit test based on the error case in the JIRA.
      
      Author: wangmiao1981 <wm624@hotmail.com>
      
      Closes #18496 from wangmiao1981/handle.
      [SPARK-21069][SS][DOCS] Add rate source to programming guide. · d0bfc673
      Prashant Sharma authored
      ## What changes were proposed in this pull request?
      
SPARK-20979 added a new structured streaming source: the Rate source. This patch adds the corresponding documentation to the programming guide.
      
      ## How was this patch tested?
      
      Tested by running jekyll locally.
      
      Author: Prashant Sharma <prashant@apache.org>
      Author: Prashant Sharma <prashsh1@in.ibm.com>
      
      Closes #18562 from ScrapCodes/spark-21069/rate-source-docs.
      [SPARK-20379][CORE] Allow SSL config to reference env variables. · 9760c15a
      Marcelo Vanzin authored
      This change exposes the internal code path in SparkConf that allows
      configs to be read with variable substitution applied, and uses that
      new method in SSLOptions so that SSL configs can reference other
      variables, and more importantly, environment variables, providing
      a secure way to provide passwords to Spark when using SSL.
      
      The approach is a little bit hacky, but is the smallest change possible.
      Otherwise, the concept of "namespaced configs" would have to be added
      to the config system, which would create a lot of noise for not much
      gain at this point.
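The idea can be sketched in plain Python (a simplified illustration, not Spark's implementation; the `${env:VAR}` reference syntax is assumed here):

```python
import os
import re

def resolve_env_refs(value):
    """Expand ${env:NAME} references in a config value; a simplified
    sketch of the variable substitution SparkConf can apply."""
    return re.sub(
        r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

# The password never has to appear in the config file itself:
os.environ["KEYSTORE_PASSWORD"] = "s3cret"
print(resolve_env_refs("${env:KEYSTORE_PASSWORD}"))  # prints: s3cret
```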
      
      Tested with added unit tests, and on a real cluster with SSL enabled.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #18394 from vanzin/SPARK-20379.try2.
      [SPARK-21281][SQL] Use string types by default if array and map have no argument · 7896e7b9
      Takeshi Yamamuro authored
      ## What changes were proposed in this pull request?
This PR modifies the code to use string types by default if `array` and `map` in functions have no arguments. This behaviour matches Hive's:
      ```
      hive> CREATE TEMPORARY TABLE t1 AS SELECT map();
      hive> DESCRIBE t1;
      _c0   map<string,string>
      
      hive> CREATE TEMPORARY TABLE t2 AS SELECT array();
      hive> DESCRIBE t2;
      _c0   array<string>
      ```
      
      ## How was this patch tested?
      Added tests in `DataFrameFunctionsSuite`.
      
      Author: Takeshi Yamamuro <yamamuro@apache.org>
      
      Closes #18516 from maropu/SPARK-21281.
      [SPARK-21100][SQL] Add summary method as alternative to describe that gives... · e1a172c2
      Andrew Ray authored
      [SPARK-21100][SQL] Add summary method as alternative to describe that gives quartiles similar to Pandas
      
      ## What changes were proposed in this pull request?
      
Adds a `summary` method that allows the user to specify which statistics and percentiles to calculate. By default it includes the existing statistics from `describe` plus quartiles (25th, 50th, and 75th percentiles), similar to Pandas. Also changes the implementation of `describe` to delegate to `summary`.
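For intuition, the quartiles reported by default are the 25th/50th/75th percentiles; in plain Python (an analogy only — Spark computes approximate percentiles over distributed data):

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8]
# The three quartiles correspond to summary()'s default "25%", "50%"
# and "75%" rows (computed exactly here, unlike Spark's approximation).
q1, median, q3 = statistics.quantiles(data, n=4, method="inclusive")
print(q1, median, q3)  # 2.75 4.5 6.25
```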
      
      ## How was this patch tested?
      
      additional unit test
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #18307 from aray/SPARK-21100.
  2. Jul 07, 2017
      [SPARK-21336] Revise rand comparison in BatchEvalPythonExecSuite · a0fe32a2
      Wang Gengliang authored
      ## What changes were proposed in this pull request?
      
      Revise rand comparison in BatchEvalPythonExecSuite
      
In BatchEvalPythonExecSuite, two test cases use the predicate "rand() > 3".
Since rand() generates a random value in [0, 1), comparing it with 3 is meaningless (always false); use 0.3 instead.
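The point is easy to see in plain Python, where `random.random()` has the same [0, 1) range as Spark's `rand()` (illustration only):

```python
import random

samples = [random.random() for _ in range(10_000)]
# Every sample lies in [0.0, 1.0), so "x > 3" is vacuously false while
# "x > 0.3" actually filters.
assert all(0.0 <= x < 1.0 for x in samples)
above_three = [x for x in samples if x > 3]     # always empty
above_point3 = [x for x in samples if x > 0.3]  # a meaningful subset
```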
      
      ## How was this patch tested?
      
      unit test
      
      
      Author: Wang Gengliang <ltnwgl@gmail.com>
      
      Closes #18560 from gengliangwang/revise_BatchEvalPythonExecSuite.
      [SPARK-19358][CORE] LiveListenerBus shall log the event name when dropping... · fbbe37ed
      CodingCat authored
      [SPARK-19358][CORE] LiveListenerBus shall log the event name when dropping them due to a fully filled queue
      
      ## What changes were proposed in this pull request?
      
A dropped event can make the whole application behave unexpectedly, e.g. cause UI problems; we should log the dropped event's name to facilitate debugging.
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: CodingCat <zhunansjtu@gmail.com>
      
      Closes #16697 from CodingCat/SPARK-19358.
      [SPARK-21335][SQL] support un-aliased subquery · fef08130
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
Un-aliased subqueries have been supported by Spark SQL for a long time. Their semantics were not well defined and had confusing behaviors, and they are not standard SQL syntax, so we disallowed them in https://issues.apache.org/jira/browse/SPARK-20690 .
      
      However, this is a breaking change, and we do have existing queries using un-aliased subquery. We should add the support back and fix its semantic.
      
      This PR fixes the un-aliased subquery by assigning a default alias name.
      
After this PR, there is no syntax change from branch 2.2 to master, but we invalidate a weird use case:
`SELECT v.i FROM (SELECT i FROM v)`. Now this query will throw an analysis exception because users should not be able to use the qualifier inside a subquery.
      
      ## How was this patch tested?
      
      new regression test
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #18559 from cloud-fan/sub-query.
      [SPARK-21285][ML] VectorAssembler reports the column name of unsupported data type · 56536e99
      Yan Facai (颜发才) authored
      ## What changes were proposed in this pull request?
Add the column name to the exception raised for an unsupported data type.
      
      ## How was this patch tested?
Pass all tests.
      
      Author: Yan Facai (颜发才) <facai.yan@gmail.com>
      
      Closes #18523 from facaiy/ENH/vectorassembler_add_col.
      [SPARK-21313][SS] ConsoleSink's string representation · 7fcbb9b5
      Jacek Laskowski authored
      ## What changes were proposed in this pull request?
      
      Add `toString` with options for `ConsoleSink` so it shows nicely in query progress.
      
      **BEFORE**
      
      ```
        "sink" : {
          "description" : "org.apache.spark.sql.execution.streaming.ConsoleSink4b340441"
        }
      ```
      
      **AFTER**
      
      ```
        "sink" : {
          "description" : "ConsoleSink[numRows=10, truncate=false]"
        }
      ```
      
      /cc zsxwing tdas
      
      ## How was this patch tested?
      
      Local build
      
      Author: Jacek Laskowski <jacek@japila.pl>
      
      Closes #18539 from jaceklaskowski/SPARK-21313-ConsoleSink-toString.
      [SPARK-20703][SQL][FOLLOW-UP] Associate metrics with data writes onto DataFrameWriter operations · 5df99bd3
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
Remove time metrics since there seems to be no way to measure them without per-row tracking.
      
      ## How was this patch tested?
      
      Existing tests.
      
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #18558 from viirya/SPARK-20703-followup.
      [SPARK-21217][SQL] Support ColumnVector.Array.to<type>Array() · c09b31eb
      Kazuaki Ishizaki authored
      ## What changes were proposed in this pull request?
      
      This PR implements bulk-copy for `ColumnVector.Array.to<type>Array()` methods (e.g. `toIntArray()`) in `ColumnVector.Array` by using `System.arrayCopy()` or `Platform.copyMemory()`.
      
Before this PR, when one of these methods is called, the generic method in `ArrayData` is called. It is slow since it performs an element-wise copy.
      
This PR improves the performance of a benchmark program by 1.9x (on-heap) and 3.2x (off-heap).
      
      Without this PR
      ```
      OpenJDK 64-Bit Server VM 1.8.0_131-8u131-b11-0ubuntu1.16.04.2-b11 on Linux 4.4.0-66-generic
      Intel(R) Xeon(R) CPU E5-2667 v3  3.20GHz
      
      Int Array                                Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)
      ------------------------------------------------------------------------------------------------
      ON_HEAP                                        586 /  628         14.3          69.9
      OFF_HEAP                                       893 /  902          9.4         106.5
      ```
      
      With this PR
      ```
      OpenJDK 64-Bit Server VM 1.8.0_131-8u131-b11-0ubuntu1.16.04.2-b11 on Linux 4.4.0-66-generic
      Intel(R) Xeon(R) CPU E5-2667 v3  3.20GHz
      
      Int Array                                Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)
      ------------------------------------------------------------------------------------------------
      ON_HEAP                                        306 /  331         27.4          36.4
      OFF_HEAP                                       282 /  287         29.8          33.6
      ```
      
      Source program
      ```
          (MemoryMode.ON_HEAP :: MemoryMode.OFF_HEAP :: Nil).foreach { memMode => {
            val len = 8 * 1024 * 1024
            val column = ColumnVector.allocate(len * 2, new ArrayType(IntegerType, false), memMode)
      
            val data = column.arrayData
            var i = 0
            while (i < len) {
              data.putInt(i, i)
              i += 1
            }
            column.putArray(0, 0, len)
      
            val benchmark = new Benchmark("Int Array", len, minNumIters = 20)
            benchmark.addCase(s"$memMode") { iter =>
              var i = 0
              while (i < 50) {
                column.getArray(0).toIntArray
                i += 1
              }
            }
            benchmark.run
          }}
      ```
      
      ## How was this patch tested?
      
      Added test suite
      
      Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
      
      Closes #18425 from kiszk/SPARK-21217.
      [SPARK-21327][SQL][PYSPARK] ArrayConstructor should handle an array of... · 53c2eb59
      Takuya UESHIN authored
      [SPARK-21327][SQL][PYSPARK] ArrayConstructor should handle an array of typecode 'l' as long rather than int in Python 2.
      
      ## What changes were proposed in this pull request?
      
Currently `ArrayConstructor` handles an array of typecode `'l'` as `int` when converting a Python object in Python 2 into a Java object, so if the value is larger than `Integer.MAX_VALUE` or smaller than `Integer.MIN_VALUE`, an overflow occurs.
      
      ```python
import array
from pyspark.sql import Row
      data = [Row(longarray=array.array('l', [-9223372036854775808, 0, 9223372036854775807]))]
      df = spark.createDataFrame(data)
      df.show(truncate=False)
      ```
      
      ```
      +----------+
      |longarray |
      +----------+
      |[0, 0, -1]|
      +----------+
      ```
      
      This should be:
      
      ```
      +----------------------------------------------+
      |longarray                                     |
      +----------------------------------------------+
      |[-9223372036854775808, 0, 9223372036854775807]|
      +----------------------------------------------+
      ```
      
      ## How was this patch tested?
      
      Added a test and existing tests.
      
      Author: Takuya UESHIN <ueshin@databricks.com>
      
      Closes #18553 from ueshin/issues/SPARK-21327.
  3. Jul 06, 2017
      [SPARK-21326][SPARK-21066][ML] Use TextFileFormat in LibSVMFileFormat and... · d451b7f4
      hyukjinkwon authored
      [SPARK-21326][SPARK-21066][ML] Use TextFileFormat in LibSVMFileFormat and allow multiple input paths for determining numFeatures
      
      ## What changes were proposed in this pull request?
      
      This is related with [SPARK-19918](https://issues.apache.org/jira/browse/SPARK-19918) and [SPARK-18362](https://issues.apache.org/jira/browse/SPARK-18362).
      
      This PR proposes to use `TextFileFormat` and allow multiple input paths (but with a warning) when determining the number of features in LibSVM data source via an extra scan.
      
      There are three points here:
      
- The main advantage of this change should be to remove file-listing bottlenecks on the driver side.
      
      - Another advantage is ones from using `FileScanRDD`. For example, I guess we can use `spark.sql.files.ignoreCorruptFiles` option when determining the number of features.
      
      - We can unify the schema inference code path in text based data sources. This is also a preparation for [SPARK-21289](https://issues.apache.org/jira/browse/SPARK-21289).
      
      ## How was this patch tested?
      
      Unit tests in `LibSVMRelationSuite`.
      
      Closes #18288
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18556 from HyukjinKwon/libsvm-schema.
      [SPARK-21329][SS] Make EventTimeWatermarkExec explicitly UnaryExecNode · e5bb2617
      Jacek Laskowski authored
      ## What changes were proposed in this pull request?
      
      Making EventTimeWatermarkExec explicitly UnaryExecNode
      
      /cc tdas zsxwing
      
      ## How was this patch tested?
      
      Local build.
      
      Author: Jacek Laskowski <jacek@japila.pl>
      
      Closes #18509 from jaceklaskowski/EventTimeWatermarkExec-UnaryExecNode.
      [SPARK-20946][SQL] Do not update conf for existing SparkContext in SparkSession.getOrCreate · 40c7add3
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
SparkContext is shared by all sessions, so we should not update its conf for only one session.
      
      ## How was this patch tested?
      
      existing tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #18536 from cloud-fan/config.
      [SPARK-21267][SS][DOCS] Update Structured Streaming Documentation · 0217dfd2
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
      Few changes to the Structured Streaming documentation
      - Clarify that the entire stream input table is not materialized
      - Add information for Ganglia
      - Add Kafka Sink to the main docs
      - Removed a couple of leftover experimental tags
      - Added more associated reading material and talk videos.
      
In addition, https://github.com/apache/spark/pull/16856 broke the link to the RDD programming guide in several places while renaming the page. This PR fixes those links (cc sameeragarwal, cloud-fan).
      - Added a redirection to avoid breaking internal and possible external links.
      - Removed unnecessary redirection pages that were there since the separate scala, java, and python programming guides were merged together in 2013 or 2014.
      
      ## How was this patch tested?
      
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #18485 from tdas/SPARK-21267.
      [SPARK-21323][SQL] Rename plans.logical.statsEstimation.Range to ValueInterval · bf66335a
      Wang Gengliang authored
      ## What changes were proposed in this pull request?
      
      Rename org.apache.spark.sql.catalyst.plans.logical.statsEstimation.Range to ValueInterval.
The current name is identical to the logical operator "range";
renaming it to ValueInterval is more accurate.
      
      ## How was this patch tested?
      
      unit test
      
      
      Author: Wang Gengliang <ltnwgl@gmail.com>
      
      Closes #18549 from gengliangwang/ValueInterval.
      [SPARK-21204][SQL] Add support for Scala Set collection types in serialization · 48e44b24
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      Currently we can't produce a `Dataset` containing `Set` in SparkSQL. This PR tries to support serialization/deserialization of `Set`.
      
Because there's no corresponding internal data type in SparkSQL for a `Set`, the most appropriate choice for serializing a set is an array.
      
      ## How was this patch tested?
      
      Added unit tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #18416 from viirya/SPARK-21204.
      [SPARK-21228][SQL] InSet incorrect handling of structs · 26ac085d
      Bogdan Raducanu authored
      ## What changes were proposed in this pull request?
      When data type is struct, InSet now uses TypeUtils.getInterpretedOrdering (similar to EqualTo) to build a TreeSet. In other cases it will use a HashSet as before (which should be faster). Similarly, In.eval uses Ordering.equiv instead of equals.
      
      ## How was this patch tested?
      New test in SQLQuerySuite.
      
      Author: Bogdan Raducanu <bogdan@databricks.com>
      
      Closes #18455 from bogdanrdc/SPARK-21228.
      [SPARK-20950][CORE] add a new config to diskWriteBufferSize which is hard coded before · 565e7a8d
      caoxuewen authored
      ## What changes were proposed in this pull request?
      
This PR makes two improvements:
1. Make the disk write buffer size of `ShuffleExternalSorter` configurable via `spark.shuffle.spill.diskWriteBufferSize`.
    When varying `diskWriteBufferSize` to test `forceSorterToSpill`,
    the average performance over 10 runs is as follows (in ms):
      ```
      diskWriteBufferSize:       1M    512K    256K    128K    64K    32K    16K    8K    4K
      ---------------------------------------------------------------------------------------
      RecordSize = 2.5M          742   722     694     686     667    668    671    669   683
      RecordSize = 1M            294   293     292     287     283    285    281    279   285
      ```
      
2. Remove `outputBufferSizeInBytes` and `inputBufferSizeInBytes` and initialize them in the `mergeSpillsWithFileStream` function.
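The new knob could then be tuned in `spark-defaults.conf`, for example (a sketch assuming the key named above accepts the usual size suffixes):

```
# spark-defaults.conf — the benchmark above suggests smaller buffers can help
spark.shuffle.spill.diskWriteBufferSize   128k
```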
      
      ## How was this patch tested?
      The unit test.
      
      Author: caoxuewen <cao.xuewen@zte.com.cn>
      
      Closes #18174 from heary-cao/buffersize.
      [SPARK-21273][SQL][FOLLOW-UP] Add missing test cases back and revise code style · d540dfbf
      Wang Gengliang authored
      ## What changes were proposed in this pull request?
      
      Add missing test cases back and revise code style
      
      Follow up the previous PR: https://github.com/apache/spark/pull/18479
      
      ## How was this patch tested?
      
      Unit test
      
      
      Author: Wang Gengliang <ltnwgl@gmail.com>
      
      Closes #18548 from gengliangwang/stat_propagation_revise.
      [SPARK-21324][TEST] Improve statistics test suites · b8e4d567
      wangzhenhua authored
      ## What changes were proposed in this pull request?
      
      1. move `StatisticsCollectionTestBase` to a separate file.
      2. move some test cases to `StatisticsCollectionSuite` so that `hive/StatisticsSuite` only keeps tests that need hive support.
3. clean up some test cases.
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: wangzhenhua <wangzhenhua@huawei.com>
      Author: Zhenhua Wang <wzh_zju@163.com>
      
      Closes #18545 from wzhfy/cleanStatSuites.
      [SPARK-20703][SQL] Associate metrics with data writes onto DataFrameWriter operations · 6ff05a66
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
Right now in the UI, after SPARK-20213, we can show the operations that write data out. However, there is no way to associate metrics with data writes. We should show the relevant metrics for these operations.
      
      #### Supported commands
      
      This change supports updating metrics for file-based data writing operations, including `InsertIntoHadoopFsRelationCommand`, `InsertIntoHiveTable`.
      
      Supported metrics:
      
      * number of written files
      * number of dynamic partitions
      * total bytes of written data
      * total number of output rows
      * average writing data out time (ms)
      * (TODO) min/med/max number of output rows per file/partition
      * (TODO) min/med/max bytes of written data per file/partition
      
      ####  Commands not supported
      
      `InsertIntoDataSourceCommand`, `SaveIntoDataSourceCommand`:
      
The two commands use DataSource APIs to write data out, i.e., the logic of writing data out is delegated to the DataSource implementations, such as `InsertableRelation.insert` and `CreatableRelationProvider.createRelation`. So we can't obtain metrics from the delegated methods for now.
      
      `CreateHiveTableAsSelectCommand`, `CreateDataSourceTableAsSelectCommand` :
      
The two commands invoke other commands to write data out. The invoked commands can even write to non-file-based data sources. We leave them as a future TODO.
      
      #### How to update metrics of writing files out
      
A `RunnableCommand` that wants to update metrics needs to override its `metrics` and provide the metrics data structure to `ExecutedCommandExec`.
      
      The metrics are prepared during the execution of `FileFormatWriter`. The callback function passed to `FileFormatWriter` will accept the metrics and update accordingly.
      
There is a metrics-updating function in `RunnableCommand`. At runtime, the function is bound to the Spark context and the `metrics` of `ExecutedCommandExec`, and passed to `FileFormatWriter`.
      
      ## How was this patch tested?
      
      Updated unit tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #18159 from viirya/SPARK-20703-2.
      [SPARK-21012][SUBMIT] Add glob support for resources adding to Spark · 5800144a
      jerryshao authored
Currently "--jars (spark.jars)", "--files (spark.files)", "--py-files (spark.submit.pyFiles)" and "--archives (spark.yarn.dist.archives)" only support non-glob paths. This is OK for most cases, but when a user needs to add many jars or files, listing them one by one is too verbose. This PR proposes adding glob path support for these resources.
      
It also improves the code for downloading resources.
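The expansion being added can be pictured with plain Python globbing (an illustration of the behavior, not Spark's code; Spark resolves the patterns before distributing the resources):

```python
import glob
import os
import tempfile

# Build a directory with a few resources, then expand "*.jar" the way
# a glob-capable --jars would.
d = tempfile.mkdtemp()
for name in ("a.jar", "b.jar", "notes.txt"):
    open(os.path.join(d, name), "w").close()

jars = sorted(os.path.basename(p) for p in glob.glob(os.path.join(d, "*.jar")))
print(jars)  # ['a.jar', 'b.jar']
```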
      
      ## How was this patch tested?
      
      UT added, also verified manually in local cluster.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #18235 from jerryshao/SPARK-21012.
      [SS][MINOR] Fix flaky test in DatastreamReaderWriterSuite. temp checkpoint dir should be deleted · 60043f22
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
Stopping a query while it is being initialized can throw an InterruptedException, in which case the temporary checkpoint directories will not be deleted, and the test will fail.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #18442 from tdas/DatastreamReaderWriterSuite-fix.
      [SPARK-21312][SQL] correct offsetInBytes in UnsafeRow.writeToStream · 14a3bb3a
      Sumedh Wale authored
      ## What changes were proposed in this pull request?
      
Corrects the offsetInBytes calculation in UnsafeRow.writeToStream. Known failures include writes to some DataSources that have their own SparkPlan implementations and cause an EXCHANGE during writes.
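The nature of the bug can be sketched in Python (hypothetical values; the real code copies from an UnsafeRow's backing byte array):

```python
import io

# An UnsafeRow may point into the middle of a shared buffer, so
# writeToStream must start copying at the row's own offset, not at 0.
backing = bytearray(b"JUNKJUNK" + b"rowdata!")
row_offset, row_size = 8, 8  # hypothetical row at a non-zero offset

out = io.BytesIO()
out.write(memoryview(backing)[row_offset:row_offset + row_size])  # correct
assert out.getvalue() == b"rowdata!"

buggy = bytes(backing[:row_size])  # copying from offset 0 grabs junk
assert buggy == b"JUNKJUNK"
```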
      
      ## How was this patch tested?
      
      Extended UnsafeRowSuite.writeToStream to include an UnsafeRow over byte array having non-zero offset.
      
      Author: Sumedh Wale <swale@snappydata.io>
      
      Closes #18535 from sumwale/SPARK-21312.
      [SPARK-21308][SQL] Remove SQLConf parameters from the optimizer · 75b168fd
      gatorsmile authored
      ### What changes were proposed in this pull request?
      This PR removes SQLConf parameters from the optimizer rules
      
      ### How was this patch tested?
      The existing test cases
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #18533 from gatorsmile/rmSQLConfOptimizer.
  4. Jul 05, 2017
      [SPARK-21248][SS] The clean up codes in StreamExecution should not be interrupted · ab866f11
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
This PR uses `runUninterruptibly` to ensure that the clean-up code in StreamExecution is not interrupted. It also removes an optimization in `runUninterruptibly` to make sure this method never throws `InterruptedException`.
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #18461 from zsxwing/SPARK-21248.
      [SPARK-21278][PYSPARK] Upgrade to Py4J 0.10.6 · c8d0aba1
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR aims to bump Py4J in order to fix the following float/double bug.
      Py4J 0.10.5 fixes this (https://github.com/bartdag/py4j/issues/272) and the latest Py4J is 0.10.6.
      
      **BEFORE**
      ```
      >>> df = spark.range(1)
      >>> df.select(df['id'] + 17.133574204226083).show()
      +--------------------+
      |(id + 17.1335742042)|
      +--------------------+
      |       17.1335742042|
      +--------------------+
      ```
      
      **AFTER**
      ```
      >>> df = spark.range(1)
      >>> df.select(df['id'] + 17.133574204226083).show()
      +-------------------------+
      |(id + 17.133574204226083)|
      +-------------------------+
      |       17.133574204226083|
      +-------------------------+
      ```
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #18546 from dongjoon-hyun/SPARK-21278.
      [SPARK-21307][SQL] Remove SQLConf parameters from the parser-related classes. · c8e7f445
      gatorsmile authored
      ### What changes were proposed in this pull request?
      This PR is to remove SQLConf parameters from the parser-related classes.
      
      ### How was this patch tested?
      The existing test cases.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #18531 from gatorsmile/rmSQLConfParser.
      [SPARK-19439][PYSPARK][SQL] PySpark's registerJavaFunction Should Support UDAFs · 742da086
      Jeff Zhang authored
      ## What changes were proposed in this pull request?
      
Support registering Java UDAFs in PySpark so that users can use Java UDAFs from PySpark. Besides that, this also adds an API to `UDFRegistration`.
      
      ## How was this patch tested?
      
      Unit test is added
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #17222 from zjffdu/SPARK-19439.