  1. Sep 21, 2016
    • [SPARK-17583][SQL] Remove useless rowSeparator variable and set auto-expanding buffer as default for maxCharsPerColumn option in CSV · 25a020be
      hyukjinkwon authored
      
      ## What changes were proposed in this pull request?
      
      This PR includes the changes below:
      
      1. Upgrade Univocity library from 2.1.1 to 2.2.1
      
        This includes some performance improvements and also enables the auto-expanding buffer for the `maxCharsPerColumn` option in CSV. Please refer to the [release notes](https://github.com/uniVocity/univocity-parsers/releases).
      
      2. Remove useless `rowSeparator` variable existing in `CSVOptions`
      
        We have this unused variable in [CSVOptions.scala#L127](https://github.com/apache/spark/blob/29952ed096fd2a0a19079933ff691671d6f00835/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala#L127), but it seems to cause confusion because it does not actually handle `\r\n`. For example, we have an open issue about this, [SPARK-17227](https://issues.apache.org/jira/browse/SPARK-17227), describing this variable.
      
        This variable is effectively unused because we rely on `LineRecordReader` in Hadoop, which handles both `\n` and `\r\n`.
      
      3. Set the default value of `maxCharsPerColumn` to auto-expanding.

        We currently set 1000000 as the maximum length of each column. It would be more sensible to allow auto-expanding rather than a fixed length by default (see the usage sketch below).

        To be sure, using `-1` for this is described in the release notes for [2.2.0](https://github.com/uniVocity/univocity-parsers/releases/tag/v2.2.0).
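      A rough usage sketch (the file path and the `header` option are illustrative; only `maxCharsPerColumn` is the subject of this change). With the new default the buffer grows as needed; `-1` can still be passed explicitly to request the auto-expanding behaviour:

      ```scala
      // Assumes an active SparkSession `spark`; path and header option are illustrative.
      val df = spark.read
        .option("header", "true")
        .option("maxCharsPerColumn", "-1")   // -1 lets the univocity parser grow its buffer as needed
        .csv("/path/to/wide_columns.csv")
      ```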
      
      ## How was this patch tested?
      
      N/A
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15138 from HyukjinKwon/SPARK-17583.
    • [SPARK-17219][ML] Add NaN value handling in Bucketizer · 57dc326b
      VinceShieh authored
      ## What changes were proposed in this pull request?
      This PR fixes an issue when Bucketizer is called to handle a dataset containing NaN values.
      NaN values can still be meaningful to users, so in these cases Bucketizer should
      reserve one extra bucket for NaN values instead of throwing an exception.
      Before:
      ```
      Bucketizer.transform on NaN value threw an illegal exception.
      ```
      After:
      ```
      NaN values will be grouped in an extra bucket.
      ```
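      A minimal usage sketch (the splits and column names are assumptions, not taken from this PR) showing where NaN rows end up after the change:

      ```scala
      import org.apache.spark.ml.feature.Bucketizer

      // Assumes an active SparkSession `spark`.
      import spark.implicits._

      val splits = Array(Double.NegativeInfinity, 0.0, 10.0, Double.PositiveInfinity)
      val data = Seq(-1.0, 5.0, 20.0, Double.NaN).toDF("features")

      val bucketed = new Bucketizer()
        .setInputCol("features")
        .setOutputCol("bucket")
        .setSplits(splits)
        .transform(data)

      // The NaN row falls into one extra bucket beyond the regular ones
      // instead of raising an exception.
      bucketed.show()
      ```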
      ## How was this patch tested?
      New test cases added in `BucketizerSuite`.
      Signed-off-by: VinceShieh <vincent.xie@intel.com>
      
      Author: VinceShieh <vincent.xie@intel.com>
      
      Closes #14858 from VinceShieh/spark-17219.
    • [SPARK-17599] Prevent ListingFileCatalog from failing if path doesn't exist · 28fafa3e
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      The `ListingFileCatalog` lists files given a set of resolved paths. If a folder is deleted at any time between when the paths are resolved and when the file catalog checks for it, the Spark job fails. This can abruptly stop long-running Structured Streaming jobs, for example.
      
      Folders may be deleted by users or automatically by retention policies. These cases should not prevent jobs from successfully completing.
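      A conceptual sketch of the tolerant-listing behaviour (names are illustrative; this is not the actual `ListingFileCatalog` code):

      ```scala
      import java.io.FileNotFoundException
      import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}

      // If the directory disappeared between path resolution and listing,
      // return an empty listing instead of failing the whole job.
      def listStatusSafely(fs: FileSystem, path: Path): Seq[FileStatus] =
        try {
          fs.listStatus(path).toSeq
        } catch {
          case _: FileNotFoundException => Seq.empty
        }
      ```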
      
      ## How was this patch tested?
      
      Unit test in `FileCatalogSuite`
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #15153 from brkyvz/SPARK-17599.
    • [SPARK-17617][SQL] Remainder(%) expression.eval returns incorrect result on double value · 3977223a
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      Remainder(%) expression's `eval()` returns an incorrect result when the dividend is a big double. The reason is that Remainder converts the double dividend to decimal to perform "%", and that loses precision.
      
      This bug only affects the `eval()` that is used by constant folding, the codegen path is not impacted.
      
      ### Before change
      ```
      scala> -5083676433652386516D % 10
      res2: Double = -6.0
      
      scala> spark.sql("select -5083676433652386516D % 10 as a").show
      +---+
      |  a|
      +---+
      |0.0|
      +---+
      ```
      
      ### After change
      ```
      scala> spark.sql("select -5083676433652386516D % 10 as a").show
      +----+
      |   a|
      +----+
      |-6.0|
      +----+
      ```
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #15171 from clockfly/SPARK-17617.
    • [CORE][DOC] Fix errors in comments · 61876a42
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      While reading the source code of CORE and SQL core, I found some minor errors in comments, such as extra spaces, missing blank lines, and grammar errors.
      
      I fixed these minor errors and might find more during my source code study.
      
      ## How was this patch tested?
      Manually build
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #15151 from wangmiao1981/mem.
    • [SPARK-15698][SQL][STREAMING][FOLLOW-UP] Fix FileStream source and sink log get configuration issue · e48ebc4e
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      This issue was introduced in the previous commit for SPARK-15698, which mistakenly changed the way the configuration is obtained. This follow-up PR reverts it back to the original approach.
      
      ## How was this patch tested?
      
      N/A
      
      Ping zsxwing, please review again; sorry for the inconvenience. Thanks a lot.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #15173 from jerryshao/SPARK-15698-follow.
  2. Sep 20, 2016
    • [SPARK-17513][SQL] Make StreamExecution garbage-collect its metadata · 976f3b12
      petermaxlee authored
      ## What changes were proposed in this pull request?
      This PR modifies StreamExecution such that it discards metadata for batches that have already been fully processed. I used the purge method that was added as part of SPARK-17235.
      
      This is a resubmission of #15126, which was based on work by frreiss in #15067, but fixes the test case along with some typos.
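      A conceptual sketch of the idea (the trait and method names below are illustrative; `purge` is the call added by SPARK-17235):

      ```scala
      // Once a batch has been fully processed, metadata for earlier batches can be dropped.
      trait BatchMetadataLog[T] {
        def purge(thresholdBatchId: Long): Unit   // removes entries with batchId below the threshold
      }

      def onBatchCompleted(log: BatchMetadataLog[_], completedBatchId: Long): Unit =
        log.purge(completedBatchId)
      ```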
      
      ## How was this patch tested?
      A new test case in StreamingQuerySuite. The test case would fail without the changes in this pull request.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      
      Closes #15166 from petermaxlee/SPARK-17513-2.
    • [SPARK-17549][SQL] Revert "[] Only collect table size stat in driver for cached relation." · 9ac68dbc
      Yin Huai authored
      This reverts commit 39e2bad6 because of the problem mentioned at https://issues.apache.org/jira/browse/SPARK-17549?focusedCommentId=15505060&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15505060
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #15157 from yhuai/revert-SPARK-17549.
    • [SPARK-15698][SQL][STREAMING] Add the ability to remove the old MetadataLog in FileStreamSource · a6aade00
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      The current `metadataLog` in `FileStreamSource` adds a checkpoint file for each batch but has no ability to remove or compact them, which leads to a large number of small files when running for a long time. This PR proposes compacting the old logs into one file, using an approach quite similar to `FileStreamSinkLog` but simpler.
      
      ## How was this patch tested?
      
      Unit test added.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #13513 from jerryshao/SPARK-15698.
    • [SPARK-17051][SQL] we should use hadoopConf in InsertIntoHiveTable · eb004c66
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Hive confs in hive-site.xml are loaded into `hadoopConf`, so we should use `hadoopConf` in `InsertIntoHiveTable` instead of `SessionState.conf`.
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #14634 from cloud-fan/bug.
    • [SPARK-17502][SQL] Fix Multiple Bugs in DDL Statements on Temporary Views · d5ec5dbb
      gatorsmile authored
      ### What changes were proposed in this pull request?
      - When the permanent tables/views do not exist but the temporary view exists, the expected error should be `NoSuchTableException` for partition-related ALTER TABLE commands. However, it always reports a confusing error message. For example,
      ```
      Partition spec is invalid. The spec (a, b) must match the partition spec () defined in table '`testview`';
      ```
      - When the permanent tables/views do not exist but the temporary view exists, the expected error should be `NoSuchTableException` for `ALTER TABLE ... UNSET TBLPROPERTIES`. However, it reports a missing table property. For example,
      ```
      Attempted to unset non-existent property 'p' in table '`testView`';
      ```
      - When `ANALYZE TABLE` is called on a view or a temporary view, we should issue an error message. However, it reports a strange error:
      ```
      ANALYZE TABLE is not supported for Project
      ```
      
      - When inserting into a temporary view that is generated from `Range`, we will get the following error message:
      ```
      assertion failed: No plan for 'InsertIntoTable Range (0, 10, step=1, splits=Some(1)), false, false
      +- Project [1 AS 1#20]
         +- OneRowRelation$
      ```
      
      This PR is to fix the above four issues.
      
      ### How was this patch tested?
      Added multiple test cases
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15054 from gatorsmile/tempViewDDL.
    • Wenchen Fan · f039d964
    • [SPARK-17513][SQL] Make StreamExecution garbage-collect its metadata · be9d57fc
      petermaxlee authored
      ## What changes were proposed in this pull request?
      This PR modifies StreamExecution such that it discards metadata for batches that have already been fully processed. I used the purge method that was added as part of SPARK-17235.
      
      This is based on work by frreiss in #15067, but fixes the test case along with some typos.
      
      ## How was this patch tested?
      A new test case in StreamingQuerySuite. The test case would fail without the changes in this pull request.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      Author: frreiss <frreiss@us.ibm.com>
      
      Closes #15126 from petermaxlee/SPARK-17513.
  3. Sep 19, 2016
    • [SPARK-17160] Properly escape field names in code-generated error messages · e719b1c0
      Josh Rosen authored
      This patch addresses a corner-case escaping bug where field names which contain special characters were unsafely interpolated into error message string literals in generated Java code, leading to compilation errors.
      
      This patch addresses these issues by using `addReferenceObj` to store the error messages as string fields rather than inline string constants.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15156 from JoshRosen/SPARK-17160.
    • [SPARK-17100] [SQL] fix Python udf in filter on top of outer join · d8104158
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      In the optimizer, we try to evaluate the condition to see whether it is nullable or not, but some expressions are not evaluable, so we should check for that before evaluating them.
      
      ## How was this patch tested?
      
      Added regression tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15103 from davies/udf_join.
    • [SPARK-16439] [SQL] bring back the separator in SQL UI · e0632062
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Currently, SQL metrics look like `number of rows: 111111111111`, which makes it very hard to read how large the number is. A separator was added by #12425 but removed by #14142 because it rendered strangely in some locales (for example, pl_PL). This PR adds it back, but always uses "," as the separator, since the SQL UI is entirely in English.
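      A small sketch of the locale-independent formatting described above (the actual helper in the SQL UI may differ):

      ```scala
      import java.text.NumberFormat
      import java.util.Locale

      // Always group with "," regardless of the JVM default locale (e.g. pl_PL).
      val formatter = NumberFormat.getIntegerInstance(Locale.US)
      formatter.format(111111111111L)   // "111,111,111,111"
      ```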
      
      ## How was this patch tested?
      
      Existing tests.
      ![metrics](https://cloud.githubusercontent.com/assets/40902/14573908/21ad2f00-030d-11e6-9e2c-c544f30039ea.png)
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15106 from davies/metric_sep.
    • [SPARK-17297][DOCS] Clarify window/slide duration as absolute time, not relative to a calendar · d720a401
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Clarify that slide and window duration are absolute, and not relative to a calendar.
      
      ## How was this patch tested?
      
      Doc build (no functional change)
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15142 from srowen/SPARK-17297.
  4. Sep 18, 2016
    • [SPARK-17571][SQL] AssertOnQuery.condition should always return Boolean value · 8f0c35a4
      petermaxlee authored
      ## What changes were proposed in this pull request?
      AssertOnQuery has two apply constructors: one that accepts a closure that returns Boolean, and another that accepts a closure that returns Unit. This is confusing because developers could mistakenly think that AssertOnQuery always requires a Boolean return type and verifies the returned result, when in fact the value of the last statement is ignored in one of the constructors.

      This pull request makes the two constructors consistent and always requires a Boolean value. Overall it makes the test suites more robust against developer errors.
      
      As an evidence for the confusing behavior, this change also identified a bug with an existing test case due to file system time granularity. This pull request fixes that test case as well.
      
      ## How was this patch tested?
      This is a test only change.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      
      Closes #15127 from petermaxlee/SPARK-17571.
    • [SPARK-16462][SPARK-16460][SPARK-15144][SQL] Make CSV cast null values properly · 1dbb725d
      Liwei Lin authored
      ## Problem
      
      CSV in Spark 2.0.0:
      -  does not read null values back correctly for certain data types such as `Boolean`, `TimestampType`, `DateType` -- this is a regression compared to 1.6;
      - does not read empty values (specified by `options.nullValue`) as `null`s for `StringType` -- this is compatible with 1.6 but leads to problems like SPARK-16903.
      
      ## What changes were proposed in this pull request?
      
      This patch makes changes to read all empty values back as `null`s.
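      A hedged usage sketch (the schema, file path, and the `nullValue` choice are assumptions): after this patch, fields matching `options.nullValue` read back as `null` for all of these types, including `StringType`:

      ```scala
      import org.apache.spark.sql.types._

      // Assumes an active SparkSession `spark`; schema and path are illustrative.
      val schema = StructType(Seq(
        StructField("id", IntegerType),
        StructField("flag", BooleanType),
        StructField("ts", TimestampType),
        StructField("name", StringType)))

      val df = spark.read
        .option("header", "true")
        .option("nullValue", "NA")   // "NA" fields come back as null, even for StringType
        .schema(schema)
        .csv("/path/to/data.csv")
      ```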
      
      ## How was this patch tested?
      
      New test cases.
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #14118 from lw-lin/csv-cast-null.
    • [SPARK-17506][SQL] Improve the check double values equality rule. · 5d3f4615
      jiangxingbo authored
      ## What changes were proposed in this pull request?
      
      In `ExpressionEvalHelper`, we check the equality between two double values by comparing whether the expected value is within the range [target - tolerance, target + tolerance], but this can produce a false negative when the compared numbers are very large.
      Before:
      ```
      val1 = 1.6358558070241E306
      val2 = 1.6358558070240974E306
      ExpressionEvalHelper.compareResults(val1, val2)
      false
      ```
      In fact, `val1` and `val2` represent the same value but with different precisions; we should tolerate this case by comparing against a relative (percentage) range, e.g., the expected value should be within [target - target * tolerance_percentage, target + target * tolerance_percentage].
      After:
      ```
      val1 = 1.6358558070241E306
      val2 = 1.6358558070240974E306
      ExpressionEvalHelper.compareResults(val1, val2)
      true
      ```
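      A minimal sketch of the relative-tolerance comparison described above (the function name and default tolerance are illustrative, not the actual `ExpressionEvalHelper` code):

      ```scala
      // Compare doubles within a percentage of the expected value rather than an absolute range.
      def relativeErrorEquals(expected: Double, actual: Double, tolerance: Double = 1e-8): Boolean =
        math.abs(expected - actual) <= math.abs(expected) * tolerance

      relativeErrorEquals(1.6358558070241E306, 1.6358558070240974E306)   // true
      ```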
      
      ## How was this patch tested?
      
      Existing test cases.
      
      Author: jiangxingbo <jiangxb1987@gmail.com>
      
      Closes #15059 from jiangxb1987/deq.
    • [SPARK-17541][SQL] fix some DDL bugs about table management when same-name temp view exists · 3fe630d3
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      In `SessionCatalog`, we have several operations (`tableExists`, `dropTable`, `lookupRelation`, etc.) that handle both temp views and metastore tables/views. This brings some bugs to DDL commands that want to handle temp views only or metastore tables/views only. These bugs are:
      
      1. `CREATE TABLE USING` will fail if a same-name temp view exists.
      2. `Catalog.dropTempView` will un-cache and drop a metastore table if a same-name table exists.
      3. `saveAsTable` will fail or have unexpected behaviour if a same-name temp view exists.
      
      These bug fixes are pulled out from https://github.com/apache/spark/pull/14962 and target both the master and 2.0 branches.
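      An illustrative repro sketch for bug 2 (the table and view names are assumed):

      ```scala
      // Assumes an active SparkSession `spark`.
      spark.range(10).write.saveAsTable("t")        // metastore table `t`
      spark.range(5).createOrReplaceTempView("t")   // same-name temporary view
      spark.catalog.dropTempView("t")               // must drop only the temp view,
                                                    // not un-cache or drop the metastore table
      ```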
      
      ## How was this patch tested?
      
      new regression tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15099 from cloud-fan/fix-view.
    • [SPARK-17518][SQL] Block Users to Specify the Internal Data Source Provider Hive · 3a3c9ffb
      gatorsmile authored
      ### What changes were proposed in this pull request?
      In Spark 2.1, we introduced a new internal provider, `hive`, for distinguishing Hive serde tables from data source tables. This PR blocks users from specifying this provider in the `DataFrameWriter` and SQL APIs.
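      A usage sketch of what is now rejected (the exact error raised is not specified here):

      ```scala
      // Assumes an active SparkSession `spark` and an existing DataFrame `df`;
      // both of these are now blocked because `hive` is an internal provider.
      df.write.format("hive").saveAsTable("t1")
      spark.sql("CREATE TABLE t2 USING hive AS SELECT 1 AS col")
      ```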
      
      ### How was this patch tested?
      Added a test case
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15073 from gatorsmile/formatHive.
  5. Sep 17, 2016
  6. Sep 16, 2016
    • [SPARK-17549][SQL] Only collect table size stat in driver for cached relation. · 39e2bad6
      Marcelo Vanzin authored
      The existing code caches all stats for all columns for each partition
      in the driver; for a large relation, this causes extreme memory usage,
      which leads to gc hell and application failures.
      
      It seems that only the size in bytes of the data is actually used in the
      driver, so instead just collect that. In executors, the full stats are
      still kept, but that's not a big problem; we expect the data to be distributed
      and thus not to incur too much memory pressure on each individual
      executor.
      
      There are also potential improvements on the executor side, since the data
      being stored currently is very wasteful (e.g. storing boxed types vs.
      primitive types for stats). But that's a separate issue.
      
      On a mildly related change, I'm also adding code to catch exceptions in the
      code generator since Janino was breaking with the test data I tried this
      patch on.
      
      Tested with unit tests and by doing a count on a very wide table (20k columns)
      with many partitions.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #15112 from vanzin/SPARK-17549.
    • [SPARK-17561][DOCS] DataFrameWriter documentation formatting problems · b9323fc9
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Fix `<ul> / <li>` problems in SQL scaladoc.
      
      ## How was this patch tested?
      
      Scaladoc build and manual verification of generated HTML.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15117 from srowen/SPARK-17561.
    • [SPARK-17426][SQL] Refactor `TreeNode.toJSON` to avoid OOM when converting unknown fields to JSON · a425a37a
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      This PR is a follow-up of SPARK-17356. The current implementation of `TreeNode.toJSON` recursively converts all fields of a TreeNode to JSON, even if the field is of type `Seq` or `Map`. This may trigger an out-of-memory exception in cases like:

      1. The `Seq` or `Map` can be very big. Converting it to JSON may take a huge amount of memory and trigger an out-of-memory error.
      2. Some user-space input may also be propagated to the plan. The user-space input can be of arbitrary type and may be self-referencing. Trying to print it to JSON may trigger an out-of-memory error or a stack overflow error.
      
      For a code example, please check the Jira description of SPARK-17426.
      
      In this PR, we refactor the `TreeNode.toJSON` so that we only convert a field to JSON string if the field is a safe type.
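      A conceptual sketch of the "safe type" idea (the set of safe types below is illustrative; this is not the actual `TreeNode` code):

      ```scala
      // Only render values whose JSON size is bounded and well understood;
      // anything else (Seq, Map, arbitrary user objects) gets a placeholder.
      def safeJsonValue(value: Any): String = value match {
        case null       => "null"
        case b: Boolean => b.toString
        case n: Number  => n.toString
        case s: String  => "\"" + s + "\""
        case _          => "\"<not rendered>\""
      }
      ```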
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #14990 from clockfly/json_oom2.
  7. Sep 15, 2016
    • [SPARK-17458][SQL] Alias specified for aggregates in a pivot are not honored · b72486f8
      Andrew Ray authored
      ## What changes were proposed in this pull request?
      
      This change preserves aliases that are given for pivot aggregations.
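      A usage sketch (column names are assumed): with this change, an alias given on the pivot aggregation is reflected in the output column names, e.g. something like `Java_total` instead of an auto-generated name:

      ```scala
      import org.apache.spark.sql.functions.{col, sum}

      // Assumes a DataFrame `df` with columns year, course, earnings.
      df.groupBy("year")
        .pivot("course")
        .agg(sum(col("earnings")).alias("total"))
      ```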
      
      ## How was this patch tested?
      
      New unit test
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #15111 from aray/SPARK-17458.
    • [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a decimal number token when parsing SQL string · a6b81820
      Sean Zhong authored
      
      ## What changes were proposed in this pull request?
      
      The Antlr lexer we use to tokenize a SQL string may wrongly tokenize a fully qualified identifier as a decimal number token. For example, the table identifier `default.123_table` is wrongly tokenized as
      ```
      default // Matches lexer rule IDENTIFIER
      .123 // Matches lexer rule DECIMAL_VALUE
      _TABLE // Matches lexer rule IDENTIFIER
      ```
      
      The correct tokenization for `default.123_table` should be:
      ```
      default // Matches lexer rule IDENTIFIER,
      . // Matches a single dot
      123_TABLE // Matches lexer rule IDENTIFIER
      ```
      
      This PR fixes the Antlr grammar so that it tokenizes fully qualified identifiers correctly:
      1. Fully qualified table names can be parsed correctly, for example `select * from database.123_suffix`.
      2. Fully qualified column names can be parsed correctly, for example `select a.123_suffix from a`.
      
      ### Before change
      
      #### Case 1: Failed to parse fully qualified column name
      
      ```
      scala> spark.sql("select a.123_column from a").show
      org.apache.spark.sql.catalyst.parser.ParseException:
      extraneous input '.123' expecting {<EOF>,
      ...
      , IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 8)
      == SQL ==
      select a.123_column from a
      --------^^^
      ```
      
      #### Case 2: Failed to parse fully qualified table name
      ```
      scala> spark.sql("select * from default.123_table")
      org.apache.spark.sql.catalyst.parser.ParseException:
      extraneous input '.123' expecting {<EOF>,
      ...
      IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 21)
      
      == SQL ==
      select * from default.123_table
      ---------------------^^^
      ```
      
      ### After Change
      
      #### Case 1: fully qualified column name, no ParseException thrown
      ```
      scala> spark.sql("select a.123_column from a").show
      ```
      
      #### Case 2: fully qualified table name, no ParseException thrown
      ```
      scala> spark.sql("select * from default.123_table")
      ```
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #15006 from clockfly/SPARK-17364.
    • [SPARK-17429][SQL] use ImplicitCastInputTypes with function Length · fe767395
      岑玉海 authored
      ## What changes were proposed in this pull request?
      `select length(11);` and `select length(2.0);` currently return errors in Spark SQL, while Hive handles them fine.
      This PR supports casting input types implicitly for the `length` function, so that:
      `select length(11)` returns 2
      `select length(2.0)` returns 3
      
      Author: 岑玉海 <261810726@qq.com>
      Author: cenyuhai <cenyuhai@didichuxing.com>
      
      Closes #15014 from cenyuhai/SPARK-17429.
    • [SPARK-17114][SQL] Fix aggregates grouped by literals with empty input · d403562e
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      This PR fixes an issue with aggregates that have an empty input and use literals as their grouping keys. Such aggregates are currently interpreted as aggregates **without** grouping keys, which triggers the ungrouped code path (which always returns a single row).

      This PR fixes the `RemoveLiteralFromGroupExpressions` optimizer rule, which changed the semantics of the Aggregate by eliminating all literal grouping keys.
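      An illustrative repro sketch of the issue:

      ```scala
      import org.apache.spark.sql.functions.lit

      // Assumes an active SparkSession `spark`. Grouping an empty input by a literal
      // must yield zero rows; before this fix the literal key was optimized away,
      // the query fell into the ungrouped path, and a single row was returned.
      spark.range(0).groupBy(lit(1)).count().show()   // expected: no rows
      ```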
      
      ## How was this patch tested?
      Added tests to `SQLQueryTestSuite`.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15101 from hvanhovell/SPARK-17114-3.
    • [SPARK-17536][SQL] Minor performance improvement to JDBC batch inserts · 71a65825
      John Muller authored
      ## What changes were proposed in this pull request?
      
      Optimize a while loop during batch inserts
      
      ## How was this patch tested?
      
      Unit tests were run, specifically `mvn test` for the sql module.
      
      Author: John Muller <jmuller@us.imshealth.com>
      
      Closes #15098 from blue666man/SPARK-17536.
    • [SPARK-17524][TESTS] Use specified spark.buffer.pageSize · f893e262
      Adam Roberts authored
      ## What changes were proposed in this pull request?
      
      This PR has the appendRowUntilExceedingPageSize test in RowBasedKeyValueBatchSuite use whatever spark.buffer.pageSize value a user has specified, to prevent a test failure for anyone testing Apache Spark on a box with a reduced page size. The test is currently hardcoded to use the default page size of 64 MB, so this minor PR is a test improvement.
      
      ## How was this patch tested?
      Existing unit tests with 1 MB page size and with 64 MB (the default) page size
      
      Author: Adam Roberts <aroberts@uk.ibm.com>
      
      Closes #15079 from a-roberts/patch-5.
    • [SPARK-17440][SPARK-17441] Fixed Multiple Bugs in ALTER TABLE · 6a6adb16
      gatorsmile authored
      ### What changes were proposed in this pull request?
      For the following `ALTER TABLE` DDL, we should issue an exception when the target table is a `VIEW`:
      ```SQL
       ALTER TABLE viewName SET LOCATION '/path/to/your/lovely/heart'
      
       ALTER TABLE viewName SET SERDE 'whatever'
      
       ALTER TABLE viewName SET SERDEPROPERTIES ('x' = 'y')
      
       ALTER TABLE viewName PARTITION (a=1, b=2) SET SERDEPROPERTIES ('x' = 'y')
      
       ALTER TABLE viewName ADD IF NOT EXISTS PARTITION (a='4', b='8')
      
       ALTER TABLE viewName DROP IF EXISTS PARTITION (a='2')
      
       ALTER TABLE viewName RECOVER PARTITIONS
      
       ALTER TABLE viewName PARTITION (a='1', b='q') RENAME TO PARTITION (a='100', b='p')
      ```
      
      In addition, `ALTER TABLE RENAME PARTITION` is unable to handle data source tables, just like the other `ALTER PARTITION` commands. We should issue an exception instead.
      
      ### How was this patch tested?
      Added a few test cases.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15004 from gatorsmile/altertable.
  8. Sep 14, 2016
    • [SPARK-17463][CORE] Make CollectionAccumulator and SetAccumulator's value can be read thread-safely · e33bfaed
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Make the values of CollectionAccumulator and SetAccumulator readable in a thread-safe way, to fix the ConcurrentModificationException reported in [JIRA](https://issues.apache.org/jira/browse/SPARK-17463).
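      A conceptual sketch of the fix (the class name is illustrative, not Spark's actual accumulator code): readers get a copy of the underlying list, so they never iterate a collection that tasks are still mutating:

      ```scala
      import java.util.{ArrayList, Collections, List => JList}

      class ThreadSafeCollectionAccumulator[T] {
        private val buffer: JList[T] = Collections.synchronizedList(new ArrayList[T]())

        def add(v: T): Unit = buffer.add(v)

        // Return a defensive copy under the list's lock instead of the live collection.
        def value: JList[T] = buffer.synchronized { new ArrayList[T](buffer) }
      }
      ```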
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15063 from zsxwing/SPARK-17463.
    • [SPARK-10747][SQL] Support NULLS FIRST|LAST clause in ORDER BY · 040e4697
      Xin Wu authored
      ## What changes were proposed in this pull request?
      Currently, the ORDER BY clause positions null values according to the sorting order (ASC|DESC), treating a null value as always smaller than non-null values.
      However, the SQL:2003 standard supports NULLS FIRST and NULLS LAST to let users specify whether null values should be returned first or last, regardless of the sorting order (ASC|DESC).
      
      This PR is to support this new feature.
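      A usage sketch of the new syntax (the table and column names are assumed):

      ```scala
      // Assumes an active SparkSession `spark` and a table scores(name STRING, score DOUBLE).
      spark.sql("SELECT name, score FROM scores ORDER BY score DESC NULLS LAST")
      spark.sql("SELECT name, score FROM scores ORDER BY score ASC NULLS FIRST")
      ```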
      
      ## How was this patch tested?
      New test cases are added to test NULLS FIRST|LAST for regular select queries and windowing queries.
      
      
      Author: Xin Wu <xinwu@us.ibm.com>
      
      Closes #14842 from xwu0226/SPARK-10747.
    • [MINOR][SQL] Add missing functions for some options in SQLConf and use them where applicable · a79838bd
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      I first thought these functions were missing because the options are somewhat hidden, but it seems they are simply missing.

      For example, `spark.sql.parquet.mergeSchema` is documented in [sql-programming-guide.md](https://github.com/apache/spark/blob/master/docs/sql-programming-guide.md) but its accessor function is missing, whereas many options such as `spark.sql.join.preferSortMergeJoin` are not documented yet each have their own function.

      So, this PR suggests making them consistent by adding the missing functions for some options in `SQLConf` and using them where applicable, in order to make the code more readable.
      
      ## How was this patch tested?
      
      Existing tests should cover this.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #14678 from HyukjinKwon/sqlconf-cleanup.
    • [SPARK-17514] df.take(1) and df.limit(1).collect() should perform the same in Python · 6d06ff6f
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
      In PySpark, `df.take(1)` runs a single-stage job which computes only one partition of the DataFrame, while `df.limit(1).collect()` computes all partitions and runs a two-stage job. This difference in performance is confusing.
      
      The reason why `limit(1).collect()` is so much slower is that `collect()` internally maps to `df.rdd.<some-pyspark-conversions>.toLocalIterator`, which causes Spark SQL to build a query where a global limit appears in the middle of the plan; this, in turn, ends up being executed inefficiently because limits in the middle of plans are now implemented by repartitioning to a single task rather than by running a `take()` job on the driver (this was done in #7334, a patch which was a prerequisite to allowing partition-local limits to be pushed beneath unions, etc.).
      
      In order to fix this performance problem I think that we should generalize the fix from SPARK-10731 / #8876 so that `DataFrame.collect()` also delegates to the Scala implementation and shares the same performance properties. This patch modifies `DataFrame.collect()` to first collect all results to the driver and then pass them to Python, allowing this query to be planned using Spark's `CollectLimit` optimizations.
      
      ## How was this patch tested?
      
      Added a regression test in `sql/tests.py` which asserts that the expected number of jobs, stages, and tasks are run for both queries.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15068 from JoshRosen/pyspark-collect-limit.
    • [SPARK-17409][SQL] Do Not Optimize Query in CTAS More Than Once · 52738d4e
      gatorsmile authored
      ### What changes were proposed in this pull request?
      As explained in https://github.com/apache/spark/pull/14797:
      >Some analyzer rules have assumptions on logical plans, and the optimizer may break these assumptions; we should not pass an optimized query plan into QueryExecution (it will be analyzed again), otherwise we may hit some weird bugs.
      For example, we have a rule for decimal calculation that promotes the precision before binary operations, and uses PromotePrecision as a placeholder to indicate that the rule should not apply twice. But an Optimizer rule will remove this placeholder, breaking the assumption, so the rule gets applied twice and causes a wrong result.
      
      We should not optimize the query in CTAS more than once. For example,
      ```Scala
      spark.range(99, 101).createOrReplaceTempView("tab1")
      val sqlStmt = "SELECT id, cast(id as long) * cast('1.0' as decimal(38, 18)) as num FROM tab1"
      sql(s"CREATE TABLE tab2 USING PARQUET AS $sqlStmt")
      checkAnswer(spark.table("tab2"), sql(sqlStmt))
      ```
      Before this PR, the results do not match
      ```
      == Results ==
      !== Correct Answer - 2 ==       == Spark Answer - 2 ==
      ![100,100.000000000000000000]   [100,null]
       [99,99.000000000000000000]     [99,99.000000000000000000]
      ```
      After this PR, the results match.
      ```
      +---+----------------------+
      |id |num                   |
      +---+----------------------+
      |99 |99.000000000000000000 |
      |100|100.000000000000000000|
      +---+----------------------+
      ```
      
      In this PR, we do not treat the `query` in CTAS as a child. Thus, the `query` will not be optimized when optimizing the CTAS statement. However, we still need to analyze it for normalizing and verifying the CTAS in the Analyzer. Thus, we do it in the analyzer rule `PreprocessDDL`, because so far only this rule needs the analyzed plan of the `query`.
      
      ### How was this patch tested?
      Added a test
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15048 from gatorsmile/ctasOptimized.