  1. Sep 22, 2016
    • gatorsmile's avatar
      [SPARK-17492][SQL] Fix Reading Cataloged Data Sources without Extending SchemaRelationProvider · 3a80f92f
      gatorsmile authored
      ### What changes were proposed in this pull request?
      For data sources that do not extend `SchemaRelationProvider`, we expect users not to specify schemas when creating tables. If a schema is input by the user, an exception is issued.
      
      Since Spark 2.1, for any data source, we store the schema in the metastore catalog to avoid inferring it every time. Thus, when reading a cataloged data source table, the schema can be read from the metastore catalog. In this case, we also get an exception. For example,
      
      ```Scala
      sql(
        s"""
           |CREATE TABLE relationProvierWithSchema
           |USING org.apache.spark.sql.sources.SimpleScanSource
           |OPTIONS (
           |  From '1',
           |  To '10'
           |)
         """.stripMargin)
      spark.table(tableName).show()
      ```
      ```
      org.apache.spark.sql.sources.SimpleScanSource does not allow user-specified schemas.;
      ```
      
      This PR is to fix the above issue. When building a data source, we introduce a flag `isSchemaFromUsers` to indicate whether the schema was really input by users. If true, we issue an exception. Otherwise, we call the `createRelation` of `RelationProvider` to generate the `BaseRelation`, which contains the actual schema.
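
      A minimal sketch of that branching (a hypothetical helper, not the actual `DataSource` code; the `RelationProvider`/`SchemaRelationProvider` signatures are the real ones):

      ```scala
      import org.apache.spark.sql.SQLContext
      import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, SchemaRelationProvider}
      import org.apache.spark.sql.types.StructType

      // Hypothetical helper illustrating the branching described above.
      def buildRelation(
          sqlContext: SQLContext,
          provider: AnyRef,
          options: Map[String, String],
          schema: Option[StructType],
          isSchemaFromUsers: Boolean): BaseRelation = provider match {
        case p: SchemaRelationProvider if schema.isDefined =>
          p.createRelation(sqlContext, options, schema.get)
        case p: RelationProvider if schema.isDefined && isSchemaFromUsers =>
          // Only a schema actually typed in by the user is rejected; a schema read
          // back from the metastore catalog falls through to the next case.
          sys.error(s"${p.getClass.getName} does not allow user-specified schemas.")
        case p: RelationProvider =>
          p.createRelation(sqlContext, options) // the relation carries its actual schema
      }
      ```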
      
      ### How was this patch tested?
      Added a few cases.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15046 from gatorsmile/tempViewCases.
      3a80f92f
    • Yadong Qi's avatar
      [SPARK-17425][SQL] Override sameResult in HiveTableScanExec to make... · cb324f61
      Yadong Qi authored
      [SPARK-17425][SQL] Override sameResult in HiveTableScanExec to make ReuseExchange work in text format table
      
      ## What changes were proposed in this pull request?
      This PR overrides `sameResult` in `HiveTableScanExec` to make `ReuseExchange` work for text format tables.
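
      The idea, as a minimal self-contained sketch (toy types, not the actual Spark classes): two scans of the same table are `sameResult` when their outputs match by name and type, ignoring the per-plan expression ids (`key#30` vs `key#34` below).

      ```scala
      // Toy sketch: compare what a scan reads while ignoring per-plan expression ids.
      case class Attr(name: String, dataType: String, exprId: Long)
      case class Scan(table: String, output: Seq[Attr]) {
        def sameResult(other: Scan): Boolean =
          table == other.table &&
            output.map(a => (a.name, a.dataType)) == other.output.map(a => (a.name, a.dataType))
      }

      val s1 = Scan("src", Seq(Attr("key", "int", 30), Attr("value", "string", 31)))
      val s2 = Scan("src", Seq(Attr("key", "int", 34), Attr("value", "string", 35)))
      assert(s1.sameResult(s2)) // ids differ, results don't: the exchange can be reused
      ```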
      
      ## How was this patch tested?
      # SQL
      ```sql
      SELECT * FROM src t1
      JOIN src t2 ON t1.key = t2.key
      JOIN src t3 ON t1.key = t3.key;
      ```
      
      # Before
      ```
      == Physical Plan ==
      *BroadcastHashJoin [key#30], [key#34], Inner, BuildRight
      :- *BroadcastHashJoin [key#30], [key#32], Inner, BuildRight
      :  :- *Filter isnotnull(key#30)
      :  :  +- HiveTableScan [key#30, value#31], MetastoreRelation default, src
      :  +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
      :     +- *Filter isnotnull(key#32)
      :        +- HiveTableScan [key#32, value#33], MetastoreRelation default, src
      +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
         +- *Filter isnotnull(key#34)
            +- HiveTableScan [key#34, value#35], MetastoreRelation default, src
      ```
      
      # After
      ```
      == Physical Plan ==
      *BroadcastHashJoin [key#2], [key#6], Inner, BuildRight
      :- *BroadcastHashJoin [key#2], [key#4], Inner, BuildRight
      :  :- *Filter isnotnull(key#2)
      :  :  +- HiveTableScan [key#2, value#3], MetastoreRelation default, src
      :  +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
      :     +- *Filter isnotnull(key#4)
      :        +- HiveTableScan [key#4, value#5], MetastoreRelation default, src
      +- ReusedExchange [key#6, value#7], BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
      ```
      
      cc: davies cloud-fan
      
      Author: Yadong Qi <qiyadong2010@gmail.com>
      
      Closes #14988 from watermen/SPARK-17425.
      cb324f61
  2. Sep 21, 2016
    • Wenchen Fan's avatar
      [SPARK-17609][SQL] SessionCatalog.tableExists should not check temp view · b50b34f5
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      After #15054, no place in Spark SQL needs `SessionCatalog.tableExists` to check temp views, so this PR makes `SessionCatalog.tableExists` check only permanent tables/views and removes some hacks.
      
      This PR also improves `getTempViewOrPermanentTableMetadata`, introduced in #15054, to simplify the code.
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15160 from cloud-fan/exists.
      b50b34f5
    • Davies Liu's avatar
      [SPARK-17494][SQL] changePrecision() on compact decimal should respect rounding mode · 8bde03bf
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Floor()/Ceil() of a decimal is implemented via changePrecision() by passing a rounding mode, but the rounding mode is not respected when the decimal is in compact mode (i.e., it fits within a Long).
      
      This updates changePrecision() to respect the rounding mode, which can be ROUND_FLOOR, ROUND_CEIL, ROUND_HALF_UP, or ROUND_HALF_EVEN.
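
      The arithmetic at stake, as an illustrative sketch on a Long-backed unscaled value (plain `java.math` code, not Spark's `Decimal` internals; Java's `RoundingMode` names stand in for the modes listed above):

      ```scala
      import java.math.{BigDecimal => JBigDecimal, RoundingMode}

      // Rescale a compact (Long-backed) unscaled value, honoring the rounding mode.
      def rescale(unscaled: Long, scale: Int, newScale: Int, mode: RoundingMode): Long =
        JBigDecimal.valueOf(unscaled, scale).setScale(newScale, mode).unscaledValue().longValueExact()

      rescale(-55L, 1, 0, RoundingMode.FLOOR)     // -5.5 -> -6
      rescale(-55L, 1, 0, RoundingMode.CEILING)   // -5.5 -> -5
      rescale(-55L, 1, 0, RoundingMode.HALF_UP)   // -5.5 -> -6
      rescale(-55L, 1, 0, RoundingMode.HALF_EVEN) // -5.5 -> -6 (nearest even)
      ```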
      
      ## How was this patch tested?
      
      Added regression tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15154 from davies/decimal_round.
      8bde03bf
    • Michael Armbrust's avatar
      [SPARK-17627] Mark Streaming Providers Experimental · 3497ebe5
      Michael Armbrust authored
      All of structured streaming is experimental in its first release.  We missed the annotation on two of the APIs.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #15188 from marmbrus/experimentalApi.
      3497ebe5
    • Burak Yavuz's avatar
      [SPARK-17569] Make StructuredStreaming FileStreamSource batch generation faster · 7cbe2164
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      While getting the batch for a `FileStreamSource` in StructuredStreaming, we know exactly which files to take. We have already verified that they exist and committed them to a metadata log. When creating the FileSourceRelation for an incremental execution, however, the code checks the existence of every single file once again!
      
      When you have 100,000s of files in a folder, creating the first batch can take 2+ hours when working with S3! This PR disables that check.
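
      Our reading of the change, as a hedged sketch (the helper and the `checkFilesExist` flag are assumptions about the API shape, not verbatim code):

      ```scala
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.execution.datasources.DataSource
      import org.apache.spark.sql.sources.BaseRelation
      import org.apache.spark.sql.types.StructType

      // The batch's files were already verified and committed to the metadata log,
      // so skip the per-file existence check (one FS call per file) when resolving.
      def batchRelation(
          spark: SparkSession,
          paths: Seq[String],
          schema: StructType,
          format: String): BaseRelation =
        DataSource(spark, className = format, paths = paths, userSpecifiedSchema = Some(schema))
          .resolveRelation(checkFilesExist = false)
      ```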
      
      ## How was this patch tested?
      
      Added a unit test to `FileStreamSource`.
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #15122 from brkyvz/SPARK-17569.
      7cbe2164
    • Liang-Chi Hsieh's avatar
      [SPARK-17590][SQL] Analyze CTE definitions at once and allow CTE subquery to define CTE · 248922fd
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      We substitute the logical plan with CTE definitions in the analyzer rule CTESubstitution. A CTE definition can be used multiple times in the logical plan, and its analyzed logical plan should be the same each time, so we should not analyze CTE definitions repeatedly when they are reused in the query.
      
      By analyzing CTE definitions before substitution, we can also support defining a CTE in a subquery.
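
      For example, a query along these lines (illustrative, not from the PR) becomes legal:

      ```scala
      spark.sql("""
        SELECT * FROM (
          WITH t AS (SELECT 1 AS a)
          SELECT a FROM t
        ) s
      """).show()
      ```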
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      
      Closes #15146 from viirya/cte-analysis-once.
      248922fd
    • hyukjinkwon's avatar
      [SPARK-17583][SQL] Remove useless rowSeparator variable and set auto-expanding... · 25a020be
      hyukjinkwon authored
      [SPARK-17583][SQL] Remove useless rowSeparator variable and set auto-expanding buffer as default for maxCharsPerColumn option in CSV
      
      ## What changes were proposed in this pull request?
      
      This PR includes the changes below:
      
      1. Upgrade Univocity library from 2.1.1 to 2.2.1
      
        This includes some performance improvements and also enables the auto-expanding buffer for the `maxCharsPerColumn` option in CSV. Please refer to the [release notes](https://github.com/uniVocity/univocity-parsers/releases).
      
      2. Remove useless `rowSeparator` variable existing in `CSVOptions`
      
        We have this unused variable in [CSVOptions.scala#L127](https://github.com/apache/spark/blob/29952ed096fd2a0a19079933ff691671d6f00835/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala#L127), but it seems to cause confusion because it does not actually handle `\r\n`. For example, we have an open issue about this variable, [SPARK-17227](https://issues.apache.org/jira/browse/SPARK-17227).
      
        This variable is effectively unused because we rely on Hadoop's `LineRecordReader`, which handles both `\n` and `\r\n`.
      
      3. Set the default value of `maxCharsPerColumn` to auto-expanding.
      
        We currently set 1000000 as the length limit for each column. It'd be more sensible to allow auto-expanding rather than a fixed length by default (see the usage sketch after this list).
      
        For reference, using `-1` is described in the release notes for [2.2.0](https://github.com/uniVocity/univocity-parsers/releases/tag/v2.2.0).
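
      A usage sketch (`maxCharsPerColumn` is the real CSV option; the path is made up):

      ```scala
      val df = spark.read
        .option("header", "true")
        .option("maxCharsPerColumn", "-1") // -1 = auto-expanding buffer, the new default
        .csv("/path/to/data.csv")
      ```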
      
      ## How was this patch tested?
      
      N/A
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15138 from HyukjinKwon/SPARK-17583.
      25a020be
    • VinceShieh's avatar
      [SPARK-17219][ML] Add NaN value handling in Bucketizer · 57dc326b
      VinceShieh authored
      ## What changes were proposed in this pull request?
      This PR fixes an issue when Bucketizer is called on a dataset containing NaN values. Since NaN values can still be useful to users, Bucketizer should reserve one extra bucket for them in such cases, instead of throwing an exception.
      Before:
      ```
      Bucketizer.transform on NaN value threw an illegal exception.
      ```
      After:
      ```
      NaN values will be grouped in an extra bucket.
      ```
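
      A usage sketch under the new behavior (standard `Bucketizer` API; the data and column names are made up):

      ```scala
      import org.apache.spark.ml.feature.Bucketizer

      val df = spark.createDataFrame(Seq(-0.5, 0.3, 1.5, Double.NaN).map(Tuple1.apply))
        .toDF("features")

      val bucketizer = new Bucketizer()
        .setInputCol("features")
        .setOutputCol("bucket")
        .setSplits(Array(Double.NegativeInfinity, 0.0, 1.0, Double.PositiveInfinity))

      // Three regular buckets (0, 1, 2); the NaN row lands in the extra bucket
      // instead of raising an exception.
      bucketizer.transform(df).show()
      ```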
      ## How was this patch tested?
      New test cases added in `BucketizerSuite`.
      Signed-off-by: VinceShieh <vincent.xie@intel.com>
      
      Author: VinceShieh <vincent.xie@intel.com>
      
      Closes #14858 from VinceShieh/spark-17219.
      57dc326b
    • Burak Yavuz's avatar
      [SPARK-17599] Prevent ListingFileCatalog from failing if path doesn't exist · 28fafa3e
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      The `ListingFileCatalog` lists files given a set of resolved paths. If a folder is deleted at any time between when the paths were resolved and when the file catalog checks for it, the Spark job fails. This can abruptly stop long-running StructuredStreaming jobs, for example.
      
      Folders may be deleted by users or automatically by retention policies. These cases should not prevent jobs from completing successfully.
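
      A hedged sketch of the tolerant listing (illustrative helper, not the actual Spark code):

      ```scala
      import java.io.FileNotFoundException
      import org.apache.hadoop.conf.Configuration
      import org.apache.hadoop.fs.{FileStatus, Path}

      // List each path, treating a folder that vanished in the meantime as empty
      // rather than failing the whole job.
      def safeList(paths: Seq[Path], conf: Configuration): Seq[FileStatus] =
        paths.flatMap { p =>
          try p.getFileSystem(conf).listStatus(p).toSeq
          catch { case _: FileNotFoundException => Nil }
        }
      ```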
      
      ## How was this patch tested?
      
      Unit test in `FileCatalogSuite`
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #15153 from brkyvz/SPARK-17599.
      28fafa3e
    • Sean Zhong's avatar
      [SPARK-17617][SQL] Remainder(%) expression.eval returns incorrect result on double value · 3977223a
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      Remainder(%) expression's `eval()` returns an incorrect result when the dividend is a large double. The reason is that Remainder converts the double dividend to a decimal to compute "%", which loses precision.
      
      This bug only affects `eval()`, which is used by constant folding; the codegen path is not impacted.
      
      ### Before change
      ```
      scala> -5083676433652386516D % 10
      res2: Double = -6.0
      
      scala> spark.sql("select -5083676433652386516D % 10 as a").show
      +---+
      |  a|
      +---+
      |0.0|
      +---+
      ```
      
      ### After change
      ```
      scala> spark.sql("select -5083676433652386516D % 10 as a").show
      +----+
      |   a|
      +----+
      |-6.0|
      +----+
      ```
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #15171 from clockfly/SPARK-17617.
      3977223a
    • wm624@hotmail.com's avatar
      [CORE][DOC] Fix errors in comments · 61876a42
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      While reading the source code of CORE and SQL core, I found some minor errors in comments, such as extra spaces, missing blank lines, and grammar errors.
      
      I fixed these minor errors and might find more during my source code study.
      
      ## How was this patch tested?
      Manual build.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #15151 from wangmiao1981/mem.
      61876a42
    • jerryshao's avatar
      [SPARK-15698][SQL][STREAMING][FOLLW-UP] Fix FileStream source and sink log get configuration issue · e48ebc4e
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      This issue was introduced in the previous commit of SPARK-15698, which mistakenly changed the way configuration is retrieved. This follow-up PR reverts it to the original approach.
      
      ## How was this patch tested?
      
      N/A
      
      Ping zsxwing, please review again; sorry for the inconvenience. Thanks a lot.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #15173 from jerryshao/SPARK-15698-follow.
      e48ebc4e
  3. Sep 20, 2016
    • petermaxlee's avatar
      [SPARK-17513][SQL] Make StreamExecution garbage-collect its metadata · 976f3b12
      petermaxlee authored
      ## What changes were proposed in this pull request?
      This PR modifies StreamExecution so that it discards metadata for batches that have already been fully processed. I used the purge method that was added as part of SPARK-17235.
      
      This is a resubmission of #15126, which was based on work by frreiss in #15067, but fixes the test case along with some typos.
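
      The purge semantics, as a minimal self-contained toy (illustrative; Spark's `MetadataLog.purge` operates on checkpoint files, not an in-memory map):

      ```scala
      import scala.collection.mutable

      // Entries for batches older than the threshold are discarded once those
      // batches have been fully processed.
      class ToyMetadataLog[T] {
        private val entries = mutable.Map.empty[Long, T]
        def add(batchId: Long, metadata: T): Unit = entries(batchId) = metadata
        def purge(thresholdBatchId: Long): Unit =
          entries.keys.filter(_ < thresholdBatchId).toList.foreach(entries.remove)
      }
      ```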
      
      ## How was this patch tested?
      A new test case in StreamingQuerySuite. The test case would fail without the changes in this pull request.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      
      Closes #15166 from petermaxlee/SPARK-17513-2.
      976f3b12
    • Yin Huai's avatar
      [SPARK-17549][SQL] Revert "[] Only collect table size stat in driver for cached relation." · 9ac68dbc
      Yin Huai authored
      This reverts commit 39e2bad6 because of the problem mentioned at https://issues.apache.org/jira/browse/SPARK-17549?focusedCommentId=15505060&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15505060
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #15157 from yhuai/revert-SPARK-17549.
      9ac68dbc
    • jerryshao's avatar
      [SPARK-15698][SQL][STREAMING] Add the ability to remove the old MetadataLog in FileStreamSource · a6aade00
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      The current `metadataLog` in `FileStreamSource` adds a checkpoint file for each batch but has no ability to remove/compact old entries, which leads to a large number of small files over long runs. So here we propose to compact the old logs into one file. The method is quite similar to `FileStreamSinkLog` but simpler.
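
      A hedged sketch of the compaction trigger (the interval and names are illustrative):

      ```scala
      val compactInterval = 10

      // Every `compactInterval` batches, fold all earlier log entries into a single
      // compact file so the log stops accumulating one small file per batch.
      def isCompactionBatch(batchId: Long): Boolean = (batchId + 1) % compactInterval == 0
      ```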
      
      ## How was this patch tested?
      
      Unit test added.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #13513 from jerryshao/SPARK-15698.
      a6aade00
    • Wenchen Fan's avatar
      [SPARK-17051][SQL] we should use hadoopConf in InsertIntoHiveTable · eb004c66
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Hive confs in hive-site.xml are loaded into `hadoopConf`, so we should use `hadoopConf` in `InsertIntoHiveTable` instead of `SessionState.conf`.
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #14634 from cloud-fan/bug.
      eb004c66
    • gatorsmile's avatar
      [SPARK-17502][SQL] Fix Multiple Bugs in DDL Statements on Temporary Views · d5ec5dbb
      gatorsmile authored
      ### What changes were proposed in this pull request?
      - When the permanent tables/views do not exist but the temporary view exists, the expected error should be `NoSuchTableException` for partition-related ALTER TABLE commands. However, it always reports a confusing error message. For example,
      ```
      Partition spec is invalid. The spec (a, b) must match the partition spec () defined in table '`testview`';
      ```
      - When the permanent tables/views do not exist but the temporary view exists, the expected error should be `NoSuchTableException` for `ALTER TABLE ... UNSET TBLPROPERTIES`. However, it reports a missing table property. For example,
      ```
      Attempted to unset non-existent property 'p' in table '`testView`';
      ```
      - When `ANALYZE TABLE` is called on a view or a temporary view, we should issue an error message. However, it reports a strange error:
      ```
      ANALYZE TABLE is not supported for Project
      ```
      
      - When inserting into a temporary view that is generated from `Range`, we will get the following error message:
      ```
      assertion failed: No plan for 'InsertIntoTable Range (0, 10, step=1, splits=Some(1)), false, false
      +- Project [1 AS 1#20]
         +- OneRowRelation$
      ```
      
      This PR is to fix the above four issues.
      
      ### How was this patch tested?
      Added multiple test cases
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15054 from gatorsmile/tempViewDDL.
      d5ec5dbb
    • Wenchen Fan's avatar
      f039d964
    • petermaxlee's avatar
      [SPARK-17513][SQL] Make StreamExecution garbage-collect its metadata · be9d57fc
      petermaxlee authored
      ## What changes were proposed in this pull request?
      This PR modifies StreamExecution such that it discards metadata for batches that have already been fully processed. I used the purge method that was added as part of SPARK-17235.
      
      This is based on work by frreiss in #15067, but fixes the test case along with some typos.
      
      ## How was this patch tested?
      A new test case in StreamingQuerySuite. The test case would fail without the changes in this pull request.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      Author: frreiss <frreiss@us.ibm.com>
      
      Closes #15126 from petermaxlee/SPARK-17513.
      be9d57fc
  4. Sep 19, 2016
    • Josh Rosen's avatar
      [SPARK-17160] Properly escape field names in code-generated error messages · e719b1c0
      Josh Rosen authored
      This patch addresses a corner-case escaping bug where field names which contain special characters were unsafely interpolated into error message string literals in generated Java code, leading to compilation errors.
      
      This patch addresses these issues by using `addReferenceObj` to store the error messages as string fields rather than inline string constants.
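
      A self-contained illustration of the bug class being fixed (our example; the field name is made up):

      ```scala
      // A legal Spark field name that breaks naive interpolation into Java source.
      val fieldName = """bad"name"""
      // Unsafe: the name lands inside a Java string literal and won't compile.
      val unsafe = s"""throw new RuntimeException("$fieldName is null");"""
      println(unsafe) // throw new RuntimeException("bad"name is null");
      // The fix stores the message via addReferenceObj and emits only the generated
      // accessor, so no user-controlled text is spliced into a source literal.
      ```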
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15156 from JoshRosen/SPARK-17160.
      e719b1c0
    • Davies Liu's avatar
      [SPARK-17100] [SQL] fix Python udf in filter on top of outer join · d8104158
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      In the optimizer, we try to evaluate the condition to see whether it's nullable or not, but some expressions are not evaluable, so we should check that before evaluating them.
      
      ## How was this patch tested?
      
      Added regression tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15103 from davies/udf_join.
      d8104158
    • Davies Liu's avatar
      [SPARK-16439] [SQL] bring back the separator in SQL UI · e0632062
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Currently, a SQL metric looks like `number of rows: 111111111111`, which makes it very hard to read how large the number is. A separator was added by #12425 but removed by #14142 because the separator looked odd in some locales (for example, pl_PL). This PR adds it back, always using "," as the separator, since the SQL UI is entirely in English.
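
      A sketch of the locale pinning (standard `java.text` API; the exact call site in Spark may differ):

      ```scala
      import java.text.NumberFormat
      import java.util.Locale

      // Format with Locale.US so the grouping separator is always "," regardless
      // of the JVM's default locale.
      val formatter = NumberFormat.getIntegerInstance(Locale.US)
      formatter.format(111111111111L) // "111,111,111,111"
      ```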
      
      ## How was this patch tested?
      
      Existing tests.
      ![metrics](https://cloud.githubusercontent.com/assets/40902/14573908/21ad2f00-030d-11e6-9e2c-c544f30039ea.png)
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15106 from davies/metric_sep.
      e0632062
    • Sean Owen's avatar
      [SPARK-17297][DOCS] Clarify window/slide duration as absolute time, not relative to a calendar · d720a401
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Clarify that slide and window duration are absolute, and not relative to a calendar.
      
      ## How was this patch tested?
      
      Doc build (no functional change)
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15142 from srowen/SPARK-17297.
      d720a401
  5. Sep 18, 2016
    • petermaxlee's avatar
      [SPARK-17571][SQL] AssertOnQuery.condition should always return Boolean value · 8f0c35a4
      petermaxlee authored
      ## What changes were proposed in this pull request?
      AssertOnQuery has two apply constructors: one that accepts a closure returning Boolean, and another that accepts a closure returning Unit. This is confusing because developers could mistakenly think that AssertOnQuery always requires a boolean return type and verifies the result, when in fact the value of the last statement is ignored in one of the constructors.
      
      This pull request makes the two constructors consistent by always requiring a boolean value. Overall it makes the test suites more robust against developer errors.
      
      As evidence of the confusing behavior, this change also identified a bug in an existing test case due to file system time granularity. This pull request fixes that test case as well.
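
      The constructor shape after the change, heavily simplified (the real AssertOnQuery wraps a StreamExecution predicate):

      ```scala
      // Both overloads demand a Boolean-returning closure; there is no Unit
      // variant that silently ignores the value of the last statement.
      case class AssertOnQuery(condition: () => Boolean, message: String)

      object AssertOnQuery {
        def apply(condition: () => Boolean): AssertOnQuery =
          new AssertOnQuery(condition, "")
        def apply(message: String)(condition: () => Boolean): AssertOnQuery =
          new AssertOnQuery(condition, message)
      }

      AssertOnQuery("stream made progress")(() => 1 + 1 == 2)
      ```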
      
      ## How was this patch tested?
      This is a test only change.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      
      Closes #15127 from petermaxlee/SPARK-17571.
      8f0c35a4
    • Liwei Lin's avatar
      [SPARK-16462][SPARK-16460][SPARK-15144][SQL] Make CSV cast null values properly · 1dbb725d
      Liwei Lin authored
      ## Problem
      
      CSV in Spark 2.0.0:
      - does not read null values back correctly for certain data types such as `Boolean`, `TimestampType`, `DateType` -- this is a regression compared to 1.6;
      - does not read empty values (specified by `options.nullValue`) as `null`s for `StringType` -- this is compatible with 1.6 but leads to problems like SPARK-16903.
      
      ## What changes were proposed in this pull request?
      
      This patch makes changes to read all empty values back as `null`s.
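
      A usage sketch (real reader options; the schema and path are made up):

      ```scala
      import org.apache.spark.sql.types._

      val schema = StructType(Seq(
        StructField("id", IntegerType),
        StructField("flag", BooleanType),
        StructField("name", StringType)))

      // Empty fields (the default nullValue) now come back as null for every
      // type, including StringType.
      val df = spark.read
        .schema(schema)
        .option("nullValue", "")
        .csv("/path/to/data.csv")
      ```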
      
      ## How was this patch tested?
      
      New test cases.
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #14118 from lw-lin/csv-cast-null.
      1dbb725d
    • jiangxingbo's avatar
      [SPARK-17506][SQL] Improve the check double values equality rule. · 5d3f4615
      jiangxingbo authored
      ## What changes were proposed in this pull request?
      
      In `ExpressionEvalHelper`, we check the equality of two double values by testing whether the expected value is within the range [target - tolerance, target + tolerance], but this can produce a false negative when the compared values are very large.
      Before:
      ```
      val1 = 1.6358558070241E306
      val2 = 1.6358558070240974E306
      ExpressionEvalHelper.compareResults(val1, val2)
      false
      ```
      In fact, `val1` and `val2` are the same value with different precisions; we should tolerate this case by comparing within a percentage range, e.g., the expected value is within the range [target - target * tolerance_percentage, target + target * tolerance_percentage].
      After:
      ```
      val1 = 1.6358558070241E306
      val2 = 1.6358558070240974E306
      ExpressionEvalHelper.compareResults(val1, val2)
      true
      ```
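
      The relative comparison, as a hedged sketch (the helper name and tolerance are illustrative; the exact code in `ExpressionEvalHelper` may differ):

      ```scala
      // Compare within a percentage of the expected magnitude rather than a fixed
      // absolute window, so huge doubles with tiny representation noise still match.
      def relativeEquals(expected: Double, actual: Double, tolerance: Double = 1e-8): Boolean =
        math.abs(expected - actual) <= math.abs(expected) * tolerance

      relativeEquals(1.6358558070241e306, 1.6358558070240974e306) // true
      relativeEquals(1.0, 1.1)                                    // false
      ```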
      
      ## How was this patch tested?
      
      Existing test cases.
      
      Author: jiangxingbo <jiangxb1987@gmail.com>
      
      Closes #15059 from jiangxb1987/deq.
      5d3f4615
    • Wenchen Fan's avatar
      [SPARK-17541][SQL] fix some DDL bugs about table management when same-name temp view exists · 3fe630d3
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      In `SessionCatalog`, we have several operations (`tableExists`, `dropTable`, `lookupRelation`, etc.) that handle both temp views and metastore tables/views. This brings some bugs to DDL commands that want to handle only temp views or only metastore tables/views. These bugs are:
      
      1. `CREATE TABLE USING` will fail if a same-name temp view exists
      2. `Catalog.dropTempView` will un-cache and drop a metastore table if a same-name table exists
      3. `saveAsTable` will fail or have unexpected behaviour if a same-name temp view exists.
      
      These bug fixes are pulled out from https://github.com/apache/spark/pull/14962 and target both the master and 2.0 branches.
      
      ## How was this patch tested?
      
      New regression tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15099 from cloud-fan/fix-view.
      3fe630d3
    • gatorsmile's avatar
      [SPARK-17518][SQL] Block Users to Specify the Internal Data Source Provider Hive · 3a3c9ffb
      gatorsmile authored
      ### What changes were proposed in this pull request?
      In Spark 2.1, we introduced a new internal provider, `hive`, for distinguishing Hive serde tables from data source tables. This PR blocks users from specifying it in `DataFrameWriter` and SQL APIs.
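
      Illustratively, a call along these lines is now rejected (the exact error surface may differ):

      ```scala
      val df = spark.range(10).toDF("id")
      df.write.format("hive").saveAsTable("t") // fails: "hive" is an internal provider
      ```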
      
      ### How was this patch tested?
      Added a test case
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15073 from gatorsmile/formatHive.
      3a3c9ffb
  6. Sep 17, 2016
  7. Sep 16, 2016
    • Marcelo Vanzin's avatar
      [SPARK-17549][SQL] Only collect table size stat in driver for cached relation. · 39e2bad6
      Marcelo Vanzin authored
      The existing code caches all stats for all columns for each partition
      in the driver; for a large relation, this causes extreme memory usage,
      which leads to gc hell and application failures.
      
      It seems that only the size in bytes of the data is actually used in the
      driver, so instead just collect that. In executors, the full stats are
      still kept, but that's not a big problem; we expect the data to be distributed
      and thus not to incur too much memory pressure in each individual
      executor.
      
      There are also potential improvements on the executor side, since the data
      being stored currently is very wasteful (e.g. storing boxed types vs.
      primitive types for stats). But that's a separate issue.
      
      On a mildly related change, I'm also adding code to catch exceptions in the
      code generator since Janino was breaking with the test data I tried this
      patch on.
      
      Tested with unit tests and by doing a count on a very wide table (20k columns)
      with many partitions.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #15112 from vanzin/SPARK-17549.
      39e2bad6
    • Sean Owen's avatar
      [SPARK-17561][DOCS] DataFrameWriter documentation formatting problems · b9323fc9
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Fix `<ul> / <li>` problems in SQL scaladoc.
      
      ## How was this patch tested?
      
      Scaladoc build and manual verification of generated HTML.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15117 from srowen/SPARK-17561.
      b9323fc9
    • Sean Zhong's avatar
      [SPARK-17426][SQL] Refactor `TreeNode.toJSON` to avoid OOM when converting unknown fields to JSON · a425a37a
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      This PR is a follow up of SPARK-17356. The current implementation of `TreeNode.toJSON` recursively converts all fields of the TreeNode to JSON, even fields of type `Seq` or `Map`. This may trigger an out-of-memory exception in cases like:
      
      1. The Seq or Map can be very big. Converting it to JSON may take a huge amount of memory and trigger an out-of-memory error.
      2. Some user-space input may also be propagated into the plan. Such input can be of arbitrary type and may be self-referencing; trying to print it to JSON may trigger an out-of-memory or stack-overflow error.
      
      For a code example, please check the Jira description of SPARK-17426.
      
      In this PR, we refactor `TreeNode.toJSON` so that we only convert a field to a JSON string if the field is of a safe type.
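
      A hedged sketch of what such a "safe type" guard can look like (illustrative; the actual check in `TreeNode` is more complete):

      ```scala
      // Only values with a known, bounded rendering are serialized; arbitrary user
      // objects and unbounded collections are skipped rather than recursed into.
      def isSafeToJson(value: Any): Boolean = value match {
        case _: String | _: Boolean | _: Byte | _: Short | _: Int | _: Long |
             _: Float | _: Double => true
        case _ => false
      }
      ```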
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #14990 from clockfly/json_oom2.
      a425a37a
  8. Sep 15, 2016
    • Andrew Ray's avatar
      [SPARK-17458][SQL] Alias specified for aggregates in a pivot are not honored · b72486f8
      Andrew Ray authored
      ## What changes were proposed in this pull request?
      
      This change preserves aliases that are given for pivot aggregations
      
      ## How was this patch tested?
      
      New unit test
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #15111 from aray/SPARK-17458.
      b72486f8
    • Sean Zhong's avatar
      [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a... · a6b81820
      Sean Zhong authored
      [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a decimal number token when parsing SQL string
      
      ## What changes were proposed in this pull request?
      
      The Antlr lexer we use to tokenize a SQL string may wrongly tokenize a fully qualified identifier as a decimal number token. For example, table identifier `default.123_table` is wrongly tokenized as
      ```
      default // Matches lexer rule IDENTIFIER
      .123 // Matches lexer rule DECIMAL_VALUE
      _TABLE // Matches lexer rule IDENTIFIER
      ```
      
      The correct tokenization for `default.123_table` should be:
      ```
      default // Matches lexer rule IDENTIFIER,
      . // Matches a single dot
      123_TABLE // Matches lexer rule IDENTIFIER
      ```
      
      This PR fixes the Antlr grammar so that it tokenizes fully qualified identifiers correctly:
      1. Fully qualified table name can be parsed correctly. For example, `select * from database.123_suffix`.
      2. Fully qualified column name can be parsed correctly, for example `select a.123_suffix from a`.
      
      ### Before change
      
      #### Case 1: Failed to parse fully qualified column name
      
      ```
      scala> spark.sql("select a.123_column from a").show
      org.apache.spark.sql.catalyst.parser.ParseException:
      extraneous input '.123' expecting {<EOF>,
      ...
      , IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 8)
      == SQL ==
      select a.123_column from a
      --------^^^
      ```
      
      #### Case 2: Failed to parse fully qualified table name
      ```
      scala> spark.sql("select * from default.123_table")
      org.apache.spark.sql.catalyst.parser.ParseException:
      extraneous input '.123' expecting {<EOF>,
      ...
      IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 21)
      
      == SQL ==
      select * from default.123_table
      ---------------------^^^
      ```
      
      ### After Change
      
      #### Case 1: fully qualified column name, no ParseException thrown
      ```
      scala> spark.sql("select a.123_column from a").show
      ```
      
      #### Case 2: fully qualified table name, no ParseException thrown
      ```
      scala> spark.sql("select * from default.123_table")
      ```
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #15006 from clockfly/SPARK-17364.
      a6b81820
    • 岑玉海's avatar
      [SPARK-17429][SQL] use ImplicitCastInputTypes with function Length · fe767395
      岑玉海 authored
      ## What changes were proposed in this pull request?
      `select length(11);` and `select length(2.0);` return errors, while Hive accepts them.
      This PR supports casting input types implicitly for the function length.
      The correct results are:
      `select length(11)` returns 2
      `select length(2.0)` returns 3
      
      Author: 岑玉海 <261810726@qq.com>
      Author: cenyuhai <cenyuhai@didichuxing.com>
      
      Closes #15014 from cenyuhai/SPARK-17429.
      fe767395
    • Herman van Hovell's avatar
      [SPARK-17114][SQL] Fix aggregates grouped by literals with empty input · d403562e
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      This PR fixes an issue with aggregates that have empty input and use literals as their grouping keys. Such aggregates are currently interpreted as aggregates **without** grouping keys, which triggers the ungrouped code path (which always returns a single row).
      
      This PR fixes the `RemoveLiteralFromGroupExpressions` optimizer rule, which changes the semantics of the Aggregate by eliminating all literal grouping keys.
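
      Our reading of the fix, as a hedged sketch (close to the rule's shape but not verbatim): literal keys are dropped only while at least one key remains, and an all-literal grouping keeps one trivial key so empty input still yields zero rows.

      ```scala
      import org.apache.spark.sql.catalyst.expressions.Literal
      import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, LogicalPlan}
      import org.apache.spark.sql.catalyst.rules.Rule
      import org.apache.spark.sql.types.IntegerType

      object RemoveLiteralFromGroupExpressions extends Rule[LogicalPlan] {
        def apply(plan: LogicalPlan): LogicalPlan = plan transform {
          case a @ Aggregate(grouping, _, _) if grouping.nonEmpty =>
            val newGrouping = grouping.filter(!_.foldable)
            if (newGrouping.nonEmpty) a.copy(groupingExpressions = newGrouping)
            else a.copy(groupingExpressions = Seq(Literal(0, IntegerType))) // keep one key
        }
      }
      ```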
      
      ## How was this patch tested?
      Added tests to `SQLQueryTestSuite`.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15101 from hvanhovell/SPARK-17114-3.
      d403562e
    • John Muller's avatar
      [SPARK-17536][SQL] Minor performance improvement to JDBC batch inserts · 71a65825
      John Muller authored
      ## What changes were proposed in this pull request?
      
      Optimize a while loop during batch inserts
      
      ## How was this patch tested?
      
      Unit tests were done, specifically `mvn test` for sql.
      
      Author: John Muller <jmuller@us.imshealth.com>
      
      Closes #15098 from blue666man/SPARK-17536.
      71a65825