  1. Apr 24, 2016
• [SPARK-14868][BUILD] Enable NewLineAtEofChecker in checkstyle and fix lint-java errors · d34d6503
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
Spark uses the `NewLineAtEofChecker` rule for Scala via Scalastyle, and most Java code already complies with it. This PR enforces the equivalent rule, `NewlineAtEndOfFile`, explicitly via Checkstyle. It also fixes the lint-java errors accumulated since SPARK-14465. The changes are:
      
      - Adds a new line at the end of the files (19 files)
      - Fixes 25 lint-java errors (12 RedundantModifier, 6 **ArrayTypeStyle**, 2 LineLength, 2 UnusedImports, 2 RegexpSingleline, 1 ModifierOrder)
      
      ## How was this patch tested?
      
After the Jenkins test succeeds, `dev/lint-java` should pass. (Currently, Jenkins does not run lint-java.)
      ```bash
      $ dev/lint-java
      Using `mvn` from path: /usr/local/bin/mvn
      Checkstyle checks passed.
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12632 from dongjoon-hyun/SPARK-14868.
• [SPARK-14876][SQL] SparkSession should be case insensitive by default · d0ca5797
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch changes SparkSession to be case insensitive by default, in order to match other database systems.
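For illustration, a minimal sketch of what this default means in practice (assuming the Spark 2.x `SparkSession` API; names and data are illustrative):
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local").appName("case-insensitivity").getOrCreate()

// With the new default (spark.sql.caseSensitive=false), column references
// resolve regardless of case, matching most other database systems.
val df = spark.range(1).selectExpr("id AS ColA")
df.select("cola").show() // resolves ColA

// Opting back into case-sensitive resolution:
spark.conf.set("spark.sql.caseSensitive", "false")
spark.conf.set("spark.sql.caseSensitive", "true")
```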
      
      ## How was this patch tested?
      N/A - I'm sure some tests will fail and I will need to fix those.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12643 from rxin/SPARK-14876.
• Disable flaky script transformation test · 0c8e5332
      Reynold Xin authored
• [SPARK-14548][SQL] Support not greater than and not less than operator in Spark SQL · f0f1a8af
      jliwork authored
`!<` means not less than, which is equivalent to `>=`.
`!>` means not greater than, which is equivalent to `<=`.
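
A hedged usage sketch (assuming a `SparkSession` in scope; the table, data, and column names are illustrative, not from the PR):
```scala
import spark.implicits._

Seq((1, 25), (2, 40)).toDF("id", "age").createOrReplaceTempView("people")

spark.sql("SELECT id FROM people WHERE age !> 30").show() // same as age <= 30, returns id 1
spark.sql("SELECT id FROM people WHERE age !< 30").show() // same as age >= 30, returns id 2
```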
      
This PR adds support for these two operators.
      
      I've added new test cases in: DataFrameSuite, ExpressionParserSuite, JDBCSuite, PlanParserSuite, SQLQuerySuite
      
cc dilipbiswal viirya gatorsmile
      
      Author: jliwork <jiali@us.ibm.com>
      
      Closes #12316 from jliwork/SPARK-14548.
• [SPARK-14691][SQL] Simplify and Unify Error Generation for Unsupported Alter Table DDL · 337289d7
      gatorsmile authored
      #### What changes were proposed in this pull request?
So far, each unsupported Alter Table statement has been captured in a separate visit function. These should be unified to issue the same ParseException instead.
      
      This PR is to refactor the existing implementation and make error message consistent for Alter Table DDL.
      
      #### How was this patch tested?
      Updated the existing test cases and also added new test cases to ensure all the unsupported statements are covered.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      Author: xiaoli <lixiao1983@gmail.com>
      Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local>
      
      Closes #12459 from gatorsmile/cleanAlterTable.
• [DOCS][MINOR] Screenshot + minor fixes to improve reading for accumulators · 8df8a818
      Jacek Laskowski authored
      ## What changes were proposed in this pull request?
      
      Added screenshot + minor fixes to improve reading
      
      ## How was this patch tested?
      
      Manual
      
      Author: Jacek Laskowski <jacek@japila.pl>
      
      Closes #12569 from jaceklaskowski/docs-accumulators.
• [SPARK-13267][WEB UI] document the ?param arguments of the REST API; lift the… · db7113b1
      Steve Loughran authored
Adds details on the `?param` arguments to the REST API docs, with examples from the test suite.
      
      I've used the existing table, adding all the fields to the second table.
      
      see [in the pr](https://github.com/steveloughran/spark/blob/history/SPARK-13267-doc-params/docs/monitoring.md).
      
      There's a slightly more sophisticated option: make the table 3 columns wide, and for all existing entries, have the initial `td` span 2 columns. The new entries would then have an empty 1st column, param in 2nd and text in 3rd, with any examples after a `br` entry.
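
As a hedged example of the kind of call these docs now describe (the host/port and parameter values are assumptions; the endpoint shape follows docs/monitoring.md):
```scala
import scala.io.Source

// List completed applications on a local history server, filtered by date.
val url = "http://localhost:18080/api/v1/applications?status=completed&minDate=2015-02-10"
println(Source.fromURL(url).mkString)
```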
      
      Author: Steve Loughran <stevel@hortonworks.com>
      
      Closes #11152 from steveloughran/history/SPARK-13267-doc-params.
• Support single argument version of sqlContext.getConf · 902c15c5
      mathieu longtin authored
      ## What changes were proposed in this pull request?
      
      In Python, sqlContext.getConf didn't allow getting the system default (getConf with one parameter).
      
      Now the following are supported:
      ```
      sqlContext.getConf(confName)  # System default if not locally set, this is new
      sqlContext.getConf(confName, myDefault)  # myDefault if not locally set, old behavior
      ```
      
      I also added doctests to this function. The original behavior does not change.
      
      ## How was this patch tested?
      
      Manually, but doctests were added.
      
      Author: mathieu longtin <mathieu.longtin@nuance.com>
      
      Closes #12488 from mathieulongtin/pyfixgetconf3.
• [SPARK-14879][SQL] Move CreateMetastoreDataSource and CreateMetastoreDataSourceAsSelect to sql/core · 1672149c
      Yin Huai authored
      ## What changes were proposed in this pull request?
      
CreateMetastoreDataSource and CreateMetastoreDataSourceAsSelect are not Hive-specific, so this PR moves them from sql/hive to sql/core. Also, I am adding a `Command` suffix to these two classes.
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #12645 from yhuai/moveCreateDataSource.
  2. Apr 23, 2016
• [SPARK-14833][SQL][STREAMING][TEST] Refactor StreamTests to test for source... · 28538596
      Tathagata Das authored
      [SPARK-14833][SQL][STREAMING][TEST] Refactor StreamTests to test for source fault-tolerance correctly.
      
      ## What changes were proposed in this pull request?
      
Currently, StreamTest only allows testing of a streaming Dataset that explicitly wraps a source. This is different from the actual production code path, where the source object is dynamically created through a DataSource object every time a query is started. So all the fault-tolerance testing in FileSourceSuite and FileSourceStressSuite is not really testing the actual code path, as those suites just reuse the FileStreamSource object.
      
This PR fixes StreamTest and the FileSource***Suite to test this correctly. Instead of maintaining a mapping of source --> expected offset in StreamTest (which requires reuse of the source object), it now maintains a mapping of source index --> offset, so that it is independent of the source object.
      
Summary of changes:
- StreamTest refactored to keep track of offsets by source index instead of by source object
- AddData, AddTextData and AddParquetData updated to find the FileStreamSource object from an active query, so that they work with sources generated when the query is started
- Unit tests in FileSource***Suite refactored to test using a DataFrame/Dataset generated with the public API, rather than reusing the same FileStreamSource; this correctly tests fault tolerance
      
The refactoring changed a lot of indentation in FileSourceSuite, so it's recommended to hide whitespace changes with this: https://github.com/apache/spark/pull/12592/files?w=1
      
      ## How was this patch tested?
      
      Refactored unit tests.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #12592 from tdas/SPARK-14833.
• [SPARK-14838] [SQL] Set default size for ObjectType to avoid failure when... · ba5e0b87
      Liang-Chi Hsieh authored
[SPARK-14838] [SQL] Set default size for ObjectType to avoid failure when estimating sizeInBytes in ObjectProducer
      
      ## What changes were proposed in this pull request?
      
We have logical plans that produce domain objects of `ObjectType`. As we can't estimate the size of an `ObjectType`, an `UnsupportedOperationException` is thrown when trying to do so. We should set a default size for `ObjectType` to avoid this failure.
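
A minimal sketch of the idea, not Spark's actual source, assuming the fix gives `ObjectType` a fixed default size instead of throwing:
```scala
// Stand-in types for illustration only.
abstract class SketchDataType { def defaultSize: Int }

case class SketchObjectType(cls: Class[_]) extends SketchDataType {
  // The analogue of this previously threw UnsupportedOperationException,
  // which broke sizeInBytes estimation for ObjectProducer plans.
  override def defaultSize: Int = 4096 // assumed placeholder size
}
```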
      
      ## How was this patch tested?
      
      `DatasetSuite`.
      
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      
      Closes #12599 from viirya/skip-broadcast-objectproducer.
• [SPARK-12148][SPARKR] fix doc after renaming DataFrame to SparkDataFrame · 1b7eab74
      felixcheung authored
      ## What changes were proposed in this pull request?
      
Fixed inadvertent roxygen2 doc changes and added the class name change to the programming guide.
Follow-up of #12621.
      
      ## How was this patch tested?
      
      manually checked
      
      Author: felixcheung <felixcheung_m@hotmail.com>
      
      Closes #12647 from felixcheung/rdataframe.
• [SPARK-14856] Correct message in assertion for 'returning batch for wide table' · b45819ac
      tedyu authored
      ## What changes were proposed in this pull request?
      
There was a typo in the message for the second assertion in the "returning batch for wide table" test.
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: tedyu <yuzhihong@gmail.com>
      
      Closes #12639 from tedyu/master.
• [MINOR] [SQL] Fix error message string in nullSafeEval of TernaryExpression · bebb0240
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
TernaryExpression should throw the proper error message for itself:
      ```scala
         protected def nullSafeEval(input1: Any, input2: Any, input3: Any): Any =
      -    sys.error(s"BinaryExpressions must override either eval or nullSafeEval")
      +    sys.error(s"TernaryExpressions must override either eval or nullSafeEval")
      ```
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12642 from dongjoon-hyun/minor_fix_error_msg_in_ternaryexpression.
• [SPARK-14877][SQL] Remove HiveMetastoreTypes class · 162e12b0
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      It is unnecessary as DataType.catalogString largely replaces the need for this class.
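
For illustration, a small sketch of the replacement API (assuming Spark 2.x `org.apache.spark.sql.types`):
```scala
import org.apache.spark.sql.types._

// DataType.catalogString yields the metastore-style type string directly.
val schema = StructType(Seq(
  StructField("a", IntegerType),
  StructField("b", ArrayType(StringType))))
println(schema.catalogString) // struct<a:int,b:array<string>>
```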
      
      ## How was this patch tested?
      Mostly removing dead code and should be covered by existing tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12644 from rxin/SPARK-14877.
• [SPARK-14865][SQL] Better error handling for view creation. · e3c1366b
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch improves error handling in view creation. CreateViewCommand itself will analyze the view SQL query first, and if it cannot successfully analyze it, throw an AnalysisException.
      
      In addition, I also added the following two conservative guards for easier identification of Spark bugs:
      
1. If there is a bug and the generated view SQL cannot be analyzed, throw an exception at runtime. Note that this is not an AnalysisException because it is not caused by the user and more likely indicates a bug in Spark.
2. When SQLBuilder gets an unresolved plan, it will also show the plan in the error message.
      
      I also took the chance to simplify the internal implementation of CreateViewCommand, and *removed* a fallback path that would've masked an exception from before.
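
A hedged illustration of the user-facing behavior (table and column names assumed):
```scala
// The view's underlying query is analyzed up front, so a bad query now fails
// fast at CREATE VIEW time with an AnalysisException rather than later.
spark.sql("CREATE VIEW v AS SELECT no_such_column FROM some_table")
// => org.apache.spark.sql.AnalysisException
```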
      
      ## How was this patch tested?
      1. Added a unit test for the user facing error handling.
      2. Manually introduced some bugs in Spark to test the internal defensive error handling.
      3. Also added a test case to test nested views (not super relevant).
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12633 from rxin/SPARK-14865.
• [SPARK-14869][SQL] Don't mask exceptions in ResolveRelations · 890abd12
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      In order to support running SQL directly on files, we added some code in ResolveRelations to catch the exception thrown by catalog.lookupRelation and ignore it. This unfortunately masks all the exceptions. This patch changes the logic to simply test the table's existence.
      
      ## How was this patch tested?
      I manually hacked some bugs into Spark and made sure the exceptions were being propagated up.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12634 from rxin/SPARK-14869.
• [SPARK-14872][SQL] Restructure command package · 5c8a0ec9
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch restructures sql.execution.command package to break the commands into multiple files, in some logical organization: databases, tables, views, functions.
      
      I also renamed basicOperators.scala to basicLogicalOperators.scala and basicPhysicalOperators.scala.
      
      ## How was this patch tested?
      N/A - all I did was moving code around.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12636 from rxin/SPARK-14872.
• [SPARK-14871][SQL] Disable StatsReportListener to declutter output · fddd3aee
      Reynold Xin authored
      ## What changes were proposed in this pull request?
Spark SQL inherited the use of StatsReportListener from Shark. Unfortunately, it clutters the spark-sql CLI output and makes it very difficult to read the actual query results.
      
      ## How was this patch tested?
      Built and tested in spark-sql CLI.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12635 from rxin/SPARK-14871.
• [HOTFIX] disable generated aggregate map · ee6b209a
      Davies Liu authored
• Turn script transformation back on. · f0bba744
      Reynold Xin authored
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12565 from rxin/test-flaky.
• [SPARK-14594][SPARKR] check execution return status code · 39d3bc62
      felixcheung authored
      ## What changes were proposed in this pull request?
      
When the JVM backend fails without going through proper error handling (e.g., the process crashed), the R error message could be ambiguous.
      
      ```
      Error in if (returnStatus != 0) { : argument is of length zero
      ```
      
This change attempts to make the message clearer (however, one would still need to investigate why the JVM failed).
      
      ## How was this patch tested?
      
      manually
      
      Author: felixcheung <felixcheung_m@hotmail.com>
      
      Closes #12622 from felixcheung/rreturnstatus.
• Closes some open PRs that have been requested to close. · 6acc72a0
      Reynold Xin authored
      Closes #7647
      Closes #8195
      Closes #8741
      Closes #8972
      Closes #9490
      Closes #10419
      Closes #10761
      Closes #11003
      Closes #11201
      Closes #11803
      Closes #12111
      Closes #12442
• [SPARK-14873][CORE] Java sampleByKey methods take ju.Map but with Scala Double... · be0d5d3b
      Sean Owen authored
      [SPARK-14873][CORE] Java sampleByKey methods take ju.Map but with Scala Double values; results in type Object
      
      ## What changes were proposed in this pull request?
      
      Java `sampleByKey` methods should accept `Map` with `java.lang.Double` values
      
      ## How was this patch tested?
      
      Existing (updated) Jenkins tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #12637 from srowen/SPARK-14873.
• [SPARK-12148][SPARKR] SparkR: rename DataFrame to SparkDataFrame · a55fbe2a
      felixcheung authored
      ## What changes were proposed in this pull request?
      
Changed the class name defined in R from "DataFrame" to "SparkDataFrame". A popular package, S4Vector, already defines "DataFrame"; this change avoids the conflict.
      
Aside from the class name and API/roxygen2 references, SparkR APIs like `createDataFrame` and `as.DataFrame` are not changed (S4Vector does not define an `as.DataFrame`).
      
Since one would rarely reference the type/class in R, this change should have minimal to no impact on SparkR users in terms of backward compatibility.
      
      ## How was this patch tested?
      
      SparkR tests, manually loading S4Vector then SparkR package
      
      Author: felixcheung <felixcheung_m@hotmail.com>
      
      Closes #12621 from felixcheung/rdataframe.
• [MINOR][ML][MLLIB] Remove unused imports · 86ca8fef
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
Delete unused imports in ML/MLlib.
      
      ## How was this patch tested?
      unit tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #12497 from zhengruifeng/del_unused_imports.
• [SPARK-14551][SQL] Reduce number of NameNode calls in OrcRelation · e5226e30
      Rajesh Balamohan authored
      ## What changes were proposed in this pull request?
When FileSourceStrategy is used, a record reader is created, which incurs a NameNode call internally. Later, in OrcRelation.unwrapOrcStructs, it ends up reading the file information to get the ObjectInspector, which incurs an additional NameNode call. It would be good to avoid this additional call (specifically for partitioned datasets).
      
This PR adds an OrcRecordReader, very similar to OrcNewInputFormat.OrcRecordReader, with an option to expose the ObjectInspector. This eliminates the need to look up the file later to generate the object inspector, which is specifically useful for partitioned tables/datasets.
      
      ## How was this patch tested?
Ran TPC-DS queries manually and also verified by running org.apache.spark.sql.hive.orc.OrcSuite, org.apache.spark.sql.hive.orc.OrcQuerySuite, org.apache.spark.sql.hive.orc.OrcPartitionDiscoverySuite, OrcHadoopFsRelationSuite, and org.apache.spark.sql.hive.execution.HiveCompatibilitySuite.
      
      Author: Rajesh Balamohan <rbalamohan@apache.org>
      
      Closes #12319 from rajeshbalamohan/SPARK-14551.
• [SPARK-14866][SQL] Break SQLQuerySuite out into smaller test suites · 95faa731
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch breaks SQLQuerySuite out into smaller test suites. It was a little bit too large for debugging.
      
      ## How was this patch tested?
      This is a test only change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12630 from rxin/SPARK-14866.
• [SPARK-14863][SQL] Cache TreeNode's hashCode by default · bdde010e
      Josh Rosen authored
      Caching TreeNode's `hashCode` can lead to orders-of-magnitude performance improvement in certain optimizer rules when operating on huge/complex schemas.
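
A minimal sketch of the caching technique (this is not Spark's TreeNode, just the pattern it applies):
```scala
import scala.util.hashing.MurmurHash3

class Node(val label: String, val children: Seq[Node]) {
  // Computed once on first access and reused; for deep or wide trees this avoids
  // re-hashing the entire subtree every time an optimizer rule compares nodes.
  override lazy val hashCode: Int = MurmurHash3.orderedHash(label +: children)
}
```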
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #12626 from JoshRosen/cache-treenode-hashcode.
• [SPARK-14856] [SQL] returning batch correctly · 39a77e15
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
Currently, the Parquet reader decides whether to return a batch based on either the required schema or the full schema, which is not consistent; this PR fixes that.
      
      ## How was this patch tested?
      
      Added regression tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12619 from davies/fix_return_batch.
  3. Apr 22, 2016
• [SPARK-14842][SQL] Implement view creation in sql/core · c0611018
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch re-implements view creation command in sql/core, based on the pre-existing view creation command in the Hive module. This consolidates the view creation logical command and physical command into a single one, called CreateViewCommand.
      
      ## How was this patch tested?
      All the code should've been tested by existing tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12615 from rxin/SPARK-14842-2.
• [SPARK-14807] Create a compatibility module · 7dde1da9
      Yin Huai authored
      ## What changes were proposed in this pull request?
      
This PR creates a compatibility module in sql (called `hive-1-x-compatibility`), which will host HiveContext in Spark 2.0 (moving HiveContext there will be done separately). This module is not included in the assembly because only users who still want to access HiveContext need it.
      
      ## How was this patch tested?
      I manually tested `sbt/sbt -Phive package` and `mvn -Phive package -DskipTests`.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #12580 from yhuai/compatibility.
• [SPARK-14855][SQL] Add "Exec" suffix to physical operators · d7d0cad0
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch adds "Exec" suffix to all physical operators. Before this patch, Spark's physical operators and logical operators are named the same (e.g. Project could be logical.Project or execution.Project), which caused small issues in code review and bigger issues in code refactoring.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #12617 from rxin/exec-node.
• [SPARK-14832][SQL][STREAMING] Refactor DataSource to ensure schema is inferred... · c431a76d
      Tathagata Das authored
      [SPARK-14832][SQL][STREAMING] Refactor DataSource to ensure schema is inferred only once when creating a file stream
      
      ## What changes were proposed in this pull request?
      
When creating a file stream using sqlContext.read.stream(), existing files are scanned twice to find the schema:
      - Once, when creating a DataSource + StreamingRelation in the DataFrameReader.stream()
      - Again, when creating streaming Source from the DataSource, in DataSource.createSource()
      
Instead, the schema should be generated only once, at the time of creating the DataFrame; when the streaming source is created, it should just reuse that schema.
      
      The solution proposed in this PR is to add a lazy field in DataSource that caches the schema. Then streaming Source created by the DataSource can just reuse the schema.
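
A hedged sketch of the shape of the fix (names assumed; this is not Spark's actual DataSource):
```scala
import org.apache.spark.sql.types._

class FileStreamDataSourceSketch(paths: Seq[String]) {
  private def inferSchemaByScanningFiles(): StructType = {
    // In reality this scans all existing files, which is the expensive step.
    StructType(Seq(StructField("value", StringType)))
  }

  // lazy: inferred once when the streaming DataFrame is created; the streaming
  // Source created later reuses the cached value instead of re-scanning.
  lazy val sourceSchema: StructType = inferSchemaByScanningFiles()
}
```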
      
      ## How was this patch tested?
      Refactored unit tests.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #12591 from tdas/SPARK-14832.
• [SPARK-14582][SQL] increase parallelism for small tables · c25b97fc
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
This PR tries to increase the parallelism for small tables (a few big files) to reduce the query time by decreasing maxSplitBytes. The goal is to have at least one task per CPU in the cluster, if the total size of all files is bigger than openCostInBytes * 2 * nCPU.
      
For example, a small or medium table could be used as a dimension table in a huge query; this will be useful to reduce the time spent waiting for the broadcast.
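
The knobs involved, per the description above (assuming the Spark 2.x file-source config keys; the values are illustrative):
```scala
// Smaller max split size => more, smaller partitions => higher parallelism.
spark.conf.set("spark.sql.files.maxPartitionBytes", 64 * 1024 * 1024)
// Estimated cost (in bytes) of opening a file, used when packing small files.
spark.conf.set("spark.sql.files.openCostInBytes", 4 * 1024 * 1024)
```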
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12344 from davies/more_partition.
• [SPARK-14701][STREAMING] First stop the event loop, then stop the checkpoint writer in JobGenerator · fde1340c
      Liwei Lin authored
Currently, if we call `streamingContext.stop` (e.g. in a `StreamingListener.onBatchCompleted` callback) when a batch is about to complete, a `RejectedExecutionException` may get thrown from `checkpointWriter.executor`, since the `eventLoop` will try to process `DoCheckpoint` events even after `checkpointWriter.executor` was stopped.
      
      Please see [SPARK-14701](https://issues.apache.org/jira/browse/SPARK-14701) for details and stack traces.
      
      ## What changes were proposed in this pull request?
      
      Reversed the stopping order of `event loop` and `checkpoint writer`.
      
      ## How was this patch tested?
      
Existing test suites.
(No dedicated test suites were added because the change is simple to reason about.)
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #12489 from lw-lin/spark-14701.
• [SPARK-14796][SQL] Add spark.sql.optimizer.inSetConversionThreshold config option. · 3647120a
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
Currently, the `OptimizeIn` optimizer rule replaces an `In` expression with an `InSet` expression if the size of the value set is greater than a constant, 10.
This issue makes that threshold configurable via `spark.sql.optimizer.inSetConversionThreshold`.
      
After this PR, `OptimizeIn` is configurable:
      ```scala
      scala> sql("select a in (1,2,3) from (select explode(array(1,2)) a) T").explain()
      == Physical Plan ==
      WholeStageCodegen
      :  +- Project [a#7 IN (1,2,3) AS (a IN (1, 2, 3))#8]
      :     +- INPUT
      +- Generate explode([1,2]), false, false, [a#7]
         +- Scan OneRowRelation[]
      
      scala> sqlContext.setConf("spark.sql.optimizer.inSetConversionThreshold", "2")
      
      scala> sql("select a in (1,2,3) from (select explode(array(1,2)) a) T").explain()
      == Physical Plan ==
      WholeStageCodegen
      :  +- Project [a#16 INSET (1,2,3) AS (a IN (1, 2, 3))#17]
      :     +- INPUT
      +- Generate explode([1,2]), false, false, [a#16]
         +- Scan OneRowRelation[]
      ```
      
      ## How was this patch tested?
      
Passes the Jenkins tests (with a new test case).
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12562 from dongjoon-hyun/SPARK-14796.
• [SPARK-14669] [SQL] Fix some SQL metrics in codegen and added more · 0dcf9dbe
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      1. Fix the "spill size" of TungstenAggregate and Sort
      2. Rename "data size" to "peak memory" to match the actual meaning (also consistent with task metrics)
      3. Added "data size" for ShuffleExchange and BroadcastExchange
      4. Added some timing for Sort, Aggregate and BroadcastExchange (this requires another patch to work)
      
      ## How was this patch tested?
      
      Existing tests.
      ![metrics](https://cloud.githubusercontent.com/assets/40902/14573908/21ad2f00-030d-11e6-9e2c-c544f30039ea.png)
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12425 from davies/fix_metrics.
• [SPARK-14791] [SQL] fix race condition between broadcast and subquery · 0419d631
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
SparkPlan.prepare() can be called from different threads (BroadcastExchange will call it in a thread pool). It only makes sure that doPrepare() will be called once, so a second call to prepare() may return before all the children have finished their own prepare(). Some operator may then call doProduce() before prepareSubqueries(), and `null` will be used as the result of the subquery, which is wrong. This causes TPCDS Q23B to return wrong answers sometimes.
      
This PR adds synchronization to prepare() to make sure all the children have finished prepare() before it returns. It also calls prepare() in produce() (similar to execute()).
      
Added a check for ScalarSubquery to make sure that the subquery has finished before its result is used.
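
A minimal sketch of the synchronization pattern described (not SparkPlan itself): a second caller of prepare() now blocks until the children are prepared instead of returning early and racing ahead to produce().
```scala
class PlanSketch(children: Seq[PlanSketch]) {
  private var prepared = false

  protected def doPrepare(): Unit = { /* one-time setup, e.g. kicking off subqueries */ }

  final def prepare(): Unit = synchronized {
    if (!prepared) {
      doPrepare()
      children.foreach(_.prepare()) // don't return before all children are prepared
      prepared = true
    }
  }
}
```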
      
      ## How was this patch tested?
      
Manually tested with Q23B; no wrong answers anymore.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12600 from davies/fix_risk.
• [SPARK-14763][SQL] fix subquery resolution · c417cec0
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
Currently, a column could be resolved wrongly if columns from both the outer table and the subquery have the same name; we should only resolve the attributes that can't be resolved within the subquery. Those attributes may have the same exprId as other attributes in the subquery, so we should create aliases for them.
      
Also, the columns in an IN subquery could have the same exprId; we should create aliases for them as well.
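
An illustrative query shape for the ambiguity described (table and column names assumed):
```scala
// Both t1 and t2 expose a column `a`; inside the subquery, `a` must resolve to
// t2's column, and attributes whose exprIds collide get fresh aliases.
spark.sql("SELECT * FROM t1 WHERE a IN (SELECT a FROM t2)")
```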
      
      ## How was this patch tested?
      
Added regression tests. Manually tested TPCDS Q70 and Q95; both work well after this patch.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #12539 from davies/fix_subquery.