  1. Jun 03, 2016
    • [SPARK-15665][CORE] spark-submit --kill and --status are not working · efd3b11a
      Devaraj K authored
      ## What changes were proposed in this pull request?
      --kill and --status were not considered in OptionParser's handling, and because of that they were failing. Now the --kill and --status options are handled as part of OptionParser.handle.
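
      For illustration, here is a minimal, self-contained sketch of the dispatch pattern described above. It is not Spark's actual launcher code; `SubmitAction` and `parseSparkArgs` are made-up names.

      ```scala
      // Toy option handler: --kill and --status are recognized explicitly instead of
      // falling through to the generic handling (which is what caused the failure).
      object SubmitAction extends Enumeration { val Submit, Kill, Status = Value }

      def parseSparkArgs(args: Seq[String]): (SubmitAction.Value, Map[String, String]) = {
        var action = SubmitAction.Submit
        val opts = scala.collection.mutable.Map[String, String]()
        args.grouped(2).foreach {
          case Seq("--kill", id)   => action = SubmitAction.Kill;   opts("submissionId") = id
          case Seq("--status", id) => action = SubmitAction.Status; opts("submissionId") = id
          case Seq(opt, value)     => opts(opt.stripPrefix("--")) = value   // default handling
          case Seq(single)         => opts(single.stripPrefix("--")) = "true"
        }
        (action, opts.toMap)
      }

      // parseSparkArgs(Seq("--master", "spark://host:6066", "--kill", "driver-20160603-0001"))
      ```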
      
      ## How was this patch tested?
      Added a test org.apache.spark.launcher.SparkSubmitCommandBuilderSuite.testCliKillAndStatus() and also I have verified these manually by running --kill and --status commands.
      
      Author: Devaraj K <devaraj@apache.org>
      
      Closes #13407 from devaraj-kavali/SPARK-15665.
      efd3b11a
    • [SPARK-15677][SQL] Query with scalar sub-query in the SELECT list throws... · 9e2eb13c
      Ioana Delaney authored
      [SPARK-15677][SQL] Query with scalar sub-query in the SELECT list throws UnsupportedOperationException
      
      ## What changes were proposed in this pull request?
      Queries with a scalar sub-query in the SELECT list, run against a local, in-memory relation, throw an UnsupportedOperationException.
      
      Problem repro:
      ```SQL
      scala> Seq((1, 1), (2, 2)).toDF("c1", "c2").createOrReplaceTempView("t1")
      scala> Seq((1, 1), (2, 2)).toDF("c1", "c2").createOrReplaceTempView("t2")
      scala> sql("select (select min(c1) from t2) from t1").show()
      
      java.lang.UnsupportedOperationException: Cannot evaluate expression: scalar-subquery#62 []
        at org.apache.spark.sql.catalyst.expressions.Unevaluable$class.eval(Expression.scala:215)
        at org.apache.spark.sql.catalyst.expressions.ScalarSubquery.eval(subquery.scala:62)
        at org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:142)
        at org.apache.spark.sql.catalyst.expressions.InterpretedProjection.apply(Projection.scala:45)
        at org.apache.spark.sql.catalyst.expressions.InterpretedProjection.apply(Projection.scala:29)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.immutable.List.map(List.scala:285)
        at org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation$$anonfun$apply$37.applyOrElse(Optimizer.scala:1473)
      ```
      The problem is specific to local, in-memory relations. It is caused by the rule ConvertToLocalRelation, which attempts to push down
      a scalar-subquery expression to the local tables.

      The solution prevents the rule from applying when the Project references scalar subqueries.
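
      The guard can be pictured with a small, self-contained sketch (a toy expression tree, not the actual Catalyst classes):

      ```scala
      sealed trait Expr { def children: Seq[Expr] = Nil }
      case class Attribute(name: String) extends Expr
      case class ScalarSubquery(sql: String) extends Expr
      case class Alias(child: Expr, name: String) extends Expr { override def children: Seq[Expr] = Seq(child) }

      def containsScalarSubquery(e: Expr): Boolean = e match {
        case _: ScalarSubquery => true
        case other             => other.children.exists(containsScalarSubquery)
      }

      // A ConvertToLocalRelation-style rule should only push the projection into the
      // local relation when no project-list expression references a scalar subquery.
      def canConvert(projectList: Seq[Expr]): Boolean = !projectList.exists(containsScalarSubquery)

      // canConvert(Seq(Attribute("c1")))                                      // true
      // canConvert(Seq(Alias(ScalarSubquery("select min(c1) from t2"), "x"))) // false
      ```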
      
      ## How was this patch tested?
      Added regression tests to SubquerySuite.scala
      
      Author: Ioana Delaney <ioanamdelaney@gmail.com>
      
      Closes #13418 from ioana-delaney/scalarSubV2.
      9e2eb13c
    • [SPARK-15737][CORE] fix jetty warning · 8fa00dd0
      bomeng authored
      ## What changes were proposed in this pull request?
      
      After upgrading Jetty to 9.2, we always see "WARN org.eclipse.jetty.server.handler.AbstractHandler: No Server set for org.eclipse.jetty.server.handler.ErrorHandler" while running any test cases.
      
      This PR will fix it.
      
      ## How was this patch tested?
      
      The existing test cases will cover it.
      
      Author: bomeng <bmeng@us.ibm.com>
      
      Closes #13475 from bomeng/SPARK-15737.
      8fa00dd0
    • [SPARK-15714][CORE] Fix flaky o.a.s.scheduler.BlacklistIntegrationSuite · c2f0cb4f
      Imran Rashid authored
      ## What changes were proposed in this pull request?
      
      BlacklistIntegrationSuite (introduced by SPARK-10372) is a bit flaky because of some race conditions:
      1. Failed jobs might have non-empty results, because the resultHandler will be invoked for successful tasks (if there are task successes before failures)
      2. taskScheduler.taskIdToTaskSetManager must be protected by a lock on taskScheduler
      
      (1) has failed a handful of jenkins builds recently. I don't think I've seen (2) in jenkins, but I've run into it with some uncommitted tests I'm working on where there are lots more tasks.

      While I was in there, I also made an unrelated fix to `runningTasks` in the test framework -- there was a pointless `O(n)` operation to remove completed tasks that could be `O(1)`.
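
      As a rough illustration of that last point (an illustrative class, not the actual test-framework code), tracking running task IDs in a hash set makes completing a task `O(1)` instead of filtering the whole collection:

      ```scala
      import scala.collection.mutable

      class RunningTasks {
        private val taskIds = mutable.HashSet[Long]()

        def start(taskId: Long): Unit = taskIds += taskId

        // Before: something like runningTasks = runningTasks.filter(_ != taskId) -- O(n) per completion
        def complete(taskId: Long): Unit = taskIds -= taskId                    // O(1) per completion

        def size: Int = taskIds.size
      }
      ```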
      
      ## How was this patch tested?
      
      I modified the o.a.s.scheduler.BlacklistIntegrationSuite to have it run the tests 1k times on my laptop.  It failed 11 times before this change, and none with it.  (Pretty sure all the failures were problem (1), though I didn't check all of them).
      
      Also the full suite of tests via jenkins.
      
      Author: Imran Rashid <irashid@cloudera.com>
      
      Closes #13454 from squito/SPARK-15714.
      c2f0cb4f
    • [SPARK-15494][SQL] encoder code cleanup · 190ff274
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Our encoder framework has evolved a lot; this PR tries to clean up the code to make it more readable and to emphasise the concept that an encoder should be used as a container of serde expressions.
      
      1. move validation logic to the analyzer instead of the encoder
      2. only have a `resolveAndBind` method in the encoder instead of `resolve` and `bind`, as we don't have the encoder life cycle concept anymore (see the sketch after this list)
      3. `Dataset` doesn't need to keep a resolved encoder, as there is no such concept anymore. A bound encoder is still needed to do serialization outside of the query framework.
      4. Using `BoundReference` to represent an unresolved field in a deserializer expression is confusing, so this PR adds a `GetColumnByOrdinal` for this purpose. (Serializer expressions still use `BoundReference`; we can replace it with `GetColumnByOrdinal` in follow-ups.)
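
      A hedged usage sketch of the consolidated API from point 2, assuming the post-cleanup `ExpressionEncoder` interface (method names such as `toRow`/`fromRow` may differ between versions):

      ```scala
      import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

      case class Person(name: String, age: Int)

      val encoder = ExpressionEncoder[Person]()       // a container of serde expressions
      val bound   = encoder.resolveAndBind()          // single step instead of resolve() + bind()
      val row     = bound.toRow(Person("alice", 30))  // serialization outside the query framework
      val person  = bound.fromRow(row)
      ```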
      
      ## How was this patch tested?
      
      existing test
      
      Author: Wenchen Fan <wenchen@databricks.com>
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #13269 from cloud-fan/clean-encoder.
      190ff274
    • [SPARK-15744][SQL] Rename two TungstenAggregation*Suites and update codegen/error messages/comments · b9fcfb3b
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      For consistency, this PR updates some remaining `TungstenAggregation/SortBasedAggregate` after SPARK-15728.
      - Update a comment in codegen in `VectorizedHashMapGenerator.scala`.
      - `TungstenAggregationQuerySuite` --> `HashAggregationQuerySuite`
      - `TungstenAggregationQueryWithControlledFallbackSuite` --> `HashAggregationQueryWithControlledFallbackSuite`
      - Update two error messages in `SQLQuerySuite.scala` and `AggregationQuerySuite.scala`.
      - Update several comments.
      
      ## How was this patch tested?
      
      Manual (Only comment changes and test suite renamings).
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #13487 from dongjoon-hyun/SPARK-15744.
      b9fcfb3b
    • [SPARK-15745][SQL] Use classloader's getResource() for reading resource files in HiveTests · f7288e16
      Sameer Agarwal authored
      ## What changes were proposed in this pull request?
      
      This is a cleaner approach in general but my motivation behind this change in particular is to be able to run these tests from anywhere without relying on system properties.
      
      ## How was this patch tested?
      
      Test only change
      
      Author: Sameer Agarwal <sameer@databricks.com>
      
      Closes #13489 from sameeragarwal/resourcepath.
      f7288e16
    • [SPARK-14959][SQL] handle partitioned table directories in distributed filesystem · 76aa45d3
      Xin Wu authored
      ## What changes were proposed in this pull request?
      ##### The root cause:
      When `DataSource.resolveRelation` is trying to build a `ListingFileCatalog` object, `ListLeafFiles` is invoked and a list of `FileStatus` objects is retrieved from the provided path. These FileStatus objects include the directories for the partitions (id=0 and id=2 in the jira). However, `getFileBlockLocations` is also invoked on these directory `FileStatus` objects, which is not allowed for directories on `DistributedFileSystem`, hence the exception.

      This PR removes the block of code that invokes `getFileBlockLocations` for every FileStatus object of the provided path. Instead, we call `HadoopFsRelation.listLeafFiles` directly, because this utility method filters out the directories before calling `getFileBlockLocations` to generate `LocatedFileStatus` objects.
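
      A minimal sketch of the filtering idea, assuming the standard Hadoop FileSystem API (an illustrative helper, not the actual `ListingFileCatalog`/`HadoopFsRelation` code):

      ```scala
      import org.apache.hadoop.conf.Configuration
      import org.apache.hadoop.fs.{LocatedFileStatus, Path}

      def listLeafFilesWithLocations(path: Path, conf: Configuration): Seq[LocatedFileStatus] = {
        val fs = path.getFileSystem(conf)
        fs.listStatus(path).toSeq.flatMap {
          case dir if dir.isDirectory =>
            // recurse into partition directories (e.g. id=0, id=1) instead of asking
            // for their block locations, which DistributedFileSystem rejects
            listLeafFilesWithLocations(dir.getPath, conf)
          case file =>
            val locations = fs.getFileBlockLocations(file, 0L, file.getLen)
            Seq(new LocatedFileStatus(file, locations))
        }
      }
      ```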
      
      ## How was this patch tested?
      Regression tests were run. Manual test:
      ```
      scala> spark.read.format("parquet").load("hdfs://bdavm009.svl.ibm.com:8020/user/spark/SPARK-14959_part").show
      +-----+---+
      | text| id|
      +-----+---+
      |hello|  0|
      |world|  0|
      |hello|  1|
      |there|  1|
      +-----+---+
      
             spark.read.format("orc").load("hdfs://bdavm009.svl.ibm.com:8020/user/spark/SPARK-14959_orc").show
      +-----+---+
      | text| id|
      +-----+---+
      |hello|  0|
      |world|  0|
      |hello|  1|
      |there|  1|
      +-----+---+
      ```
      I also tried it with 2 levels of partitioning.
      I have not found a way to add a test case in the unit test bucket that can test a real hdfs file location. Any suggestions will be appreciated.
      
      Author: Xin Wu <xinwu@us.ibm.com>
      
      Closes #13463 from xwu0226/SPARK-14959.
      76aa45d3
    • [SPARK-15733][SQL] Makes the explain output less verbose by hiding some... · 6dde2740
      Sean Zhong authored
      [SPARK-15733][SQL] Makes the explain output less verbose by hiding some verbose output like None, null, empty List, etc.
      
      ## What changes were proposed in this pull request?
      
      This PR makes the explain output less verbose by hiding some verbose output like `None`, `null`, empty List `[]`, empty set `{}`, etc.
      
      **Before change**:
      
      ```
      == Physical Plan ==
      ExecutedCommand
      :  +- ShowTablesCommand None, None
      ```
      
      **After change**:
      
      ```
      == Physical Plan ==
      ExecutedCommand
      :  +- ShowTablesCommand
      ```
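
      The filtering can be sketched with a small stand-alone helper (illustrative only, not the actual change to the plan's argument string):

      ```scala
      def cleanArgs(args: Seq[Any]): Seq[Any] = args.filterNot {
        case null                      => true
        case None                      => true
        case s: Seq[_] if s.isEmpty    => true   // empty List []
        case s: Set[_] if s.isEmpty    => true   // empty set {}
        case m: Map[_, _] if m.isEmpty => true
        case _                         => false
      }

      // cleanArgs(Seq("ShowTablesCommand", None, null, Nil)) keeps only "ShowTablesCommand"
      ```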
      
      ## How was this patch tested?
      
      Manual test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #13470 from clockfly/verbose_breakdown_4.
      6dde2740
  2. Jun 02, 2016
    • [SPARK-15724] Add benchmarks for performance over wide schemas · 901b2e69
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      This adds microbenchmarks for tracking performance of queries over very wide or deeply nested DataFrames. It seems performance degrades when DataFrames get thousands of columns wide or hundreds of fields deep.
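
      A hedged sketch of this kind of microbenchmark (column counts, names, and timing are illustrative; the committed benchmark may be structured differently):

      ```scala
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions.lit

      val spark = SparkSession.builder().master("local[*]").appName("wide-schema").getOrCreate()

      // Build a DataFrame that is 2000 columns wide.
      val width = 2000
      val wide = spark.range(1000).select((1 to width).map(i => lit(i).as(s"c$i")): _*)

      val start = System.nanoTime()
      wide.selectExpr("c1 + c2").count()   // force analysis, codegen and execution
      println(s"select over $width columns took ${(System.nanoTime() - start) / 1e6} ms")
      ```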
      
      ## How was this patch tested?
      
      Current results included.
      
      cc rxin JoshRosen
      
      Author: Eric Liang <ekl@databricks.com>
      
      Closes #13456 from ericl/sc-3468.
      901b2e69
    • [SPARK-15732][SQL] better error message when use java reserved keyword as field name · 6323e4bd
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      When users create a case class and use a Java reserved keyword as a field name, Spark SQL will generate illegal Java code and throw an exception at runtime.

      This PR checks the field names when building the encoder and, if illegal field names are used, throws an exception immediately with a good error message.
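
      A minimal sketch of this kind of check (the keyword list is abbreviated and `validateFieldNames` is a made-up helper, not the actual encoder code):

      ```scala
      val javaKeywords = Set(
        "abstract", "case", "class", "default", "enum", "final", "goto", "new",
        "package", "private", "public", "return", "static", "void", "volatile")

      def validateFieldNames(fieldNames: Seq[String]): Unit = {
        val bad = fieldNames.filter(javaKeywords.contains)
        require(bad.isEmpty,
          s"${bad.mkString(", ")} is a reserved keyword in Java and cannot be used as a field name")
      }

      // validateFieldNames(Seq("id", "new"))  // fails fast with a clear message
      ```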
      
      ## How was this patch tested?
      
      new test in DatasetSuite
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #13485 from cloud-fan/java.
      6323e4bd
    • [SPARK-15715][SQL] Fix alter partition with storage information in Hive · d1c1fbc3
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      This command didn't work for Hive tables. Now it does:
      ```
      ALTER TABLE boxes PARTITION (width=3)
          SET SERDE 'com.sparkbricks.serde.ColumnarSerDe'
          WITH SERDEPROPERTIES ('compress'='true')
      ```
      
      ## How was this patch tested?
      
      `HiveExternalCatalogSuite`
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #13453 from andrewor14/alter-partition-storage.
      d1c1fbc3
    • [SPARK-15740][MLLIB] ignore big model load / save in Word2VecSuite · e23370ec
      Xiangrui Meng authored
      ## What changes were proposed in this pull request?
      
      andrewor14 noticed some OOM errors caused by "test big model load / save" in Word2VecSuite, e.g., https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.2/1168/consoleFull. It doesn't show up in the test result because it was OOMed.
      
      This PR disables the test. I will leave the JIRA open for a proper fix.
      
      ## How was this patch tested?
      
      No new features.
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #13478 from mengxr/SPARK-15740.
      e23370ec
    • [SPARK-15718][SQL] better error message for writing bucketed data · f34aadc5
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Currently we don't support bucketing for `save` and `insertInto`.
      
      For `save`, we just write the data out into a directory the user specified, and it's not a table; we don't keep its metadata. When we read it back, we have no idea whether the data is bucketed or not, so it doesn't make sense to use `save` to write bucketed data, as we can't use the bucket information anyway.
      
      We can support it in the future, once we have features like bucket discovery, or we save bucket information in the data directory too, so that we don't need to rely on a metastore.
      
      For `insertInto`, it inserts data into an existing table, so it doesn't make sense to specify bucket information, as we should get the bucket information from the existing table.
      
      This PR improves the error message for the above 2  cases.
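
      A rough sketch of the writer-side check (simplified; the actual `DataFrameWriter` logic and exception type may differ):

      ```scala
      class BucketedWriter(numBuckets: Option[Int], sortColumnNames: Option[Seq[String]]) {

        private def assertNotBucketed(operation: String): Unit = {
          if (numBuckets.isDefined || sortColumnNames.isDefined) {
            throw new IllegalArgumentException(
              s"'$operation' does not support bucketing right now; " +
                "use saveAsTable() to write bucketed data into a table")
          }
        }

        def save(path: String): Unit = { assertNotBucketed("save"); /* write the files */ }
        def insertInto(table: String): Unit = { assertNotBucketed("insertInto"); /* insert rows */ }
      }
      ```
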
      ## How was this patch tested?
      
      new test in `BucketedWriteSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #13452 from cloud-fan/error-msg.
      f34aadc5
    • [SPARK-15736][CORE] Gracefully handle loss of DiskStore files · 229f9022
      Josh Rosen authored
      If an RDD partition is cached on disk and the DiskStore file is lost, then reads of that cached partition will fail and the missing partition is supposed to be recomputed by a new task attempt. In the current BlockManager implementation, however, the missing file does not trigger any metadata updates and does not invalidate the cache, so subsequent task attempts will be scheduled on the same executor and the doomed read will be retried repeatedly, leading to repeated task failures and eventually a total job failure.
      
      In order to fix this problem, the executor with the missing file needs to properly mark the corresponding block as missing so that it stops advertising itself as a cache location for that block.
      
      This patch fixes this bug and adds an end-to-end regression test (in `FailureSuite`) and a set of unit tests (in `BlockManagerSuite`).
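
      The recovery behaviour can be pictured with a toy sketch (not the BlockManager API): when a disk read fails because the file is gone, the block's metadata is dropped and the master is told, so the executor stops advertising itself as a cache location.

      ```scala
      import java.io.File
      import scala.collection.mutable

      class DiskCache(master: mutable.Map[String, Set[String]], executorId: String) {
        private val files = mutable.Map[String, File]()

        def put(blockId: String, file: File): Unit = {
          files(blockId) = file
          master(blockId) = master.getOrElse(blockId, Set.empty) + executorId
        }

        def get(blockId: String): Option[Array[Byte]] = files.get(blockId) match {
          case Some(f) if f.exists() =>
            Some(java.nio.file.Files.readAllBytes(f.toPath))
          case Some(_) =>                       // file lost: invalidate instead of retrying forever
            files.remove(blockId)
            master(blockId) = master.getOrElse(blockId, Set.empty) - executorId
            None                                // caller falls back to recomputing the partition
          case None => None
        }
      }
      ```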
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #13473 from JoshRosen/handle-missing-cache-files.
      229f9022
    • [SPARK-15668][ML] ml.feature: update check schema to avoid confusion when user... · 5855e005
      Yuhao Yang authored
      [SPARK-15668][ML] ml.feature: update the schema check to avoid confusion when users use an MLlib vector as the input type
      
      ## What changes were proposed in this pull request?
      
      ml.feature: update the schema check to avoid confusion when users use an MLlib vector as the input type.
      
      ## How was this patch tested?
      Existing unit tests.
      
      Author: Yuhao Yang <yuhao.yang@intel.com>
      
      Closes #13411 from hhbyyh/schemaCheck.
      5855e005
    • [MINOR] clean up style for storage param setters in ALS · ccd298eb
      Nick Pentreath authored
      Clean up the style of the storage param setter methods in ALS to match the standard style and the other setters in the class (this is an artefact of one of my previous PRs that wasn't cleaned up).
      
      ## How was this patch tested?
      Existing tests - no functionality change.
      
      Author: Nick Pentreath <nickp@za.ibm.com>
      
      Closes #13480 from MLnick/als-param-minor-cleanup.
      ccd298eb
    • [SPARK-15734][SQL] Avoids printing internal row in explain output · 985d5328
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      This PR avoids printing internal rows in explain output for some operators.
      
      **Before change:**
      
      ```
      scala> (1 to 10).toSeq.map(_ => (1,2,3)).toDF().createTempView("df3")
      scala> spark.sql("select * from df3 where 1=2").explain(true)
      ...
      == Analyzed Logical Plan ==
      _1: int, _2: int, _3: int
      Project [_1#37,_2#38,_3#39]
      +- Filter (1 = 2)
         +- SubqueryAlias df3
            +- LocalRelation [_1#37,_2#38,_3#39], [[0,1,2,3],[0,1,2,3],[0,1,2,3],[0,1,2,3],[0,1,2,3],[0,1,2,3],[0,1,2,3],[0,1,2,3],[0,1,2,3],[0,1,2,3]]
      ...
      == Physical Plan ==
      LocalTableScan [_1#37,_2#38,_3#39]
      ```
      
      **After change:**
      
      ```
      scala> spark.sql("select * from df3 where 1=2").explain(true)
      ...
      == Analyzed Logical Plan ==
      _1: int, _2: int, _3: int
      Project [_1#58,_2#59,_3#60]
      +- Filter (1 = 2)
         +- SubqueryAlias df3
            +- LocalRelation [_1#58,_2#59,_3#60]
      ...
      == Physical Plan ==
      LocalTableScan <empty>, [_1#58,_2#59,_3#60]
      ```
      
      ## How was this patch tested?
      Manual test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #13471 from clockfly/verbose_breakdown_5.
      985d5328
    • [SPARK-15719][SQL] Disables writing Parquet summary files by default · 43154276
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      This PR disables writing Parquet summary files by default (i.e., when Hadoop configuration "parquet.enable.summary-metadata" is not set).
      
      Please refer to [SPARK-15719][1] for more details.
      
      ## How was this patch tested?
      
      New test case added in `ParquetQuerySuite` to check no summary files are written by default.
      
      [1]: https://issues.apache.org/jira/browse/SPARK-15719
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #13455 from liancheng/spark-15719-disable-parquet-summary-files.
      43154276
    • [SPARK-15092][SPARK-15139][PYSPARK][ML] Pyspark TreeEnsemble missing methods · 72353311
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      Add `toDebugString` and `totalNumNodes` to `TreeEnsembleModels` and add `toDebugString` to `DecisionTreeModel`
      
      ## How was this patch tested?
      
      Extended doc tests.
      
      Author: Holden Karau <holden@us.ibm.com>
      
      Closes #12919 from holdenk/SPARK-15139-pyspark-treeEnsemble-missing-methods.
      72353311
    • [SPARK-15711][SQL] Ban CREATE TEMPORARY TABLE USING AS SELECT · d109a1be
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      This PR bans syntax like `CREATE TEMPORARY TABLE USING AS SELECT`
      
      `CREATE TEMPORARY TABLE ... USING ... AS ...` is not properly implemented: the temporary data is not cleaned up when the session exits. Before a full fix, we probably should ban this syntax.
      
      This PR only impacts syntax like `CREATE TEMPORARY TABLE ... USING ... AS ...`.
      Other syntax, like `CREATE TEMPORARY TABLE .. USING ...` and `CREATE TABLE ... USING ...`, is not impacted.
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #13451 from clockfly/ban_create_temp_table_using_as.
      d109a1be
    • [SPARK-15515][SQL] Error Handling in Running SQL Directly On Files · 9aff6f3b
      gatorsmile authored
      #### What changes were proposed in this pull request?
      This PR is to address the following issues:
      
      - **ISSUE 1:** For the ORC source format, we report a strange error message when Hive support is not enabled:
      ```SQL
      SQL Example:
        select id from `org.apache.spark.sql.hive.orc`.`file_path`
      Error Message:
        Table or view not found: `org.apache.spark.sql.hive.orc`.`file_path`
      ```
      Instead, we should issue the error message like:
      ```
      Expected Error Message:
         The ORC data source must be used with Hive support enabled
      ```
      - **ISSUE 2:** For the Avro format, we report a strange error message like:
      
      The example query is like
        ```SQL
      SQL Example:
        select id from `avro`.`file_path`
        select id from `com.databricks.spark.avro`.`file_path`
      Error Message:
        Table or view not found: `com.databricks.spark.avro`.`file_path`
         ```
      The desired message should be like:
      ```
      Expected Error Message:
        Failed to find data source: avro. Please use Spark package http://spark-packages.org/package/databricks/spark-avro"
      ```
      
      - ~~**ISSUE 3:** Unable to detect incompatibility libraries for Spark 2.0 in Data Source Resolution. We report a strange error message:~~
      
      **Update**: The latest code changes contains
      - For JDBC format, we added an extra checking in the rule `ResolveRelations` of `Analyzer`. Without the PR, Spark will return the error message like: `Option 'url' not specified`. Now, we are reporting `Unsupported data source type for direct query on files: jdbc`
      - Make the data source format name case-insensitive so that error handling behaves consistently with the normal cases.
      - Added the test cases for all the supported formats.
      
      #### How was this patch tested?
      Added test cases to cover all the above issues
      
      Author: gatorsmile <gatorsmile@gmail.com>
      Author: xiaoli <lixiao1983@gmail.com>
      Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local>
      
      Closes #13283 from gatorsmile/runSQLAgainstFile.
      9aff6f3b
    • [SPARK-15728][SQL] Rename aggregate operators: HashAggregate and SortAggregate · 8900c8d8
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      We currently have two physical aggregate operators: TungstenAggregate and SortBasedAggregate. These names don't make a lot of sense from an end-user point of view. This patch renames them HashAggregate and SortAggregate.
      
      ## How was this patch tested?
      Updated test cases.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #13465 from rxin/SPARK-15728.
      8900c8d8
    • [SPARK-14752][SQL] Explicitly implement KryoSerialization for LazilyGenerateOrdering · 09b3c56c
      Sameer Agarwal authored
      ## What changes were proposed in this pull request?
      
      This patch fixes a number of `com.esotericsoftware.kryo.KryoException: java.lang.NullPointerException` exceptions reported in [SPARK-15604], [SPARK-14752] etc. (while executing Spark SQL queries with the Kryo serializer) by explicitly implementing `KryoSerialization` for `LazilyGenerateOrdering`.
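
      As an illustration of what explicit Kryo serialization looks like, here is a simplified ordering wrapper (not the actual `LazilyGenerateOrdering` class) that rebuilds its non-serializable state on read:

      ```scala
      import com.esotericsoftware.kryo.{Kryo, KryoSerializable}
      import com.esotericsoftware.kryo.io.{Input, Output}

      class LazyOrdering(var descending: Boolean) extends Ordering[Int] with KryoSerializable {
        def this() = this(false)   // Kryo generally needs a zero-arg constructor

        @transient private var delegate: Ordering[Int] = regenerate()
        private def regenerate(): Ordering[Int] =
          if (descending) Ordering.Int.reverse else Ordering.Int

        override def compare(x: Int, y: Int): Int = delegate.compare(x, y)

        // Serialize only the stable state and rebuild the delegate on read, so Kryo never
        // tries to write the generated object and never leaves it null after deserialization.
        override def write(kryo: Kryo, output: Output): Unit = output.writeBoolean(descending)
        override def read(kryo: Kryo, input: Input): Unit = {
          descending = input.readBoolean()
          delegate = regenerate()
        }
      }
      ```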
      
      ## How was this patch tested?
      
      1. Modified `OrderingSuite` so that all tests in the suite also test kryo serialization (for both interpreted and generated ordering).
      2. Manually verified TPC-DS q1.
      
      Author: Sameer Agarwal <sameer@databricks.com>
      
      Closes #13466 from sameeragarwal/kryo.
      09b3c56c
    • [SPARK-15606][CORE] Use non-blocking removeExecutor call to avoid deadlocks · 7c07d176
      Pete Robbins authored
      ## What changes were proposed in this pull request?
      Set minimum number of dispatcher threads to 3 to avoid deadlocks on machines with only 2 cores
      
      ## How was this patch tested?
      
      Spark test builds
      
      Author: Pete Robbins <robbinspg@gmail.com>
      
      Closes #13355 from robbinspg/SPARK-13906.
      7c07d176
    • [SPARK-15076][SQL] Add ReorderAssociativeOperator optimizer · 63b7f127
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This issue adds a new optimizer, `ReorderAssociativeOperator`, by taking advantage of the associative property of integral types. Currently, Spark works like the following.
      
      1) Can optimize `1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + a` into `45 + a`.
      2) Cannot optimize `a + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9`.
      
      This PR can handle Case 2 for **Add/Multiply** expressions whose data types are `ByteType`, `ShortType`, `IntegerType`, and `LongType`. The following is the plan comparison between `before` and `after` this issue.
      
      **Before**
      ```scala
      scala> sql("select a+1+2+3+4+5+6+7+8+9 from (select explode(array(1)) a)").explain
      == Physical Plan ==
      WholeStageCodegen
      :  +- Project [(((((((((a#7 + 1) + 2) + 3) + 4) + 5) + 6) + 7) + 8) + 9) AS (((((((((a + 1) + 2) + 3) + 4) + 5) + 6) + 7) + 8) + 9)#8]
      :     +- INPUT
      +- Generate explode([1]), false, false, [a#7]
         +- Scan OneRowRelation[]
      scala> sql("select a*1*2*3*4*5*6*7*8*9 from (select explode(array(1)) a)").explain
      == Physical Plan ==
      *Project [(((((((((a#18 * 1) * 2) * 3) * 4) * 5) * 6) * 7) * 8) * 9) AS (((((((((a * 1) * 2) * 3) * 4) * 5) * 6) * 7) * 8) * 9)#19]
      +- Generate explode([1]), false, false, [a#18]
         +- Scan OneRowRelation[]
      ```
      
      **After**
      ```scala
      scala> sql("select a+1+2+3+4+5+6+7+8+9 from (select explode(array(1)) a)").explain
      == Physical Plan ==
      WholeStageCodegen
      :  +- Project [(a#7 + 45) AS (((((((((a + 1) + 2) + 3) + 4) + 5) + 6) + 7) + 8) + 9)#8]
      :     +- INPUT
      +- Generate explode([1]), false, false, [a#7]
         +- Scan OneRowRelation[]
      scala> sql("select a*1*2*3*4*5*6*7*8*9 from (select explode(array(1)) a)").explain
      == Physical Plan ==
      *Project [(a#18 * 362880) AS (((((((((a * 1) * 2) * 3) * 4) * 5) * 6) * 7) * 8) * 9)#19]
      +- Generate explode([1]), false, false, [a#18]
         +- Scan OneRowRelation[]
      ```
      
      This PR is greatly generalized by cloud-fan's key ideas; he should be credited for the work he did.
      
      ## How was this patch tested?
      
      Pass the Jenkins tests including new testsuite.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #12850 from dongjoon-hyun/SPARK-15076.
      63b7f127
    • [SPARK-15322][SQL][FOLLOWUP] Use the new long accumulator for old int accumulators. · 252417fa
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR corrects the remaining cases for using old accumulators.
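
      For illustration, the kind of migration this PR performs looks roughly like this (variable names are made up; assumes a local SparkSession):

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[*]").appName("accum").getOrCreate()
      val sc = spark.sparkContext

      // Old API (deprecated in 2.0): an int accumulator backed by AccumulatorParam
      val oldCounter = sc.accumulator(0, "old counter")

      // New API: an AccumulatorV2-based LongAccumulator
      val newCounter = sc.longAccumulator("new counter")

      sc.parallelize(1 to 100).foreach(_ => newCounter.add(1))
      println(newCounter.value)   // 100
      ```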
      
      This does not change some old accumulator usages below:
      
      - `ImplicitSuite.scala` - Tests dedicated to old accumulator, for implicits with `AccumulatorParam`
      
      - `AccumulatorSuite.scala` -  Tests dedicated to old accumulator
      
      - `JavaSparkContext.scala` - For supporting old accumulators for Java API.
      
      - `debug.package.scala` - Usage with `HashSet[String]`. Currently, there seems to be no replacement for this. I might be able to write an anonymous class for it, but I didn't because I don't think it is worth writing a lot of code only for this.
      
      - `SQLMetricsSuite.scala` - This uses the old accumulator for checking type boxing. The new accumulator does not seem to require type boxing for this case, whereas the old one does (due to the use of generics).
      
      ## How was this patch tested?
      
      Existing tests cover this.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #13434 from HyukjinKwon/accum.
      252417fa
    • [SPARK-15709][SQL] Prevent `freqItems` from raising `UnsupportedOperationException: empty.min` · b85d18f3
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Currently, `freqItems` raises `UnsupportedOperationException` on `empty.min` usually when its `support` argument is high.
      ```scala
      scala> spark.createDataset(Seq(1, 2, 2, 3, 3, 3)).stat.freqItems(Seq("value"), 2)
      16/06/01 11:11:38 ERROR Executor: Exception in task 5.0 in stage 0.0 (TID 5)
      java.lang.UnsupportedOperationException: empty.min
      ...
      ```
      
      Also, the parameter checking message is wrong.
      ```
      require(support >= 1e-4, s"support ($support) must be greater than 1e-4.")
      ```
      
      This PR changes the logic to handle the `empty` case and also improves parameter checking.
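
      A simplified sketch of the two fixes (an illustrative counting helper in the Misra-Gries style, not the actual `FrequentItems` implementation):

      ```scala
      import scala.collection.mutable

      def addToCounts(counts: mutable.Map[Any, Long], item: Any, maxItems: Int): Unit = {
        if (counts.contains(item)) {
          counts(item) += 1
        } else if (counts.size < maxItems) {
          counts(item) = 1
        } else if (counts.nonEmpty) {          // guard: calling .min/.minBy on an empty map threw before
          counts.keys.toSeq.foreach { k =>
            counts(k) -= 1
            if (counts(k) == 0) counts.remove(k)
          }
        }
      }

      // Improved parameter check: the message now matches the condition being enforced.
      def checkSupport(support: Double): Unit =
        require(support >= 1e-4 && support <= 1.0, s"support ($support) must be in [1e-4, 1].")
      ```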
      
      ## How was this patch tested?
      
      Pass the Jenkins tests (with a new testcase).
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #13449 from dongjoon-hyun/SPARK-15709.
      b85d18f3
    • [SPARK-15605][ML][EXAMPLES] Fix broken ML JavaDeveloperApiExample. · 4fe7c7bd
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      See [SPARK-15605](https://issues.apache.org/jira/browse/SPARK-15605) for the details of this bug. This PR fixes 2 major bugs in this example:
      * The Java example class uses the Param ```maxIter```, which will fail when calling ```Param.shouldOwn```. We need to add a public method that returns the ```maxIter``` object, because ```Params.params``` uses Java reflection to list all public methods whose return type is ```Param``` and invokes them to get all defined param objects in the instance.
      * The ```uid``` member defined in the Java class will be initialized after Scala traits such as ```HasFeaturesCol```. So when ```HasFeaturesCol``` is constructed, it gets a ```uid``` of null, which causes the ```Param.shouldOwn``` check to fail.
      
      So, here are my changes:
      * Add a public method:
      ```public IntParam getMaxIterParam() {return maxIter;}```

      * Use a Java anonymous class overriding ```uid()``` to define the ```uid```, which solves the second problem described above.
      * To make ```getMaxIterParam``` invocable via Java reflection, we must make the two classes (MyJavaLogisticRegression and MyJavaLogisticRegressionModel) public, so I made them public static inner classes.
      
      ## How was this patch tested?
      Offline tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #13353 from yanboliang/spark-15605.
      4fe7c7bd
    • [SPARK-15208][WIP][CORE][STREAMING][DOCS] Update Spark examples with AccumulatorV2 · a0eec8e8
      Liwei Lin authored
      ## What changes were proposed in this pull request?
      
      The patch updates the code & docs in the examples module as well as the related doc module:
      
      - [ ] [docs] `streaming-programming-guide.md`
        - [x] scala code part
        - [ ] java code part
        - [ ] python code part
      - [x] [examples] `RecoverableNetworkWordCount.scala`
      - [ ] [examples] `JavaRecoverableNetworkWordCount.java`
      - [ ] [examples] `recoverable_network_wordcount.py`
      
      ## How was this patch tested?
      
      Ran the examples and verified results manually.
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #12981 from lw-lin/accumulatorV2-examples.
      a0eec8e8
    • [SPARK-13484][SQL] Prevent illegal NULL propagation when filtering outer-join results · 5eea3323
      Takeshi YAMAMURO authored
      ## What changes were proposed in this pull request?
      This PR adds a rule at the end of the analyzer to correct the nullable fields of attributes in a logical plan by using the nullable fields of the corresponding attributes in its children logical plans (these plans generate the input rows).
      
      This is another approach for addressing SPARK-13484 (the first approach is https://github.com/apache/spark/pull/11371).
      
      Closes #11371
      
      Author: Takeshi YAMAMURO <linguin.m.s@gmail.com>
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #13290 from yhuai/SPARK-13484.
      5eea3323
  3. Jun 01, 2016
    • [SPARK-15620][SQL] Fix transformed dataset attributes resolve failure · 8288e16a
      jerryshao authored
      ## What changes were proposed in this pull request?
      
      A join on a transformed dataset has attribute conflicts, which makes query execution fail; for example:
      
      ```
      val dataset = Seq(1, 2, 3).toDS()
      val mappedDs = dataset.map(_ + 1)
      
      mappedDs.as("t1").joinWith(mappedDs.as("t2"), $"t1.value" === $"t2.value").show()
      ```
      
      will throw exception:
      
      ```
      org.apache.spark.sql.AnalysisException: cannot resolve '`t1.value`' given input columns: [value];
        at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:62)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:59)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:287)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:287)
      ```
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #13399 from jerryshao/SPARK-15620.
      8288e16a
    • [SPARK-15646][SQL] When spark.sql.hive.convertCTAS is true, the conversion... · 6dddb70c
      Yin Huai authored
      [SPARK-15646][SQL] When spark.sql.hive.convertCTAS is true, the conversion rule needs to respect TEXTFILE/SEQUENCEFILE format and the user-defined location
      
      ## What changes were proposed in this pull request?
      When `spark.sql.hive.convertCTAS` is true, for a CTAS statement, we will create a data source table using the default source (i.e. parquet) if the CTAS does not specify any Hive storage format. However, there are two issues with this conversion logic.
      1. First, we determine whether a CTAS statement defines a storage format by checking the serde. However, TEXTFILE/SEQUENCEFILE does not have a default serde, and when we do the check, we have not yet set the default serde. So, a query like `CREATE TABLE abc STORED AS TEXTFILE AS SELECT ...` actually creates a data source parquet table.
      2. In the conversion logic, we are ignoring the user-specified location.
      
      This PR fixes the above two issues.
      
      Also, this PR makes the parser throw an exception when a CTAS statement has a PARTITIONED BY clause. This change is made because Hive's syntax does not allow it and our current implementation actually does not work for this case (the insert operation always throws an exception because the insertion does not pick up the partitioning info).
      
      ## How was this patch tested?
      I am adding new tests in SQLQuerySuite and HiveDDLCommandSuite.
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #13386 from yhuai/SPARK-14507.
      6dddb70c
    • [SPARK-15692][SQL] Improves the explain output of several physical plans by... · c8fb776d
      Sean Zhong authored
      [SPARK-15692][SQL] Improves the explain output of several physical plans by displaying embedded logical plan in tree style
      
      ## What changes were proposed in this pull request?
      
      Improves the explain output of several physical plans by displaying embedded logical plan in tree style
      
      Some physical plans contain an embedded logical plan, for example, `cache tableName query` maps to:
      
      ```
      case class CacheTableCommand(
          tableName: String,
          plan: Option[LogicalPlan],
          isLazy: Boolean)
        extends RunnableCommand
      ```
      
      It is easier to read the explain output if we can display the `plan` in tree style.
      
      **Before change:**
      
      Everything is crammed into one line.
      
      ```
      scala> Seq((1,2)).toDF().createOrReplaceTempView("testView")
      scala> spark.sql("cache table testView2 select * from testView").explain()
      == Physical Plan ==
      ExecutedCommand CacheTableCommand testView2, Some('Project [*]
      +- 'UnresolvedRelation `testView`, None
      ), false
      ```
      
      **After change:**
      
      ```
      scala> spark.sql("cache table testView2 select * from testView").explain()
      == Physical Plan ==
      ExecutedCommand
      :  +- CacheTableCommand testView2, false
      :     :  +- 'Project [*]
      :     :     +- 'UnresolvedRelation `testView`, None
      ```
      
      ## How was this patch tested?
      
      Manual test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #13433 from clockfly/verbose_breakdown_3_2.
      c8fb776d
    • [SPARK-15441][SQL] support null object in Dataset outer-join · 8640cdb8
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Currently we can't encode a top-level null object into an internal row, as Spark SQL doesn't allow a row to be null; only its columns can be null.

      This was not a problem before, as we assumed the input object is never null. However, for outer joins, we do need the semantics of a null object.

      This PR fixes this problem by making both join sides produce a single column, i.e. nesting the logical plan output (by `CreateStruct`), so that we have an extra level to represent a top-level null object.
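
      A usage sketch of the behaviour this enables (the `Item` class and data are made up; assumes a SparkSession named `spark`, as in spark-shell):

      ```scala
      import spark.implicits._

      case class Item(id: Int, name: String)

      val left  = Seq(Item(1, "a"), Item(2, "b")).toDS()
      val right = Seq(Item(2, "bb"), Item(3, "cc")).toDS()

      // With a full outer join, unmatched rows come back as a null object on one side
      // instead of failing to encode a top-level null.
      val joined = left.joinWith(right, left("id") === right("id"), "full_outer")
      joined.collect().foreach { case (l, r) => println(s"$l <-> $r") }
      // e.g. (Item(1,a), null), (Item(2,b), Item(2,bb)), (null, Item(3,cc))
      ```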
      
      ## How was this patch tested?
      
      new test in `DatasetSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #13425 from cloud-fan/outer-join2.
      8640cdb8
    • [SPARK-15269][SQL] Removes unexpected empty table directories created while... · 7bb64aae
      Cheng Lian authored
      [SPARK-15269][SQL] Removes unexpected empty table directories created while creating external Spark SQL data source tables.
      
      This PR is an alternative to #13120 authored by xwu0226.
      
      ## What changes were proposed in this pull request?
      
      When creating an external Spark SQL data source table and persisting its metadata to Hive metastore, we don't use the standard Hive `Table.dataLocation` field because Hive only allows directory paths as data locations while Spark SQL also allows file paths. However, if we don't set `Table.dataLocation`, Hive always creates an unexpected empty table directory under database location, but doesn't remove it while dropping the table (because the table is external).
      
      This PR works around this issue by explicitly setting `Table.dataLocation` and then manually removing the created directory after creating the external table.
      
      Please refer to [this JIRA comment][1] for more details about why we chose this approach as a workaround.
      
      [1]: https://issues.apache.org/jira/browse/SPARK-15269?focusedCommentId=15297408&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15297408
      
      ## How was this patch tested?
      
      1. A new test case is added in `HiveQuerySuite` for this case
      2. Updated `ShowCreateTableSuite` to use the same table name in all test cases. (This is how I hit this issue in the first place.)
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #13270 from liancheng/spark-15269-unpleasant-fix.
      7bb64aae
    • [SPARK-15596][SPARK-15635][SQL] ALTER TABLE RENAME fixes · 9e2643b2
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      **SPARK-15596**: Even after we renamed a cached table, the plan would remain in the cache with the old table name. If I created a new table using the old name then the old table would return incorrect data. Note that this applies only to Hive tables.
      
      **SPARK-15635**: Renaming a datasource table would render the table not query-able. This is because we store the location of the table in a "path" property, which was not updated to reflect Hive's change in table location following a rename.
      
      ## How was this patch tested?
      
      DDLSuite
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #13416 from andrewor14/rename-table.
      9e2643b2
    • [SPARK-15671] performance regression CoalesceRDD.pickBin with large #… · 5b08ee63
      Thomas Graves authored
      I was running a 15TB join job with 202,000 partitions. It looks like the changes I made to CoalesceRDD in pickBin() are really slow with that many partitions; the array filter with that many elements just takes too long.
      It took about an hour to pickBins for all the partitions.
      original change:
      https://github.com/apache/spark/commit/83ee92f60345f016a390d61a82f1d924f64ddf90
      
      Just reverting the pickBin code back to getting currpreflocs fixes the issue.

      After reverting the pickBin code, the coalesce takes about 10 seconds, so for now it makes sense to revert those changes and look at further optimizations later.
      
      Tested this via RDDSuite unit test and manually testing the very large job.
      
      Author: Thomas Graves <tgraves@prevailsail.corp.gq1.yahoo.com>
      
      Closes #13443 from tgravescs/SPARK-15671.
      5b08ee63
    • [SPARK-15702][DOCUMENTATION] Update document programming-guide accumulator section · 2402b914
      WeichenXu authored
      ## What changes were proposed in this pull request?
      
      Update the accumulator section of the programming-guide document (Scala language only).
      The Java and Python versions are not modified because their APIs are not done yet.
      
      ## How was this patch tested?
      
      N/A
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #13441 from WeichenXu123/update_doc_accumulatorV2_clean.
      2402b914
    • [SPARK-15587][ML] ML 2.0 QA: Scala APIs audit for ml.feature · 07a98ca4
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      ML 2.0 QA: Scala APIs audit for ml.feature. Mainly includes:
      * Remove ```seed``` for ```QuantileDiscretizer```, since we use ```approxQuantile``` to produce the bins and ```seed``` is useless (see the usage sketch after this list).
      * Scala API docs update.
      * Sync Scala and Python API docs for these changes.
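
      A usage sketch of the ```approxQuantile```-based binning mentioned in the first bullet (column name and parameters are illustrative; assumes a SparkSession named `spark`):

      ```scala
      val df = spark.range(0, 1000).toDF("value")
      val splits = df.stat.approxQuantile("value", Array(0.25, 0.5, 0.75), 0.001)
      // `splits` is deterministic for a given relativeError, which is why
      // QuantileDiscretizer no longer needs a `seed` parameter to produce its bins.
      ```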
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #13410 from yanboliang/spark-15587.
      07a98ca4