  1. Jul 05, 2016
    • [SPARK-16383][SQL] Remove `SessionState.executeSql` · 4db63fd2
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR removes `SessionState.executeSql` in favor of `SparkSession.sql`. We can remove this safely since the visibility of `SessionState` is `private[sql]` and `executeSql` is only used in one **ignored** test, `test("Multiple Hive Instances")`.
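      
      A minimal migration sketch for any callers, assuming a `SparkSession` named `spark` (illustrative only, not code from this PR):
      ```scala
      // Before (removed, and private[sql] anyway):
      //   spark.sessionState.executeSql("SELECT 1")
      // After, via the public entry point:
      val df = spark.sql("SELECT 1")
      ```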
      
      ## How was this patch tested?
      
      Pass the Jenkins tests.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #14055 from dongjoon-hyun/SPARK-16383.
    • [SPARK-16359][STREAMING][KAFKA] unidoc skip kafka 0.10 · 1f0d0213
      cody koeninger authored
      ## What changes were proposed in this pull request?
      During the sbt unidoc task, skip the streamingKafka010 subproject and filter Kafka 0.10 classes from the classpath, so that at least the existing Kafka 0.8 docs can be included in unidoc without errors.
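      
      A hedged sketch of how a subproject can be excluded via sbt-unidoc's project filter (the actual setting and project reference in `SparkBuild.scala` may differ):
      ```scala
      // Illustrative sbt-unidoc setting; the project name is an assumption.
      unidocProjectFilter in (ScalaUnidoc, unidoc) := inAnyProject -- inProjects(streamingKafka010)
      ```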
      
      ## How was this patch tested?
      sbt spark/scalaunidoc:doc | grep -i error
      
      Author: cody koeninger <cody@koeninger.org>
      
      Closes #14041 from koeninger/SPARK-16359.
    • [SPARK-15730][SQL] Respect the --hiveconf in the spark-sql command line · 920cb5fe
      Cheng Hao authored
      ## What changes were proposed in this pull request?
      This PR makes spark-sql (backed by SparkSQLCLIDriver) respect confs set via --hiveconf, which is what we did in previous versions. The change is that when we start SparkSQLCLIDriver, we explicitly propagate confs set through --hiveconf into SQLContext's conf (basically treating those confs as SparkSQL confs).
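      
      For example, a conf passed on the command line like this is now visible to the SQLContext as well (the property below is just an illustration):
      ```
      ./bin/spark-sql --hiveconf hive.cli.print.header=true
      ```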
      
      ## How was this patch tested?
      A new test in CliSuite.
      
      Closes #13542
      
      Author: Cheng Hao <hao.cheng@intel.com>
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #14058 from yhuai/hiveConfThriftServer.
    • [HOTFIX] Fix build break. · 5b7a1770
      Reynold Xin authored
    • [SPARK-16212][STREAMING][KAFKA] use random port for embedded kafka · 1fca9da9
      cody koeninger authored
      ## What changes were proposed in this pull request?
      
      Testing for 0.10 uncovered an issue with a fixed port number being used in KafkaTestUtils. This makes a roughly equivalent fix for the 0.8 connector.
      
      ## How was this patch tested?
      
      Unit tests, manual tests
      
      Author: cody koeninger <cody@koeninger.org>
      
      Closes #14018 from koeninger/kafka-0-8-test-port.
    • [SPARK-16311][SQL] Metadata refresh should work on temporary views · 16a2a7d7
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch fixes the bug that the refresh command does not work on temporary views. It is based on https://github.com/apache/spark/pull/13989, but removes the public Dataset.refresh() API and improves test coverage.
      
      Note that I actually think the public refresh() API is very useful. We can in the future implement it by also invalidating the lazy vals in QueryExecution (or alternatively just create a new QueryExecution).
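      
      A hedged usage example of the now-working refresh path (the view name is made up):
      ```scala
      // Either of these should now refresh metadata for a temporary view:
      spark.catalog.refreshTable("people_temp_view")
      spark.sql("REFRESH TABLE people_temp_view")
      ```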
      
      ## How was this patch tested?
      Re-enabled a previously ignored test, and added a new test suite for Hive testing behavior of temporary views against MetastoreRelation.
      
      Author: Reynold Xin <rxin@databricks.com>
      Author: petermaxlee <petermaxlee@gmail.com>
      
      Closes #14009 from rxin/SPARK-16311.
    • [SPARK-9876][SQL][FOLLOWUP] Enable string and binary tests for Parquet predicate pushdown and replace deprecated fromByteArray · 07d9c532
      hyukjinkwon authored
      
      ## What changes were proposed in this pull request?
      
      It seems Parquet has been upgraded to 1.8.1 by https://github.com/apache/spark/pull/13280. So, this PR enables string and binary predicate pushdown, which was disabled due to [SPARK-11153](https://issues.apache.org/jira/browse/SPARK-11153) and [PARQUET-251](https://issues.apache.org/jira/browse/PARQUET-251), and cleans up some comments that were left behind (I think by mistake).
      
      This PR also replaces the `fromByteArray()` API, which was deprecated in [PARQUET-251](https://issues.apache.org/jira/browse/PARQUET-251).
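      
      A hedged example of a query that becomes eligible for Parquet pushdown with this change (path and column names are made up):
      ```scala
      import spark.implicits._
      
      val people = spark.read.parquet("/tmp/people")
      people.filter($"name" === "alice").count()   // string equality predicate can now be pushed down
      ```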
      
      ## How was this patch tested?
      
      Unit tests in `ParquetFilters`
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #13389 from HyukjinKwon/parquet-1.8-followup.
    • [SPARK-16360][SQL] Speed up SQL query performance by removing redundant `executePlan` call · 7f7eb393
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Currently, there are a few reports about Spark 2.0 query performance regression for large queries.
      
      This PR speeds up SQL query processing by removing a redundant **consecutive `executePlan`** call in the `Dataset.ofRows` function and in `Dataset` instantiation. Specifically, it aims to reduce the overhead of SQL query execution plan generation, not of actual query execution, so the improvement is not visible in the Spark Web UI. Please use the following query script; the result improves from **25.78 sec** to **12.36 sec** as expected.
      
      **Sample Query**
      ```scala
      val n = 4000
      val values = (1 to n).map(_.toString).mkString(", ")
      val columns = (1 to n).map("column" + _).mkString(", ")
      val query =
        s"""
           |SELECT $columns
           |FROM VALUES ($values) T($columns)
           |WHERE 1=2 AND 1 IN ($columns)
           |GROUP BY $columns
           |ORDER BY $columns
           |""".stripMargin
      
      def time[R](block: => R): R = {
        val t0 = System.nanoTime()
        val result = block
        println("Elapsed time: " + ((System.nanoTime - t0) / 1e9) + "s")
        result
      }
      ```
      
      **Before**
      ```scala
      scala> time(sql(query))
      Elapsed time: 30.138142577s  // First query has a little overhead of initialization.
      res0: org.apache.spark.sql.DataFrame = [column1: int, column2: int ... 3998 more fields]
      scala> time(sql(query))
      Elapsed time: 25.787751452s  // Let's compare this one.
      res1: org.apache.spark.sql.DataFrame = [column1: int, column2: int ... 3998 more fields]
      ```
      
      **After**
      ```scala
      scala> time(sql(query))
      Elapsed time: 17.500279659s  // First query has a little overhead of initialization.
      res0: org.apache.spark.sql.DataFrame = [column1: int, column2: int ... 3998 more fields]
      scala> time(sql(query))
      Elapsed time: 12.364812255s  // This shows the real difference. The speed up is about 2 times.
      res1: org.apache.spark.sql.DataFrame = [column1: int, column2: int ... 3998 more fields]
      ```
      
      ## How was this patch tested?
      
      Manual testing with the above script.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #14044 from dongjoon-hyun/SPARK-16360.
    • [SPARK-15198][SQL] Support for pushing down filters for boolean types in ORC data source · 7742d9f1
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      It seems ORC supports all the types in [`PredicateLeaf.Type`](https://github.com/apache/hive/blob/e085b7e9bd059d91aaf013df0db4d71dca90ec6f/storage-api/src/java/org/apache/hadoop/hive/ql/io/sarg/PredicateLeaf.java#L50-L56), which include boolean types, so this was checked first.
      
      This PR adds the support for pushing filters down for `BooleanType` in ORC data source.
      
      This PR also removes the `OrcTableScan` class and its companion object, which are no longer used.
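      
      A hedged example of a boolean predicate that can now be pushed down to ORC (path and column name are made up):
      ```scala
      import spark.implicits._
      
      spark.read.orc("/tmp/events").filter($"isActive" === true).count()
      ```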
      
      ## How was this patch tested?
      
      Unit tests in `OrcFilterSuite` and `OrcQuerySuite`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #12972 from HyukjinKwon/SPARK-15198.
  2. Jul 04, 2016
    • [SPARK-15968][SQL] Nonempty partitioned metastore tables are not cached · 8f6cf00c
      Michael Allman authored
      (Please note this is a revision of PR #13686, which has been closed in favor of this PR.)
      
      This PR addresses [SPARK-15968](https://issues.apache.org/jira/browse/SPARK-15968).
      
      ## What changes were proposed in this pull request?
      
      The `getCached` method of [HiveMetastoreCatalog](https://github.com/apache/spark/blob/master/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala) computes `pathsInMetastore` from the metastore relation's catalog table. This only returns the table base path, which is incomplete/inaccurate for a nonempty partitioned table. As a result, cached lookups on nonempty partitioned tables always miss.
      
      Rather than get `pathsInMetastore` from
      
          metastoreRelation.catalogTable.storage.locationUri.toSeq
      
      I modified the `getCached` method to take a `pathsInMetastore` argument. Calls to this method pass in the paths computed from calls to the Hive metastore. This is how `getCached` was implemented in Spark 1.5:
      
      https://github.com/apache/spark/blob/e0c3212a9b42e3e704b070da4ac25b68c584427f/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala#L444.
      
      I also added a call in `InsertIntoHiveTable.scala` to invalidate the table from the SQL session catalog.
      
      ## How was this patch tested?
      
      I've added a new unit test to `parquetSuites.scala`:
      
          SPARK-15968: nonempty partitioned metastore Parquet table lookup should use cached relation
      
      Note that the only difference between this new test and the one above it in the file is that the new test populates its partitioned table with a single value, while the existing test leaves the table empty. This reveals a subtle, unexpected hole in test coverage present before this patch.
      
      Note I also modified a different but related unit test in `parquetSuites.scala`:
      
          SPARK-15248: explicitly added partitions should be readable
      
      This unit test asserts that Spark SQL should return data from a table partition that was placed there outside a metastore query, immediately after the partition is added. I changed the test so that, instead of adding the data as a parquet file saved in the partition's location, the data is added through a SQL `INSERT` query. I made this change because I could find no way to efficiently support partitioned table caching without failing that test.
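      
      A hedged sketch of the difference in how the test now populates the partition (table, partition, and values are illustrative, not the actual test code):
      ```scala
      // Before: write a parquet file directly into the partition's location on disk.
      // After: add the data through SQL so the metastore and the relation cache see it.
      spark.sql("INSERT INTO TABLE partitioned_tbl PARTITION (part = 1) SELECT 42")
      ```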
      
      In addition to my primary motivation, I can offer a few reasons I believe this is an acceptable weakening of that test. First, it still validates a fix for [SPARK-15248](https://issues.apache.org/jira/browse/SPARK-15248), the issue for which it was written. Second, the assertion made is stronger than that required for non-partitioned tables. If you write data to the storage location of a non-partitioned metastore table without using a proper SQL DML query, a subsequent call to show that data will not return it. I believe this is an intentional limitation put in place to make table caching feasible, but I'm only speculating.
      
      Building a large `HadoopFsRelation` requires `stat`-ing all of its data files. In our environment, where we have tables with 10's of thousands of partitions, the difference between using a cached relation versus a new one is a matter of seconds versus minutes. Caching partitioned table metadata vastly improves the usability of Spark SQL for these cases.
      
      Thanks.
      
      Author: Michael Allman <michael@videoamp.com>
      
      Closes #13818 from mallman/spark-15968.
    • [SPARK-16353][BUILD][DOC] Missing javadoc options for java unidoc · 7dbffcdd
      Michael Allman authored
      Link to Jira issue: https://issues.apache.org/jira/browse/SPARK-16353
      
      ## What changes were proposed in this pull request?
      
      The javadoc options for java unidoc generation are currently ignored. For example, the generated `index.html` has the wrong HTML page title; this can be seen at http://spark.apache.org/docs/latest/api/java/index.html.
      
      I changed the relevant setting scope from `doc` to `(JavaUnidoc, unidoc)`.
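      
      A hedged sketch of the kind of scoping involved (the option values below are illustrative, not the actual `SparkBuild.scala` contents):
      ```scala
      // Scope javadoc options to the unidoc task instead of `doc` so they actually take effect:
      javacOptions in (JavaUnidoc, unidoc) ++= Seq("-windowtitle", "Spark API Documentation")
      ```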
      
      ## How was this patch tested?
      
      I ran `docs/jekyll build` and verified that the java unidoc `index.html` has the correct HTML page title.
      
      Author: Michael Allman <michael@videoamp.com>
      
      Closes #14031 from mallman/spark-16353.
    • [MINOR][DOCS] Remove unused images; crush PNGs that could use it for good measure · 18fb57f5
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Coincidentally, I discovered that a couple images were unused in `docs/`, and then searched and found more, and then realized some PNGs were pretty big and could be crushed, and before I knew it, had done the same for the ASF site (not committed yet).
      
      No functional change at all, just less superfluous image data.
      
      ## How was this patch tested?
      
      `jekyll serve`
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14029 from srowen/RemoveCompressImages.
    • [SPARK-16260][ML][EXAMPLE] PySpark ML Example Improvements and Cleanup · a539b724
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      1). Remove unused imports in the Scala examples;
      
      2). Move the spark session import outside of the example;
      
      3). Change parameter settings to match the Scala examples;
      
      4). Make comments consistent;
      
      5). Make sure that the Scala and Python examples use the same data set.
      
      I did one pass and fixed the above issues. There are examples missing in Python, which might be added later.
      
      TODO: Some examples have comments on how to run them, but many do not. We can add them later.
      
      ## How was this patch tested?
      
      Manually tested them.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #14021 from wangmiao1981/ann.
    • [SPARK-16358][SQL] Remove InsertIntoHiveTable From Logical Plan · 26283339
      gatorsmile authored
      #### What changes were proposed in this pull request?
      LogicalPlan `InsertIntoHiveTable` is useless. Thus, we can remove it from the code base.
      
      #### How was this patch tested?
      The existing test cases
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #14037 from gatorsmile/InsertIntoHiveTable.
  3. Jul 03, 2016
    • [SPARK-15204][SQL] improve nullability inference for Aggregator · 8cdb81fa
      Koert Kuipers authored
      ## What changes were proposed in this pull request?
      
      TypedAggregateExpression sets nullable based on the schema of the outputEncoder
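      
      For context, a minimal typed `Aggregator` whose `outputEncoder` schema now drives the nullability of the resulting column (a generic sketch, not code from this PR):
      ```scala
      import org.apache.spark.sql.{Encoder, Encoders}
      import org.apache.spark.sql.expressions.Aggregator
      
      val sumLong = new Aggregator[Long, Long, Long] {
        def zero: Long = 0L
        def reduce(b: Long, a: Long): Long = b + a
        def merge(b1: Long, b2: Long): Long = b1 + b2
        def finish(b: Long): Long = b
        def bufferEncoder: Encoder[Long] = Encoders.scalaLong
        def outputEncoder: Encoder[Long] = Encoders.scalaLong  // non-nullable schema => non-nullable output column
      }.toColumn
      ```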
      
      ## How was this patch tested?
      
      Add test in DatasetAggregatorSuite
      
      Author: Koert Kuipers <koert@tresata.com>
      
      Closes #13532 from koertkuipers/feat-aggregator-nullable.
    • [SPARK-16288][SQL] Implement inline table generating function · 88134e73
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR implements the `inline` table generating function.
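      
      A hedged usage example:
      ```scala
      // inline() explodes an array of structs into rows, one column per struct field:
      spark.sql("SELECT inline(array(struct(1, 'a'), struct(2, 'b')))").show()
      ```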
      
      ## How was this patch tested?
      
      Pass the Jenkins tests with new testcase.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #13976 from dongjoon-hyun/SPARK-16288.
    • [SPARK-16278][SPARK-16279][SQL] Implement map_keys/map_values SQL functions · 54b27c17
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR adds `map_keys` and `map_values` SQL functions in order to remove Hive fallback.
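      
      A hedged usage example:
      ```scala
      // expected to return the keys [1, 2] and the values [a, b]
      spark.sql("SELECT map_keys(map(1, 'a', 2, 'b')), map_values(map(1, 'a', 2, 'b'))").show()
      ```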
      
      ## How was this patch tested?
      
      Pass the Jenkins tests including new testcases.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #13967 from dongjoon-hyun/SPARK-16278.
    • [SPARK-16329][SQL] Star Expansion over Table Containing No Column · ea990f96
      gatorsmile authored
      #### What changes were proposed in this pull request?
      Star expansion over a table containing zero columns has not worked since 1.6, although it worked in Spark 1.5.1. This PR fixes the issue in the master branch.
      
      For example,
      ```scala
      val rddNoCols = sqlContext.sparkContext.parallelize(1 to 10).map(_ => Row.empty)
      val dfNoCols = sqlContext.createDataFrame(rddNoCols, StructType(Seq.empty))
      dfNoCols.registerTempTable("temp_table_no_cols")
      sqlContext.sql("select * from temp_table_no_cols").show
      ```
      
      Without the fix, users will get the following exception:
      ```
      java.lang.IllegalArgumentException: requirement failed
              at scala.Predef$.require(Predef.scala:221)
              at org.apache.spark.sql.catalyst.analysis.UnresolvedStar.expand(unresolved.scala:199)
      ```
      
      #### How was this patch tested?
      Tests are added
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #14007 from gatorsmile/starExpansionTableWithZeroColumn.
  4. Jul 02, 2016
    • [MINOR][BUILD] Fix Java linter errors · 3000b4b2
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR fixes the minor Java linter errors like the following.
      ```
      -    public int read(char cbuf[], int off, int len) throws IOException {
      +    public int read(char[] cbuf, int off, int len) throws IOException {
      ```
      
      ## How was this patch tested?
      
      Manual.
      ```
      $ build/mvn -T 4 -q -DskipTests -Pyarn -Phadoop-2.3 -Pkinesis-asl -Phive -Phive-thriftserver install
      $ dev/lint-java
      Using `mvn` from path: /usr/local/bin/mvn
      Checkstyle checks passed.
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #14017 from dongjoon-hyun/minor_build_java_linter_error.
    • [SPARK-16345][DOCUMENTATION][EXAMPLES][GRAPHX] Extract graphx programming guide example snippets from source files instead of hard code them · 0bd7cd18
      WeichenXu authored
      
      ## What changes were proposed in this pull request?
      
      I extracted 6 example programs from the GraphX programming guide and replaced the hard-coded snippets with the `include_example` label.
      
      The 6 example programs are:
      - AggregateMessagesExample.scala
      - SSSPExample.scala
      - TriangleCountingExample.scala
      - ConnectedComponentsExample.scala
      - ComprehensiveExample.scala
      - PageRankExample.scala
      
      All the example code can run using
      `bin/run-example graphx.EXAMPLE_NAME`
      
      ## How was this patch tested?
      
      Manual.
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #14015 from WeichenXu123/graphx_example_plugin.
    • [GRAPHX][EXAMPLES] move graphx test data directory and update graphx document · 192d1f9c
      WeichenXu authored
      ## What changes were proposed in this pull request?
      
      There are two test data files used by the GraphX examples in the "graphx/data" directory. I moved them into the "data/" directory because the "graphx" directory is meant for code files, and the other test data files (such as the mllib and streaming test data) all live there.
      
      I also updated the GraphX documentation where it references the data files that were moved.
      
      ## How was this patch tested?
      
      N/A
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #14010 from WeichenXu123/move_graphx_data_dir.
  5. Jul 01, 2016
    • [SPARK-16095][YARN] Yarn cluster mode should report correct state to SparkLauncher · bad0f7db
      peng.zhang authored
      ## What changes were proposed in this pull request?
      Yarn cluster mode should return correct state for SparkLauncher
      
      ## How was this patch tested?
      unit test
      
      Author: peng.zhang <peng.zhang@xiaomi.com>
      
      Closes #13962 from renozhang/SPARK-16095-spark-launcher-wrong-state.
    • [SPARK-16233][R][TEST] ORC test should be enabled only when HiveContext is available. · d17e5f2f
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      ORC test should be enabled only when HiveContext is available.
      
      ## How was this patch tested?
      
      Manual.
      ```
      $ R/run-tests.sh
      ...
      1. create DataFrame from RDD (test_sparkSQL.R#200) - Hive is not build with SparkSQL, skipped
      
      2. test HiveContext (test_sparkSQL.R#1021) - Hive is not build with SparkSQL, skipped
      
      3. read/write ORC files (test_sparkSQL.R#1728) - Hive is not build with SparkSQL, skipped
      
      4. enableHiveSupport on SparkSession (test_sparkSQL.R#2448) - Hive is not build with SparkSQL, skipped
      
      5. sparkJars tag in SparkContext (test_Windows.R#21) - This test is only for Windows, skipped
      
      DONE ===========================================================================
      Tests passed.
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #14019 from dongjoon-hyun/SPARK-16233.
    • [SPARK-16335][SQL] Structured streaming should fail if source directory does not exist · d601894c
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      In structured streaming, Spark does not report errors when the specified directory does not exist. This is a behavior different from the batch mode. This patch changes the behavior to fail if the directory does not exist (when the path is not a glob pattern).
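      
      A hedged example of the new behavior (the path is made up):
      ```scala
      // Previously this silently produced an empty stream; now it fails up front:
      val missing = spark.readStream.format("text").load("/data/does/not/exist")
      ```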
      
      ## How was this patch tested?
      Updated unit tests to reflect the new behavior.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #14002 from rxin/SPARK-16335.
    • [SPARK-16299][SPARKR] Capture errors from R workers in daemon.R to avoid deletion of R session temporary directory · e4fa58c4
      Sun Rui authored
      
      ## What changes were proposed in this pull request?
      Capture errors from R workers in daemon.R to avoid deletion of R session temporary directory. See detailed description at https://issues.apache.org/jira/browse/SPARK-16299
      
      ## How was this patch tested?
      SparkR unit tests.
      
      Author: Sun Rui <sunrui2016@gmail.com>
      
      Closes #13975 from sun-rui/SPARK-16299.
    • [SPARK-16012][SPARKR] Implement gapplyCollect which will apply a R function on each group similar to gapply and collect the result back to R data.frame · 26afb4ce
      Narine Kokhlikyan authored
      
      ## What changes were proposed in this pull request?
      gapplyCollect() does gapply() on a SparkDataFrame and collects the result back to R. Compared to gapply() + collect(), gapplyCollect() offers performance optimization as well as programming convenience, since no schema needs to be provided.
      
      This is similar to dapplyCollect().
      
      ## How was this patch tested?
      Added test cases for gapplyCollect similar to dapplyCollect
      
      Author: Narine Kokhlikyan <narine@slice.com>
      
      Closes #13760 from NarineK/gapplyCollect.
    • [SPARK-16208][SQL] Add `PropagateEmptyRelation` optimizer · c5539765
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR adds a new logical optimizer, `PropagateEmptyRelation`, to collapse logical plans consisting of only empty LocalRelations.
      
      **Optimizer Targets**
      
      1. Binary(or Higher)-node Logical Plans
         - Union with all empty children.
         - Join with one or two empty children (including Intersect/Except).
      2. Unary-node Logical Plans
         - Project/Filter/Sample/Join/Limit/Repartition with all empty children.
         - Aggregate with all empty children and without AggregateFunction expressions such as COUNT.
         - Generate with Explode, because other UserDefinedGenerators like Hive UDTFs may return results.
      
      **Sample Query**
      ```sql
      WITH t1 AS (SELECT a FROM VALUES 1 t(a)),
           t2 AS (SELECT b FROM VALUES 1 t(b) WHERE 1=2)
      SELECT a,b
      FROM t1, t2
      WHERE a=b
      GROUP BY a,b
      HAVING a>1
      ORDER BY a,b
      ```
      
      **Before**
      ```scala
      scala> sql("with t1 as (select a from values 1 t(a)), t2 as (select b from values 1 t(b) where 1=2) select a,b from t1, t2 where a=b group by a,b having a>1 order by a,b").explain
      == Physical Plan ==
      *Sort [a#0 ASC, b#1 ASC], true, 0
      +- Exchange rangepartitioning(a#0 ASC, b#1 ASC, 200)
         +- *HashAggregate(keys=[a#0, b#1], functions=[])
            +- Exchange hashpartitioning(a#0, b#1, 200)
               +- *HashAggregate(keys=[a#0, b#1], functions=[])
                  +- *BroadcastHashJoin [a#0], [b#1], Inner, BuildRight
                     :- *Filter (isnotnull(a#0) && (a#0 > 1))
                     :  +- LocalTableScan [a#0]
                     +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, false] as bigint)))
                        +- *Filter (isnotnull(b#1) && (b#1 > 1))
                           +- LocalTableScan <empty>, [b#1]
      ```
      
      **After**
      ```scala
      scala> sql("with t1 as (select a from values 1 t(a)), t2 as (select b from values 1 t(b) where 1=2) select a,b from t1, t2 where a=b group by a,b having a>1 order by a,b").explain
      == Physical Plan ==
      LocalTableScan <empty>, [a#0, b#1]
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins tests (including a new testsuite).
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #13906 from dongjoon-hyun/SPARK-16208.
    • [SPARK-16222][SQL] JDBC Sources - Handling illegal input values for `fetchsize` and `batchsize` · 0ad6ce7e
      gatorsmile authored
      #### What changes were proposed in this pull request?
      For JDBC data sources, users can specify `batchsize` for multi-row inserts and `fetchsize` for multi-row fetch. A few issues exist:
      
      - The property keys are case sensitive. Thus, the existing test cases for `fetchsize` use incorrect names, `fetchSize`. Basically, the test cases are broken.
      - No test case exists for `batchsize`.
      - We do not detect the illegal input values for `fetchsize` and `batchsize`.
      
      For example, when `batchsize` is zero, we got the following exception:
      ```
      Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ArithmeticException: / by zero
      ```
      when `fetchsize` is less than zero, we got the exception from the underlying JDBC driver:
      ```
      Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.h2.jdbc.JdbcSQLException: Invalid value "-1" for parameter "rows" [90008-183]
      ```
      
      This PR fixes all the above issues and issues the appropriate exceptions when illegal inputs for `fetchsize` and `batchsize` are detected. It also updates the function descriptions.
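      
      A hedged example of how the (lower-case) options are passed, with values that would now be validated (the URL and table names are made up):
      ```scala
      import java.util.Properties
      
      val df = spark.read.format("jdbc")
        .option("url", "jdbc:h2:mem:testdb")
        .option("dbtable", "people")
        .option("fetchsize", "100")          // must not be negative
        .load()
      
      val props = new Properties()
      props.setProperty("batchsize", "1000") // must be positive
      df.write.jdbc("jdbc:h2:mem:testdb", "people_copy", props)
      ```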
      
      #### How was this patch tested?
      Test cases are fixed and added.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #13919 from gatorsmile/jdbcProperties.
    • [SPARK-15761][MLLIB][PYSPARK] Load ipython when default python is Python3 · 66283ee0
      MechCoder authored
      ## What changes were proposed in this pull request?
      
      I would like to use IPython with Python 3.5. It is annoying when it fails with "IPython requires Python 2.7+; please install python2.7 or set PYSPARK_PYTHON" when I have a version greater than 2.7.
      
      ## How was this patch tested?
      It now works with IPython and Python3
      
      Author: MechCoder <mks542@nyu.edu>
      
      Closes #13503 from MechCoder/spark-15761.
    • [SPARK-16182][CORE] Utils.scala -- terminateProcess() should call Process.destroyForcibly() if and only if Process.destroy() fails · 2075bf8e
      Sean Owen authored
      
      ## What changes were proposed in this pull request?
      
      Utils.terminateProcess should `destroy()` first and only fall back to `destroyForcibly()` if it fails. It's kind of bad that we're force-killing executors -- and only in Java 8. See JIRA for an example of the impact: no shutdown
      
      While here: `Utils.waitForProcess` should use the Java 8 method if available instead of a custom implementation.
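      
      A hedged sketch of the intended ordering (not the actual `Utils.scala` code):
      ```scala
      import java.util.concurrent.TimeUnit
      
      def terminate(process: Process, timeoutMs: Long): Unit = {
        process.destroy()                                          // ask for a graceful exit first
        if (!process.waitFor(timeoutMs, TimeUnit.MILLISECONDS)) {  // Java 8 timed wait
          process.destroyForcibly()                                // force-kill only if destroy() did not work
        }
      }
      ```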
      
      ## How was this patch tested?
      
      Existing tests, which cover the force-kill case, and Amplab tests, which will cover both Java 7 and Java 8 eventually. However I tested locally on Java 8 and the PR builder will try Java 7 here.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #13973 from srowen/SPARK-16182.
    • [SPARK-12177][STREAMING][KAFKA] limit api surface area · fbfd0ab9
      cody koeninger authored
      ## What changes were proposed in this pull request?
      This is an alternative to the refactoring proposed by https://github.com/apache/spark/pull/13996
      
      ## How was this patch tested?
      
      unit tests
      also tested under scala 2.10 via
      mvn -Dscala-2.10
      
      Author: cody koeninger <cody@koeninger.org>
      
      Closes #13998 from koeninger/kafka-0-10-refactor.
  6. Jun 30, 2016
    • [SPARK-16331][SQL] Reduce code generation time · 14cf61e9
      Hiroshi Inoue authored
      ## What changes were proposed in this pull request?
      During the code generation, a `LocalRelation` often has a huge `Vector` object as `data`. In the simple example below, a `LocalRelation` has a Vector with 1000000 elements of `UnsafeRow`.
      
      ```
      val numRows = 1000000
      val ds = (1 to numRows).toDS().persist()
      benchmark.addCase("filter+reduce") { iter =>
        ds.filter(a => (a & 1) == 0).reduce(_ + _)
      }
      ```
      
      At `TreeNode.transformChildren`, all elements of the vector are unnecessarily iterated to check whether any children exist in the vector, since `Vector` is Traversable. This part significantly increases code generation time.
      
      This patch avoids this overhead by checking the number of children before iterating all elements; `LocalRelation` does not have children since it extends `LeafNode`.
      
      The performance of the above example
      ```
      without this patch
      Java HotSpot(TM) 64-Bit Server VM 1.8.0_91-b14 on Mac OS X 10.11.5
      Intel(R) Core(TM) i5-5257U CPU  2.70GHz
      compilationTime:                         Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      filter+reduce                                 4426 / 4533          0.2        4426.0       1.0X
      
      with this patch
      compilationTime:                         Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      filter+reduce                                 3117 / 3391          0.3        3116.6       1.0X
      ```
      
      ## How was this patch tested?
      
      using existing unit tests
      
      Author: Hiroshi Inoue <inouehrs@jp.ibm.com>
      
      Closes #14000 from inouehrs/compilation-time-reduction.
    • [SPARK-14608][ML] transformSchema needs better documentation · aa6564f3
      Yuhao Yang authored
      ## What changes were proposed in this pull request?
      jira: https://issues.apache.org/jira/browse/SPARK-14608
      PipelineStage.transformSchema currently has minimal documentation. It should have more to explain that it can (see the sketch below):
      - check the schema
      - check parameter interactions
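      
      A hedged illustration of what a typical `transformSchema` override does (column names are made up, not text from this PR):
      ```scala
      import org.apache.spark.sql.types.{DoubleType, StringType, StructField, StructType}
      
      def transformSchema(schema: StructType): StructType = {
        // schema check: the input column must exist and have the right type
        require(schema("text").dataType == StringType, "column 'text' must be StringType")
        // parameter-interaction checks would also go here
        schema.add(StructField("score", DoubleType, nullable = false))
      }
      ```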
      
      ## How was this patch tested?
      unit test
      
      Author: Yuhao Yang <hhbyyh@gmail.com>
      Author: Yuhao Yang <yuhao.yang@intel.com>
      
      Closes #12384 from hhbyyh/transformSchemaDoc.
    • [SPARK-15954][SQL] Disable loading test tables in Python tests · 38f4d6f4
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch introduces a flag to disable loading test tables in TestHiveSparkSession and disables that in Python. This fixes an issue in which python/run-tests would fail due to failure to load test tables.
      
      Note that these test tables are not used outside of HiveCompatibilitySuite. In the long run we should probably decouple the loading of test tables from the test Hive setup.
      
      ## How was this patch tested?
      This is a test only change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #14005 from rxin/SPARK-15954.
    • [SPARK-15643][DOC][ML] Add breaking changes to ML migration guide · 4a981dc8
      Nick Pentreath authored
      This PR adds the breaking changes from [SPARK-14810](https://issues.apache.org/jira/browse/SPARK-14810) to the migration guide.
      
      ## How was this patch tested?
      
      Built docs locally.
      
      Author: Nick Pentreath <nickp@za.ibm.com>
      
      Closes #13924 from MLnick/SPARK-15643-migration-guide.
    • [SPARK-16328][ML][MLLIB][PYSPARK] Add 'asML' and 'fromML' conversion methods to PySpark linalg · dab10516
      Nick Pentreath authored
      The move to `ml.linalg` created `asML`/`fromML` utility methods in Scala/Java for converting between representations. These are missing in Python; this PR adds them.
      
      ## How was this patch tested?
      
      New doctests.
      
      Author: Nick Pentreath <nickp@za.ibm.com>
      
      Closes #13997 from MLnick/SPARK-16328-python-linalg-convert.
    • [SPARK-16276][SQL] Implement elt SQL function · 85f2303e
      petermaxlee authored
      ## What changes were proposed in this pull request?
      This patch implements the elt function, as it is implemented in Hive.
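      
      A hedged usage example (following Hive's semantics, elt returns the n-th string argument):
      ```scala
      spark.sql("SELECT elt(2, 'scala', 'java', 'python')").show()   // expected: java
      ```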
      
      ## How was this patch tested?
      Added expression unit test in StringExpressionsSuite and end-to-end test in StringFunctionsSuite.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      
      Closes #13966 from petermaxlee/SPARK-16276.
    • [SPARK-16313][SQL] Spark should not silently drop exceptions in file listing · 3d75a5b2
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      Spark silently drops exceptions during file listing. This is a very bad behavior because it can mask legitimate errors and the resulting plan will silently have 0 rows. This patch changes it to not silently drop the errors.
      
      ## How was this patch tested?
      Manually verified.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #13987 from rxin/SPARK-16313.
    • [SPARK-16336][SQL] Suggest doing table refresh upon FileNotFoundException · fb41670c
      petermaxlee authored
      ## What changes were proposed in this pull request?
      This patch appends a message suggesting that users run a table refresh or reload their DataFrames when Spark sees a FileNotFoundException due to stale cached metadata.
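      
      A hedged example of the remediation the new message points at (the table name is made up):
      ```scala
      spark.sql("REFRESH TABLE logs")
      // or re-create the Dataset/DataFrame that reads the underlying files
      ```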
      
      ## How was this patch tested?
      Added a unit test for this in MetadataCacheSuite.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      
      Closes #14003 from petermaxlee/SPARK-16336.
    • [SPARK-16256][DOCS] Fix window operation diagram · 5d00a7bc
      Tathagata Das authored
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #14001 from tdas/SPARK-16256-2.