  1. Jun 14, 2016
    • Shixiong Zhu's avatar
      [SPARK-15935][PYSPARK] Fix a wrong format tag in the error message · 0ee9fd9e
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      A follow up PR for #13655 to fix a wrong format tag.
      
      ## How was this patch tested?
      
      Jenkins unit tests.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #13665 from zsxwing/fix.
      0ee9fd9e
    • Xiangrui Meng's avatar
      [SPARK-15945][MLLIB] Conversion between old/new vector columns in a DataFrame (Scala/Java) · 63e0aebe
      Xiangrui Meng authored
      ## What changes were proposed in this pull request?
      
      This PR provides conversion utils between old/new vector columns in a DataFrame. So users can use it to migrate their datasets and pipelines manually. The methods are implemented under `MLUtils` and called `convertVectorColumnsToML` and `convertVectorColumnsFromML`. Both take a DataFrame and a list of vector columns to be converted. It is a no-op on vector columns that are already converted. A warning message is logged if actual conversion happens.
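The conversion semantics described above (no-op on already-converted columns, warn only on actual conversion) can be sketched in plain Python. The class and function names here, and the dict-of-columns DataFrame stand-in, are hypothetical illustrations, not Spark's actual API:

```python
import warnings

class OldVector:
    """Stand-in for a spark.mllib-style vector."""
    def __init__(self, values):
        self.values = list(values)

class NewVector:
    """Stand-in for a spark.ml-style vector."""
    def __init__(self, values):
        self.values = list(values)

def convert_vector_columns_to_ml(df, cols):
    """df is a toy {column name: list of cells} table.  Cells that are
    already NewVector are left alone (no-op); a warning is emitted only
    if actual conversion happens."""
    converted = False
    for col in cols:
        cells = []
        for cell in df[col]:
            if isinstance(cell, OldVector):
                cells.append(NewVector(cell.values))
                converted = True
            else:
                cells.append(cell)
        df[col] = cells
    if converted:
        warnings.warn("Converted old-style vector columns to new-style vectors")
    return df
```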
      
      This is the first sub-task under SPARK-15944 to make it easier to migrate existing pipelines to Spark 2.0.
      
      ## How was this patch tested?
      
      Unit tests in Scala and Java.
      
      cc: yanboliang
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #13662 from mengxr/SPARK-15945.
      63e0aebe
    • bomeng's avatar
      [SPARK-15952][SQL] fix "show databases" ordering issue · 42a28caf
      bomeng authored
      ## What changes were proposed in this pull request?
      
      Two issues I've found for "show databases" command:
      
      1. The returned database name list was not sorted; it was only sorted when "LIKE" was used together with the command. (Hive always returns a sorted list.)
      
      2. When used as sql("show databases").show, it outputs a table with a column named "result", but sql("show tables").show outputs a column named "tableName". For consistency, we should use "databaseName" instead.
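A minimal sketch of the post-fix behaviour in plain Python (with `fnmatch`-style globs standing in for Hive's LIKE pattern syntax, which differs slightly):

```python
from fnmatch import fnmatch

def show_databases(databases, like=None):
    """Return a result under a 'databaseName' column, always sorted,
    whether or not a pattern is supplied."""
    names = sorted(databases)
    if like is not None:
        names = [n for n in names if fnmatch(n, like)]
    return {"databaseName": names}
```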
      
      ## How was this patch tested?
      
      Updated existing test case to test its ordering as well.
      
      Author: bomeng <bmeng@us.ibm.com>
      
      Closes #13671 from bomeng/SPARK-15952.
      42a28caf
    • Herman van Hovell's avatar
      [SPARK-15011][SQL] Re-enable 'analyze MetastoreRelations' in hive StatisticsSuite · 0bd86c0f
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      This test re-enables the `analyze MetastoreRelations` in `org.apache.spark.sql.hive.StatisticsSuite`.
      
      The flakiness of this test was traced back to a shared configuration option, `hive.exec.compress.output`, in `TestHive`. This property was set to `true` by the `HiveCompatibilitySuite`. I have added configuration resetting logic to `HiveComparisonTest`, in order to prevent such a thing from happening again.
      
      ## How was this patch tested?
      Is a test.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      Author: Herman van Hovell <hvanhovell@questtec.nl>
      
      Closes #13498 from hvanhovell/SPARK-15011.
      0bd86c0f
    • Tathagata Das's avatar
      [SPARK-15933][SQL][STREAMING] Refactored DF reader-writer to use readStream... · 214adb14
      Tathagata Das authored
      [SPARK-15933][SQL][STREAMING] Refactored DF reader-writer to use readStream and writeStream for streaming DFs
      
      ## What changes were proposed in this pull request?
      Currently, the DataFrameReader/Writer has methods that are needed for both streaming and non-streaming DFs. This is quite awkward because each method throws a runtime exception for one case or the other. So rather than having half the methods throw runtime exceptions, it's better to have a separate reader/writer API for streams.
      
      - [x] Python API!!
      
      ## How was this patch tested?
      Existing unit tests + two sets of unit tests for DataFrameReader/Writer and DataStreamReader/Writer.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #13653 from tdas/SPARK-15933.
      214adb14
    • Kay Ousterhout's avatar
      [SPARK-15927] Eliminate redundant DAGScheduler code. · 5d50d4f0
      Kay Ousterhout authored
      To try to eliminate redundant code to traverse the RDD dependency graph,
      this PR creates a new function getShuffleDependencies that returns
      shuffle dependencies that are immediate parents of a given RDD.  This
      new function is used by getParentStages and
      getAncestorShuffleDependencies.
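The traversal can be sketched as follows, using a toy dependency graph rather than the DAGScheduler's actual types: narrow dependencies are walked through, but traversal stops at the first shuffle boundary on each path:

```python
class RDD:
    """Toy stand-in for an RDD: a name plus a list of (kind, parent)
    dependencies, where kind is "shuffle" or "narrow"."""
    def __init__(self, name, deps=()):
        self.name = name
        self.deps = list(deps)

def get_shuffle_dependencies(rdd):
    """Return the names of shuffle dependencies that are *immediate*
    parents of `rdd` (no shuffle boundary crossed to reach them)."""
    parents, visited = set(), set()
    stack = [rdd]
    while stack:
        current = stack.pop()
        if id(current) in visited:
            continue
        visited.add(id(current))
        for kind, parent in current.deps:
            if kind == "shuffle":
                parents.add(parent.name)   # stop here: immediate shuffle parent
            else:
                stack.append(parent)       # keep walking through narrow deps
    return parents
```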
      
      Author: Kay Ousterhout <kayousterhout@gmail.com>
      
      Closes #13646 from kayousterhout/SPARK-15927.
      5d50d4f0
    • Takeshi YAMAMURO's avatar
      [SPARK-15247][SQL] Set the default number of partitions for reading parquet schemas · dae4d5db
      Takeshi YAMAMURO authored
      ## What changes were proposed in this pull request?
      This pr sets the default number of partitions when reading parquet schemas.
      SQLContext#read#parquet currently yields at least n_executors * n_cores tasks even if the parquet data consists of a single small file. This issue could increase the latency for small jobs.
      
      ## How was this patch tested?
      Manually tested and checked.
      
      Author: Takeshi YAMAMURO <linguin.m.s@gmail.com>
      
      Closes #13137 from maropu/SPARK-15247.
      dae4d5db
    • Cheng Lian's avatar
      [SPARK-15895][SQL] Filters out metadata files while doing partition discovery · bd39ffe3
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      Take the following directory layout as an example:
      
      ```
      dir/
      +- p0=0/
         |-_metadata
         +- p1=0/
            |-part-00001.parquet
            |-part-00002.parquet
            |-...
      ```
      
      The `_metadata` file under `p0=0` shouldn't fail partition discovery.
      
      This PR filters out all metadata files whose names start with `_` while doing partition discovery.
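The filtering rule itself is simple; a standalone sketch (Spark's real listing code handles further special cases, so treat this as illustrative only):

```python
import os

def is_data_path(path):
    """Skip files and directories whose base name starts with '_'
    (e.g. _metadata, _SUCCESS) so they can't break partition discovery."""
    name = os.path.basename(path.rstrip("/"))
    return not name.startswith("_")

paths = [
    "dir/p0=0/_metadata",
    "dir/p0=0/p1=0/part-00001.parquet",
]
data_paths = [p for p in paths if is_data_path(p)]
```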
      
      ## How was this patch tested?
      
      New unit test added in `ParquetPartitionDiscoverySuite`.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #13623 from liancheng/spark-15895-partition-disco-no-metafiles.
      bd39ffe3
    • gatorsmile's avatar
      [SPARK-15864][SQL] Fix Inconsistent Behaviors when Uncaching Non-cached Tables · df4ea661
      gatorsmile authored
      #### What changes were proposed in this pull request?
      To uncache a table, we have three different ways:
      - _SQL interface_: `UNCACHE TABLE`
      - _DataSet API_: `sparkSession.catalog.uncacheTable`
      - _DataSet API_: `sparkSession.table(tableName).unpersist()`
      
      When the table is not cached,
      - _SQL interface_: `UNCACHE TABLE non-cachedTable` -> **no error message**
      - _Dataset API_: `sparkSession.catalog.uncacheTable("non-cachedTable")` -> **report a strange error message:**
      ```requirement failed: Table [a: int] is not cached```
      - _Dataset API_: `sparkSession.table("non-cachedTable").unpersist()` -> **no error message**
      
      This PR will make them consistent. No operation if the table has already been uncached.
      
      In addition, this PR also removes `uncacheQuery` and renames `tryUncacheQuery` to `uncacheQuery`, and documents that it is a no-op if the table has already been uncached.
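The intended idempotent semantics can be sketched with a toy cache manager (hypothetical names, not Spark's actual CacheManager API):

```python
class CacheManager:
    """Toy cache manager with the post-change semantics: uncaching a
    table that was never cached is a silent no-op, not an error."""
    def __init__(self):
        self._cached = {}

    def cache_query(self, name, plan):
        self._cached[name] = plan

    def uncache_query(self, name):
        # pop with a default makes this a no-op for unknown names
        self._cached.pop(name, None)
```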
      
      #### How was this patch tested?
      Improved the existing test case to verify the cases when the table has not been cached.
      Also added test cases to verify the cases when the table does not exist.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      Author: xiaoli <lixiao1983@gmail.com>
      Author: Xiao Li <xiaoli@Xiaos-MacBook-Pro.local>
      
      Closes #13593 from gatorsmile/uncacheNonCachedTable.
      df4ea661
    • Takuya UESHIN's avatar
      [SPARK-15915][SQL] Logical plans should use canonicalized plan when override sameResult. · c5b73558
      Takuya UESHIN authored
      ## What changes were proposed in this pull request?
      
      A `DataFrame` whose plan overrides `sameResult` without using the canonicalized plan for comparison can't be cached via `cacheTable`.
      
      The example is like:
      
      ```
          val localRelation = Seq(1, 2, 3).toDF()
          localRelation.createOrReplaceTempView("localRelation")
      
          spark.catalog.cacheTable("localRelation")
          assert(
            localRelation.queryExecution.withCachedData.collect {
              case i: InMemoryRelation => i
            }.size == 1)
      ```
      
      and this will fail as:
      
      ```
      ArrayBuffer() had size 0 instead of expected size 1
      ```
      
      The reason is that when `spark.catalog.cacheTable("localRelation")` is called, `CacheManager` caches the plan wrapped by `SubqueryAlias`, but when planning the DataFrame `localRelation`, `CacheManager` looks up the cache for the unwrapped plan, because the plan for the DataFrame `localRelation` is not wrapped.
      Some plans like `LocalRelation`, `LogicalRDD`, etc. override the `sameResult` method but do not use the canonicalized plan for the comparison, so `CacheManager` can't detect that the plans are the same.
      
      This PR modifies them to use the canonicalized plan when overriding the `sameResult` method.
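The fix pattern can be sketched with toy plan nodes (illustrative only, not Spark's Catalyst classes): canonicalization strips cosmetic wrappers such as `SubqueryAlias` before the comparison, so a wrapped and an unwrapped plan compare equal:

```python
class Plan:
    """Toy logical plan: a node kind, an optional child, optional leaf data."""
    def __init__(self, kind, child=None, data=None):
        self.kind, self.child, self.data = kind, child, data

    def canonicalized(self):
        # Strip cosmetic wrappers before comparing.
        if self.kind == "SubqueryAlias" and self.child is not None:
            return self.child.canonicalized()
        return self

    def same_result(self, other):
        # Always compare canonicalized plans, as the PR makes the overrides do.
        a, b = self.canonicalized(), other.canonicalized()
        return a.kind == b.kind and a.data == b.data
```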
      
      ## How was this patch tested?
      
      Added a test to check if DataFrame with plan overriding sameResult but not using canonicalized plan to compare can cacheTable.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #13638 from ueshin/issues/SPARK-15915.
      c5b73558
    • gatorsmile's avatar
      [SPARK-15655][SQL] Fix Wrong Partition Column Order when Fetching Partitioned Tables · bc02d011
      gatorsmile authored
      #### What changes were proposed in this pull request?
      When fetching a partitioned table, the output contains wrong results: the order of partition key values does not match the order of partition key columns in the output schema. For example,
      
      ```SQL
      CREATE TABLE table_with_partition(c1 string) PARTITIONED BY (p1 string,p2 string,p3 string,p4 string,p5 string)
      
      INSERT OVERWRITE TABLE table_with_partition PARTITION (p1='a',p2='b',p3='c',p4='d',p5='e') SELECT 'blarr'
      
      SELECT p1, p2, p3, p4, p5, c1 FROM table_with_partition
      ```
      ```
      +---+---+---+---+---+-----+
      | p1| p2| p3| p4| p5|   c1|
      +---+---+---+---+---+-----+
      |  d|  e|  c|  b|  a|blarr|
      +---+---+---+---+---+-----+
      ```
      
      The expected result should be
      ```
      +---+---+---+---+---+-----+
      | p1| p2| p3| p4| p5|   c1|
      +---+---+---+---+---+-----+
      |  a|  b|  c|  d|  e|blarr|
      +---+---+---+---+---+-----+
      ```
      This PR fixes this by enforcing that the order matches the table partition definition.
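The enforced ordering amounts to emitting partition values by the declared partition-column order rather than the stored order; a toy sketch (names are hypothetical):

```python
def fetch_partition_row(partition_columns, partition_spec, data_values):
    """Emit partition values in the order the table's partition columns
    were declared, not the order the spec happens to be stored in."""
    return [partition_spec[c] for c in partition_columns] + list(data_values)
```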
      
      #### How was this patch tested?
      Added a test case into `SQLQuerySuite`
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #13400 from gatorsmile/partitionedTableFetch.
      bc02d011
    • Sean Owen's avatar
      [MINOR] Clean up several build warnings, mostly due to internal use of old accumulators · 6151d264
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Another PR to clean up recent build warnings. This particularly cleans up several instances of the old accumulator API usage in tests that are straightforward to update. I think this qualifies as "minor".
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #13642 from srowen/BuildWarnings.
      6151d264
    • Sean Zhong's avatar
      [SPARK-15914][SQL] Add deprecated method back to SQLContext for backward source code compatibility · 6e8cdef0
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      Revert partial changes in SPARK-12600, and add some deprecated methods back to SQLContext for backward source code compatibility.
      
      ## How was this patch tested?
      
      Manual test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #13637 from clockfly/SPARK-15914.
      6e8cdef0
    • Jeff Zhang's avatar
      doc fix of HiveThriftServer · 53bb0308
      Jeff Zhang authored
      ## What changes were proposed in this pull request?
      
      Just minor doc fix.
      
      cc yhuai
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #13659 from zjffdu/doc_fix.
      53bb0308
    • Adam Roberts's avatar
      [SPARK-15821][DOCS] Include parallel build info · a431e3f1
      Adam Roberts authored
      ## What changes were proposed in this pull request?
      
      We should mention that users can build Spark using multiple threads to decrease build times; either here or in "Building Spark"
      
      ## How was this patch tested?
      
      Built on machines with between one and 192 cores using `mvn -T 1C` and observed faster build times with no loss in stability.
      
      In response to the question at https://issues.apache.org/jira/browse/SPARK-15821, I think we should suggest this option, as we know it works for Spark and can result in faster builds.
      
      Author: Adam Roberts <aroberts@uk.ibm.com>
      
      Closes #13562 from a-roberts/patch-3.
      a431e3f1
    • Shixiong Zhu's avatar
      [SPARK-15935][PYSPARK] Enable test for sql/streaming.py and fix these tests · 96c3500c
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      This PR just enables tests for sql/streaming.py and also fixes the failures.
      
      ## How was this patch tested?
      
      Existing unit tests.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #13655 from zsxwing/python-streaming-test.
      96c3500c
    • Mortada Mehyar's avatar
      [DOCUMENTATION] fixed typos in python programming guide · a87a56f5
      Mortada Mehyar authored
      ## What changes were proposed in this pull request?
      
      minor typo
      
      ## How was this patch tested?
      
      minor typo in the doc, should be self explanatory
      
      Author: Mortada Mehyar <mortada.mehyar@gmail.com>
      
      Closes #13639 from mortada/typo.
      a87a56f5
    • Wenchen Fan's avatar
      [SPARK-15932][SQL][DOC] document the contract of encoder serializer expressions · 688b6ef9
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      In our encoder framework, we imply that serializer expressions should use `BoundReference` to refer to the input object, and a lot of code depends on this contract (e.g. `ExpressionEncoder.tuple`). This PR adds some documentation and asserts in `ExpressionEncoder` to make it clearer.
      
      ## How was this patch tested?
      
      existing tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #13648 from cloud-fan/comment.
      688b6ef9
  2. Jun 13, 2016
    • Sandeep Singh's avatar
      [SPARK-15663][SQL] SparkSession.catalog.listFunctions shouldn't include the... · 1842cdd4
      Sandeep Singh authored
      [SPARK-15663][SQL] SparkSession.catalog.listFunctions shouldn't include the list of built-in functions
      
      ## What changes were proposed in this pull request?
      SparkSession.catalog.listFunctions currently returns all functions, including the list of built-in functions. This makes the method not as useful because anytime it is run the result set contains over 100 built-in functions.
      
      ## How was this patch tested?
      CatalogSuite
      
      Author: Sandeep Singh <sandeep@techaddict.me>
      
      Closes #13413 from techaddict/SPARK-15663.
      1842cdd4
    • Liang-Chi Hsieh's avatar
      [SPARK-15364][ML][PYSPARK] Implement PySpark picklers for ml.Vector and... · baa3e633
      Liang-Chi Hsieh authored
      [SPARK-15364][ML][PYSPARK] Implement PySpark picklers for ml.Vector and ml.Matrix under spark.ml.python
      
      ## What changes were proposed in this pull request?
      
      Now we have PySpark picklers for new and old vector/matrix, individually. However, they are all implemented under `PythonMLlibAPI`. To separate spark.mllib from spark.ml, we should implement the picklers of new vector/matrix under `spark.ml.python` instead.
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      
      Closes #13219 from viirya/pyspark-pickler-ml.
      baa3e633
    • gatorsmile's avatar
      [SPARK-15808][SQL] File Format Checking When Appending Data · 5827b65e
      gatorsmile authored
      #### What changes were proposed in this pull request?
      **Issue:** Got wrong results or strange errors when appending data to a table with a mismatched file format.
      
      _Example 1: PARQUET -> ORC_
      ```Scala
      createDF(0, 9).write.format("parquet").saveAsTable("appendParquetToOrc")
      createDF(10, 19).write.mode(SaveMode.Append).format("orc").saveAsTable("appendParquetToOrc")
      ```
      
      Error we got:
      ```
      Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost): java.lang.RuntimeException: file:/private/var/folders/4b/sgmfldk15js406vk7lw5llzw0000gn/T/warehouse-bc8fedf2-aa6a-4002-a18b-524c6ac859d4/appendorctoparquet/part-r-00000-c0e3f365-1d46-4df5-a82c-b47d7af9feb9.snappy.orc is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [79, 82, 67, 23]
      ```
      
      _Example 2: Json -> Parquet_
      ```Scala
      createDF(0, 9).write.format("json").saveAsTable("appendJsonToCSV")
      createDF(10, 19).write.mode(SaveMode.Append).format("parquet").saveAsTable("appendJsonToCSV")
      ```
      
      No exception, but wrong results:
      ```
      +----+----+
      |  c1|  c2|
      +----+----+
      |null|null|
      |null|null|
      |null|null|
      |null|null|
      |   0|str0|
      |   1|str1|
      |   2|str2|
      |   3|str3|
      |   4|str4|
      |   5|str5|
      |   6|str6|
      |   7|str7|
      |   8|str8|
      |   9|str9|
      +----+----+
      ```
      _Example 3: Json -> Text_
      ```Scala
      createDF(0, 9).write.format("json").saveAsTable("appendJsonToText")
      createDF(10, 19).write.mode(SaveMode.Append).format("text").saveAsTable("appendJsonToText")
      ```
      
      Error we got:
      ```
      Text data source supports only a single column, and you have 2 columns.
      ```
      
      This PR is to issue an exception with appropriate error messages.
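The check boils down to comparing the table's existing format with the format requested for the append and failing fast; a minimal sketch (the exception class and message wording are stand-ins, not Spark's exact output):

```python
class AnalysisException(Exception):
    """Stand-in for Spark's AnalysisException."""

def check_append_format(existing_format, new_format, table):
    """Refuse a mismatched append up front, instead of producing
    corrupt reads or confusing low-level errors later."""
    if existing_format != new_format:
        raise AnalysisException(
            f"The format of the existing table {table} is {existing_format}. "
            f"It doesn't match the specified format {new_format}.")
```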
      
      #### How was this patch tested?
      Added test cases.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #13546 from gatorsmile/fileFormatCheck.
      5827b65e
    • Sean Zhong's avatar
      [SPARK-15910][SQL] Check schema consistency when using Kryo encoder to convert DataFrame to Dataset · 7b9071ee
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      This PR enforces schema check when converting DataFrame to Dataset using Kryo encoder. For example.
      
      **Before the change:**
      
      Schema is NOT checked when converting DataFrame to Dataset using kryo encoder.
      ```
      scala> case class B(b: Int)
      scala> implicit val encoder = Encoders.kryo[B]
      scala> val df = Seq((1)).toDF("b")
      scala> val ds = df.as[B] // Schema compatibility is NOT checked
      ```
      
      **After the change:**
      Report AnalysisException since the schema is NOT compatible.
      ```
      scala> val ds = Seq((1)).toDF("b").as[B]
      org.apache.spark.sql.AnalysisException: cannot resolve 'CAST(`b` AS BINARY)' due to data type mismatch: cannot cast IntegerType to BinaryType;
      ...
      ```
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #13632 from clockfly/spark-15910.
      7b9071ee
    • Josh Rosen's avatar
      [SPARK-15929] Fix portability of DataFrameSuite path globbing tests · a6babca1
      Josh Rosen authored
      The DataFrameSuite regression tests for SPARK-13774 fail in my environment because they attempt to glob over all of `/mnt`, and some of the subdirectories have restrictive permissions that cause the test to fail.
      
      This patch rewrites those tests to remove all environment-specific assumptions; the tests now create their own unique temporary paths for use in the tests.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #13649 from JoshRosen/SPARK-15929.
      a6babca1
    • Cheng Lian's avatar
      [SPARK-15925][SQL][SPARKR] Replaces registerTempTable with createOrReplaceTempView · ced8d669
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      This PR replaces `registerTempTable` with `createOrReplaceTempView` as a follow-up task of #12945.
      
      ## How was this patch tested?
      
      Existing SparkR tests.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #13644 from liancheng/spark-15925-temp-view-for-r.
      ced8d669
    • Wenchen Fan's avatar
      [SPARK-15887][SQL] Bring back the hive-site.xml support for Spark 2.0 · c4b1ad02
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Right now, Spark 2.0 does not load hive-site.xml. Based on users' feedback, it seems to make sense to still load this conf file.
      
      This PR adds a `hadoopConf` API in `SharedState`, which is `sparkContext.hadoopConfiguration` by default. When users are under hive context, `SharedState.hadoopConf` will load hive-site.xml and append its configs to `sparkContext.hadoopConfiguration`.
      
      When we need to read hadoop config in spark sql, we should call `SessionState.newHadoopConf`, which contains `sparkContext.hadoopConfiguration`, hive-site.xml and sql configs.
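The layering described above can be pictured as a merged configuration where later layers shadow earlier ones. This is only a sketch; the precedence order (SQL configs over hive-site over the base Hadoop configuration) is an assumption of this illustration, and the keys shown are placeholders:

```python
from collections import ChainMap

# Base Hadoop configuration, i.e. sparkContext.hadoopConfiguration.
spark_context_hadoop_conf = {"fs.defaultFS": "hdfs://nn:8020"}
# Entries loaded from hive-site.xml when running under a Hive context.
hive_site = {"hive.metastore.uris": "thrift://ms:9083"}
# SQL configs set on the session.
sql_confs = {"spark.sql.shuffle.partitions": "200"}

# ChainMap gives the first mapping precedence; dict() materializes the merge.
new_hadoop_conf = dict(ChainMap(sql_confs, hive_site, spark_context_hadoop_conf))
```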
      
      ## How was this patch tested?
      
      new test in `HiveDataFrameSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #13611 from cloud-fan/hive-site.
      c4b1ad02
    • Tathagata Das's avatar
      [SPARK-15889][SQL][STREAMING] Add a unique id to ContinuousQuery · c654ae21
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
      ContinuousQueries have names that are unique across all active ones. However, when queries are rapidly restarted with the same name, it causes race conditions with the listener. A listener event from a stopped query can arrive after the query has been restarted, leading to complexities in monitoring infrastructure.
      
      Along with this change, I have also consolidated all the messy code paths to start queries with different sinks.
      
      ## How was this patch tested?
      Added unit tests, and existing unit tests.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #13613 from tdas/SPARK-15889.
      c654ae21
    • Takeshi YAMAMURO's avatar
      [SPARK-15530][SQL] Set #parallelism for file listing in listLeafFilesInParallel · 5ad4e32d
      Takeshi YAMAMURO authored
      ## What changes were proposed in this pull request?
      This PR sets the number of partitions to prevent file listing in `listLeafFilesInParallel` from generating many tasks in case of a large #defaultParallelism.
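The idea reduces to capping the listing parallelism by the number of paths to list; a toy sketch (the function name and the cap constant are hypothetical, not Spark's configuration):

```python
def listing_parallelism(num_paths, default_parallelism, max_tasks=10000):
    """Cap the number of listing tasks so a huge defaultParallelism
    doesn't spawn far more tasks than there are paths to list."""
    return max(1, min(num_paths, default_parallelism, max_tasks))
```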
      
      ## How was this patch tested?
      Manually checked
      
      Author: Takeshi YAMAMURO <linguin.m.s@gmail.com>
      
      Closes #13444 from maropu/SPARK-15530.
      5ad4e32d
    • gatorsmile's avatar
      [SPARK-15676][SQL] Disallow Column Names as Partition Columns For Hive Tables · 3b7fb84c
      gatorsmile authored
      #### What changes were proposed in this pull request?
      When creating a Hive Table (not data source tables), a common error users might make is to specify an existing column name as a partition column. Below is what Hive returns in this case:
      ```
      hive> CREATE TABLE partitioned (id bigint, data string) PARTITIONED BY (data string, part string);
      FAILED: SemanticException [Error 10035]: Column repeated in partitioning columns
      ```
      Currently, the error we issued is very confusing:
      ```
      org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:For direct MetaStore DB connections, we don't support retries at the client level.);
      ```
      This PR is to fix the above issue by capturing the usage error in `Parser`.
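The parser-level check amounts to rejecting any overlap between the table's data columns and its partition columns; a minimal sketch (error wording is a stand-in for the actual parser message):

```python
def check_partition_columns(columns, partition_columns):
    """Reject a table definition whose partition columns repeat
    data columns, with a clear message instead of a metastore error."""
    duplicates = set(columns) & set(partition_columns)
    if duplicates:
        raise ValueError(
            "Operation not allowed: partition columns duplicate table columns: "
            + ", ".join(sorted(duplicates)))
```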
      
      #### How was this patch tested?
      Added a test case to `DDLCommandSuite`
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #13415 from gatorsmile/partitionColumnsInTableSchema.
      3b7fb84c
    • Tathagata Das's avatar
      [HOTFIX][MINOR][SQL] Revert " Standardize 'continuous queries' to 'streaming D… · a6a18a45
      Tathagata Das authored
      This reverts commit d32e2277.
      Broke build - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-branch-2.0-compile-maven-hadoop-2.3/326/console
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #13645 from tdas/build-break.
      a6a18a45
    • Liwei Lin's avatar
      [MINOR][SQL] Standardize 'continuous queries' to 'streaming Datasets/DataFrames' · d32e2277
      Liwei Lin authored
      ## What changes were proposed in this pull request?
      
      This patch does some replacing (as `streaming Datasets/DataFrames` is the term we've chosen in [SPARK-15593](https://github.com/apache/spark/commit/00c310133df4f3893dd90d801168c2ab9841b102)):
       - `continuous queries` -> `streaming Datasets/DataFrames`
       - `non-continuous queries` -> `non-streaming Datasets/DataFrames`
      
      This patch also adds `test("check foreach() can only be called on streaming Datasets/DataFrames")`.
      
      ## How was this patch tested?
      
      N/A
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #13595 from lw-lin/continuous-queries-to-streaming-dss-dfs.
      d32e2277
    • Prashant Sharma's avatar
      [SPARK-15697][REPL] Unblock some of the useful repl commands. · 4134653e
      Prashant Sharma authored
      ## What changes were proposed in this pull request?
      
      Unblock some of the useful repl commands, like "implicits", "javap", "power", "type", and "kind". They are useful, fully functional, and part of the scala/scala project, so I see no harm in having them.
      
      Verbatim paste from the JIRA description:
      The "implicits", "javap", "power", "type", and "kind" commands in the repl are blocked. However, they work fine in all cases I have tried. It is clear we don't support them, as they are part of the scala/scala repl project. What is the harm in unblocking them, given they are useful?
      In previous versions of Spark we disabled these commands because it was difficult to support them without customization and the associated maintenance, since the code base of the scala repl was actually ported and maintained under the Spark source. Now that is not the situation, and one can benefit from these commands in the Spark REPL as much as in the scala repl.
      
      ## How was this patch tested?
      Existing tests and manual, by trying out all of the above commands.
      
      P.S. Symantics of reset are to be discussed in a separate issue.
      
      Author: Prashant Sharma <prashsh1@in.ibm.com>
      
      Closes #13437 from ScrapCodes/SPARK-15697/repl-unblock-commands.
      4134653e
    • Dongjoon Hyun's avatar
      [SPARK-15913][CORE] Dispatcher.stopped should be enclosed by synchronized block. · 938434dc
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      `Dispatcher.stopped` is guarded by `this`, but it is used without synchronization in the `postMessage` function. This PR fixes this and also makes the exception message more accurate.
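The race and its fix can be sketched in a few lines (a toy Python analogue, not the Scala Dispatcher): the stopped flag must be read under the same lock that guards its writes:

```python
import threading

class Dispatcher:
    """Sketch of the fix: reads and writes of the stopped flag share one lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._stopped = False
        self._queue = []

    def stop(self):
        with self._lock:
            self._stopped = True

    def post_message(self, message):
        # Check the flag under the same lock that guards writes, so a
        # concurrent stop() can't slip in between the check and the append.
        with self._lock:
            if self._stopped:
                raise RuntimeError("Dispatcher already stopped")
            self._queue.append(message)
```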
      
      ## How was this patch tested?
      
      Pass the existing Jenkins tests.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #13634 from dongjoon-hyun/SPARK-15913.
      938434dc
    • Wenchen Fan's avatar
      [SPARK-15814][SQL] Aggregator can return null result · cd47e233
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      It's similar to the bug fixed in https://github.com/apache/spark/pull/13425, we should consider null object and wrap the `CreateStruct` with `If` to do null check.
      
      This PR also improves the test framework to test the objects of `Dataset[T]` directly, instead of calling `toDF` and compare the rows.
      
      ## How was this patch tested?
      
      new test in `DatasetAggregatorSuite`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #13553 from cloud-fan/agg-null.
      cd47e233
    • Peter Ableda's avatar
      [SPARK-15813] Improve Canceling log message to make it less ambiguous · d681742b
      Peter Ableda authored
      ## What changes were proposed in this pull request?
      Add new desired executor number to make the log message less ambiguous.
      
      ## How was this patch tested?
      This is a trivial change
      
      Author: Peter Ableda <abledapeter@gmail.com>
      
      Closes #13552 from peterableda/patch-1.
      d681742b
  3. Jun 12, 2016
    • Wenchen Fan's avatar
      [SPARK-15898][SQL] DataFrameReader.text should return DataFrame · e2ab79d5
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      We want to maintain API compatibility for DataFrameReader.text, and will introduce a new API called DataFrameReader.textFile which returns Dataset[String].
      
      affected PRs:
      https://github.com/apache/spark/pull/11731
      https://github.com/apache/spark/pull/13104
      https://github.com/apache/spark/pull/13184
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #13604 from cloud-fan/revert.
      e2ab79d5
    • Herman van Hövell tot Westerflier's avatar
      [SPARK-15370][SQL] Fix count bug · 1f8f2b5c
      Herman van Hövell tot Westerflier authored
      # What changes were proposed in this pull request?
      This pull request fixes the COUNT bug in the `RewriteCorrelatedScalarSubquery` rule.
      
      After this change, the rule tests the expression at the root of the correlated subquery to determine whether the expression returns `NULL` on empty input. If the expression does not return `NULL`, the rule generates additional logic in the `Project` operator above the rewritten subquery. This additional logic intercepts `NULL` values coming from the outer join and replaces them with the value that the subquery's expression would return on empty input.
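The null-intercepting projection described above is equivalent to coalescing the outer join's NULLs to the aggregate's empty-input value (0 for COUNT); a toy sketch, with dicts standing in for the joined relations:

```python
def count_with_default(outer_keys, subquery_counts, default=0):
    """Model the rewritten subquery: a key missing from the correlated
    subquery surfaces as None (NULL) after the outer join, but COUNT on
    empty input should be 0, so substitute the default."""
    result = {}
    for k in outer_keys:
        v = subquery_counts.get(k)       # None models NULL from the outer join
        result[k] = default if v is None else v
    return result
```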
      
      This PR takes over https://github.com/apache/spark/pull/13155. It only fixes an issue with `Literal` construction and style issues. All credit should go to frreiss.
      
      # How was this patch tested?
      Added regression tests to cover all branches of the updated rule (see changes to `SubquerySuite`).
      Ran all existing automated regression tests after merging with latest trunk.
      
      Author: frreiss <frreiss@us.ibm.com>
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #13629 from hvanhovell/SPARK-15370-cleanup.
      1f8f2b5c
    • Takuya UESHIN's avatar
      [SPARK-15870][SQL] DataFrame can't execute after uncacheTable. · caebd7f2
      Takuya UESHIN authored
      ## What changes were proposed in this pull request?
      
If a cached `DataFrame` is executed more than once and then uncached with `uncacheTable`, as in the following:
      
      ```
          val selectStar = sql("SELECT * FROM testData WHERE key = 1")
          selectStar.createOrReplaceTempView("selectStar")
      
          spark.catalog.cacheTable("selectStar")
          checkAnswer(
            selectStar,
            Seq(Row(1, "1")))
      
          spark.catalog.uncacheTable("selectStar")
          checkAnswer(
            selectStar,
            Seq(Row(1, "1")))
      ```
      
then the uncached `DataFrame` can no longer execute, failing with a `Task not serializable` exception like:
      
      ```
      org.apache.spark.SparkException: Task not serializable
      	at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
      	at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
      	at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
      	at org.apache.spark.SparkContext.clean(SparkContext.scala:2038)
      	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897)
      	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1912)
      	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:884)
      	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
      	at org.apache.spark.rdd.RDD.withScope(RDD.scala:357)
      	at org.apache.spark.rdd.RDD.collect(RDD.scala:883)
      	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:290)
      ...
      Caused by: java.lang.UnsupportedOperationException: Accumulator must be registered before send to executor
      	at org.apache.spark.util.AccumulatorV2.writeReplace(AccumulatorV2.scala:153)
      	at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at java.io.ObjectStreamClass.invokeWriteReplace(ObjectStreamClass.java:1118)
      	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1136)
      	at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
      	at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
      	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
      ...
      ```
      
Notice that a `DataFrame` uncached with `DataFrame.unpersist()` still works, but one uncached with `spark.catalog.uncacheTable` doesn't.
      
This PR reverts the part of cf38fe04 that unregisters the `batchStats` accumulator. It does not need to be unregistered here, because `ContextCleaner` will do so after the accumulator is collected by GC.
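The invariant behind the stack trace can be sketched in plain Python (a toy model, not the real `AccumulatorV2` code; the class and method names here are hypothetical): serializing an accumulator for shipment to an executor requires that it still be registered, so unregistering `batchStats` in `uncacheTable` breaks any later re-execution of the plan.

```python
class Accumulator:
    """Toy model of the AccumulatorV2 registration invariant (hypothetical API)."""
    def __init__(self):
        self.registered = False

    def register(self):
        self.registered = True

    def unregister(self):
        self.registered = False

    def write_replace(self):
        # Mirrors the check that raised UnsupportedOperationException above.
        if not self.registered:
            raise RuntimeError("Accumulator must be registered before send to executor")
        return self

batch_stats = Accumulator()
batch_stats.register()
batch_stats.write_replace()        # fine: first execution serializes normally
batch_stats.unregister()           # what uncacheTable was doing before this fix
try:
    batch_stats.write_replace()    # re-execution of the plan now fails
except RuntimeError as e:
    print(e)
```

This is why leaving the accumulator registered (and letting `ContextCleaner` reclaim it later) is safe, while eagerly unregistering it is not.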
      
      ## How was this patch tested?
      
Added a test checking that a `DataFrame` can execute after `uncacheTable`, in addition to the existing tests.
The test checking that the accumulator is cleared is marked `ignore`, because it would be flaky.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #13596 from ueshin/issues/SPARK-15870.
      caebd7f2
    • Herman van Hovell's avatar
[SPARK-15370][SQL] Revert PR "Update RewriteCorrelatedScalarSubquery rule" · 20b8f2c3
      Herman van Hovell authored
      This reverts commit 9770f6ee.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #13626 from hvanhovell/SPARK-15370-revert.
      20b8f2c3
    • hyukjinkwon's avatar
      [SPARK-15892][ML] Incorrectly merged AFTAggregator with zero total count · e3554605
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
Currently, `AFTAggregator` is not being merged correctly. For example, if the data contains even a single empty partition, that partition produces an `AFTAggregator` with zero total count, which causes the exception below:
      
      ```
      IllegalArgumentException: u'requirement failed: The number of instances should be greater than 0.0, but got 0.'
      ```
      
      Please see [AFTSurvivalRegression.scala#L573-L575](https://github.com/apache/spark/blob/6ecedf39b44c9acd58cdddf1a31cf11e8e24428c/mllib/src/main/scala/org/apache/spark/ml/regression/AFTSurvivalRegression.scala#L573-L575) as well.
      
Just to be clear, the Python example `aft_survival_regression.py` seems to use 5 rows. So, if there are more than 5 partitions, it throws the exception above, since some partitions are empty, resulting in an incorrectly merged `AFTAggregator`.
      
Executing `bin/spark-submit examples/src/main/python/ml/aft_survival_regression.py` on a machine with more than 5 CPU cores fails because, with default configurations, it creates tasks with some empty partitions (AFAIK, the default parallelism level is set to the number of CPU cores).
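The partition arithmetic can be sketched without Spark (plain Python; the round-robin `partition` helper and the `Agg` class are hypothetical stand-ins, not the real `parallelize` or `AFTAggregator`): distributing 5 rows over more than 5 partitions necessarily leaves some partitions empty, and each empty partition contributes an aggregator with zero count to the tree aggregation.

```python
# Plain-Python sketch: 5 rows over 8 "partitions" leaves some partitions empty,
# and each empty partition yields an aggregator with count == 0.

def partition(rows, n):
    """Round-robin split, a simplified model of how elements get distributed."""
    parts = [[] for _ in range(n)]
    for i, row in enumerate(rows):
        parts[i % n].append(row)
    return parts

class Agg:
    """Stand-in aggregator (hypothetical, not the real AFTAggregator)."""
    def __init__(self):
        self.count, self.loss_sum = 0, 0.0

    def add(self, row):
        self.count += 1
        self.loss_sum += row
        return self

    def merge(self, other):
        # A correct merge simply sums both sides; zero-count operands
        # from empty partitions must be tolerated, not rejected.
        self.count += other.count
        self.loss_sum += other.loss_sum
        return self

rows = [1.0, 2.0, 3.0, 4.0, 5.0]   # the example's 5 training rows
parts = partition(rows, 8)          # more than 5 partitions: some are empty
aggs = []
for part in parts:
    agg = Agg()
    for row in part:
        agg.add(row)
    aggs.append(agg)

print(any(a.count == 0 for a in aggs))   # True: empty partitions really occur
total = Agg()
for a in aggs:
    total = total.merge(a)
print(total.count)                        # 5
```

With a merge that handles zero-count operands, the total is still correct; the bug was that the merged aggregator could end up in a state that tripped the `count > 0` requirement check.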
      
      ## How was this patch tested?
      
A unit test in `AFTSurvivalRegressionSuite.scala`, and manual testing via `bin/spark-submit examples/src/main/python/ml/aft_survival_regression.py`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      Author: Hyukjin Kwon <gurwls223@gmail.com>
      
      Closes #13619 from HyukjinKwon/SPARK-15892.
      e3554605