  1. Mar 23, 2017
    • Tyson Condie's avatar
      [SPARK-19876][SS][WIP] OneTime Trigger Executor · 746a558d
      Tyson Condie authored
      ## What changes were proposed in this pull request?
      
      This patch adds a trigger and trigger executor that execute a single trigger only. The OneTime trigger gives users more control over the scheduling of triggers.
      
      In addition, this patch requires an optimization to StreamExecution that logs a commit record at the end of successfully processing a batch. After a restart, this new commit log, rather than the offset log, is used to determine the next batch (offsets) to process; determining this from the offset log alone would always re-process the previously logged batch, which would not permit a OneTime trigger feature.
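
      In released Spark this surfaces as `Trigger.Once()`. Below is a minimal sketch of the intended usage, assuming a streaming DataFrame `df` and illustrative paths:

      ```scala
      import org.apache.spark.sql.streaming.Trigger

      // Process whatever data is currently available as one batch, then stop.
      // The checkpoint location lets a later run resume from the commit log.
      val query = df.writeStream
        .format("parquet")
        .option("checkpointLocation", "/tmp/checkpoint") // illustrative path
        .option("path", "/tmp/out")                      // illustrative path
        .trigger(Trigger.Once())
        .start()
      query.awaitTermination()
      ```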
      
      ## How was this patch tested?
      
      A number of existing tests have been revised. These tests all assumed that, when restarting a stream, the last batch in the offset log is to be re-processed. Given that we now have a commit log that tells us whether that last batch was processed successfully, the results/assumptions of those tests have been revised accordingly.
      
      In addition, a OneTime trigger test was added to StreamingQuerySuite, which tests:
      - The semantics of OneTime trigger (i.e., on start, execute a single batch, then stop).
      - The case when the commit log was not able to successfully log the completion of a batch before restart, which would mean that we should fall back to what's in the offset log.
      - A OneTime trigger execution that results in an exception being thrown.
      
      marmbrus tdas zsxwing
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Tyson Condie <tcondie@gmail.com>
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #17219 from tcondie/stream-commit.
      746a558d
    • Ye Yin's avatar
      Typo fixup in comment · b0ae6a38
      Ye Yin authored
      ## What changes were proposed in this pull request?
      
      Fix a typo in a comment.
      
      ## How was this patch tested?
      
      Not needed; this is a comment-only change.
      
      Author: Ye Yin <eyniy@qq.com>
      
      Closes #17396 from hustcat/fix.
      b0ae6a38
    • Sean Owen's avatar
      [INFRA] Close stale PRs · b70c03a4
      Sean Owen authored
      Closes #16819
      Closes #13467
      Closes #16083
      Closes #17135
      Closes #8785
      Closes #16278
      Closes #16997
      Closes #17073
      Closes #17220
      
      Added:
      Closes #12059
      Closes #12524
      Closes #12888
      Closes #16061
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #17386 from srowen/StalePRs.
      b70c03a4
    • hyukjinkwon's avatar
      [MINOR][BUILD] Fix javadoc8 break · aefe7989
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      Several javadoc8 breaks have been introduced. This PR proposes to fix those instances so that we can build Scala/Java API docs.
      
      ```
      [error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:6: error: reference not found
      [error]  * <code>flatMapGroupsWithState</code> operations on {link KeyValueGroupedDataset}.
      [error]                                                             ^
      [error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:10: error: reference not found
      [error]  * Both, <code>mapGroupsWithState</code> and <code>flatMapGroupsWithState</code> in {link KeyValueGroupedDataset}
      [error]                                                                                            ^
      [error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:51: error: reference not found
      [error]  *    {link GroupStateTimeout.ProcessingTimeTimeout}) or event time (i.e.
      [error]              ^
      [error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:52: error: reference not found
      [error]  *    {link GroupStateTimeout.EventTimeTimeout}).
      [error]              ^
      [error] .../spark/sql/core/target/java/org/apache/spark/sql/streaming/GroupState.java:158: error: reference not found
      [error]  *           Spark SQL types (see {link Encoder} for more details).
      [error]                                          ^
      [error] .../spark/mllib/target/java/org/apache/spark/ml/fpm/FPGrowthParams.java:26: error: bad use of '>'
      [error]    * Number of partitions (>=1) used by parallel FP-growth. By default the param is not set, and
      [error]                            ^
      [error] .../spark/sql/core/src/main/java/org/apache/spark/api/java/function/FlatMapGroupsWithStateFunction.java:30: error: reference not found
      [error]  * {link org.apache.spark.sql.KeyValueGroupedDataset#flatMapGroupsWithState(
      [error]           ^
      [error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyValueGroupedDataset.java:211: error: reference not found
      [error]    * See {link GroupState} for more details.
      [error]                 ^
      [error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyValueGroupedDataset.java:232: error: reference not found
      [error]    * See {link GroupState} for more details.
      [error]                 ^
      [error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyValueGroupedDataset.java:254: error: reference not found
      [error]    * See {link GroupState} for more details.
      [error]                 ^
      [error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyValueGroupedDataset.java:277: error: reference not found
      [error]    * See {link GroupState} for more details.
      [error]                 ^
      [error] .../spark/core/target/java/org/apache/spark/TaskContextImpl.java:10: error: reference not found
      [error]  * {link TaskMetrics} &amp; {link MetricsSystem} objects are not thread safe.
      [error]           ^
      [error] .../spark/core/target/java/org/apache/spark/TaskContextImpl.java:10: error: reference not found
      [error]  * {link TaskMetrics} &amp; {link MetricsSystem} objects are not thread safe.
      [error]                                     ^
      [info] 13 errors
      ```
      
      ```
      jekyll 3.3.1 | Error:  Unidoc generation failed
      ```
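
      These breaks typically come from `{@link ...}` tags that genjavadoc emits for Scaladoc references javadoc 8 cannot resolve. A hedged sketch of one kind of change that resolves them (illustrative, not the exact diff):

      ```scala
      // Before: a Scaladoc wiki link becomes an unresolvable {@link} in the generated Java docs.
      /** Operations on [[KeyValueGroupedDataset]] such as `flatMapGroupsWithState`. */

      // After: plain monospace avoids emitting a javadoc reference altogether.
      /** Operations on `KeyValueGroupedDataset` such as `flatMapGroupsWithState`. */
      ```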
      
      ## How was this patch tested?
      
      Manually via `jekyll build`
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17389 from HyukjinKwon/minor-javadoc8-fix.
      aefe7989
    • hyukjinkwon's avatar
      [SPARK-18579][SQL] Use ignoreLeadingWhiteSpace and ignoreTrailingWhiteSpace options in CSV writing · 07c12c09
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to support _not_ trimming the white spaces when writing out. These options default to `false` in the CSV reading path but to `true` in the univocity parser's CSV writing path.
      
      Neither the `ignoreLeadingWhiteSpace` nor the `ignoreTrailingWhiteSpace` option is currently applied when writing, and therefore we are always trimming the white spaces.
      
      It seems we should provide an easy way to keep these white spaces.
      
      With the data below:
      
      ```scala
      val df = spark.read.csv(Seq("a , b  , c").toDS)
      df.show()
      ```
      
      ```
      +---+----+---+
      |_c0| _c1|_c2|
      +---+----+---+
      | a | b  |  c|
      +---+----+---+
      ```
      
      **Before**
      
      ```scala
      df.write.csv("/tmp/text.csv")
      spark.read.text("/tmp/text.csv").show()
      ```
      
      ```
      +-----+
      |value|
      +-----+
      |a,b,c|
      +-----+
      ```
      
      It seems this can't be worked around via `quoteAll` either.
      
      ```scala
      df.write.option("quoteAll", true).csv("/tmp/text.csv")
      spark.read.text("/tmp/text.csv").show()
      ```
      ```
      +-----------+
      |      value|
      +-----------+
      |"a","b","c"|
      +-----------+
      ```
      
      **After**
      
      ```scala
      df.write.option("ignoreLeadingWhiteSpace", false).option("ignoreTrailingWhiteSpace", false).csv("/tmp/text.csv")
      spark.read.text("/tmp/text.csv").show()
      ```
      
      ```
      +----------+
      |     value|
      +----------+
      |a , b  , c|
      +----------+
      ```
      
      Note that this case is possible in R
      
      ```r
      > system("cat text.csv")
      f1,f2,f3
      a , b  , c
      > df <- read.csv(file="text.csv")
      > df
        f1   f2 f3
      1 a   b    c
      > write.csv(df, file="text1.csv", quote=F, row.names=F)
      > system("cat text1.csv")
      f1,f2,f3
      a , b  , c
      ```
      
      ## How was this patch tested?
      
      Unit tests in `CSVSuite` and manual tests for Python.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17310 from HyukjinKwon/SPARK-18579.
      07c12c09
  2. Mar 22, 2017
    • Sameer Agarwal's avatar
      [BUILD][MINOR] Fix 2.10 build · 12cd0070
      Sameer Agarwal authored
      ## What changes were proposed in this pull request?
      
      https://github.com/apache/spark/pull/17385 breaks the 2.10 sbt/maven builds by hitting an empty-string interpolation bug (https://issues.scala-lang.org/browse/SI-7919).
      
      https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-sbt-scala-2.10/4072/
      https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-scala-2.10/3987/
      
      ## How was this patch tested?
      
      Compiles
      
      Author: Sameer Agarwal <sameerag@cs.berkeley.edu>
      
      Closes #17391 from sameeragarwal/build-fix.
      12cd0070
    • Tathagata Das's avatar
      [SPARK-20057][SS] Renamed KeyedState to GroupState in mapGroupsWithState · 82b598b9
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
      Since the state is tied to a "group" in the "mapGroupsWithState" operations, it's better to call the state "GroupState" instead of naming it after a key. This also makes it more general if the operation is extended to RelationalGroupedDataset and the Python APIs.
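
      A minimal sketch of the renamed API, assuming a `Dataset[(String, Int)]` named `ds` and `spark.implicits._` in scope:

      ```scala
      import org.apache.spark.sql.streaming.GroupState

      // Keep a running count per key; GroupState (formerly KeyedState) carries
      // the per-group state across batches.
      val counts = ds.groupByKey(_._1).mapGroupsWithState[Int, (String, Int)] {
        (key: String, values: Iterator[(String, Int)], state: GroupState[Int]) =>
          val newCount = state.getOption.getOrElse(0) + values.size
          state.update(newCount)
          (key, newCount)
      }
      ```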
      
      ## How was this patch tested?
      Existing unit tests.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #17385 from tdas/SPARK-20057.
      82b598b9
    • hyukjinkwon's avatar
      [SPARK-20018][SQL] Pivot with timestamp and count should not print internal representation · 80fd0703
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      Currently, when we perform count with timestamp types, it prints the internal representation as the column name as below:
      
      ```scala
      Seq(new java.sql.Timestamp(1)).toDF("a").groupBy("a").pivot("a").count().show()
      ```
      
      ```
      +--------------------+----+
      |                   a|1000|
      +--------------------+----+
      |1969-12-31 16:00:...|   1|
      +--------------------+----+
      ```
      
      This PR proposes to use external Scala value instead of the internal representation in the column names as below:
      
      ```
      +--------------------+-----------------------+
      |                   a|1969-12-31 16:00:00.001|
      +--------------------+-----------------------+
      |1969-12-31 16:00:...|                      1|
      +--------------------+-----------------------+
      ```
      
      ## How was this patch tested?
      
      Unit test in `DataFramePivotSuite` and manual tests.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17348 from HyukjinKwon/SPARK-20018.
      80fd0703
    • hyukjinkwon's avatar
      [SPARK-19949][SQL][FOLLOW-UP] Clean up parse modes and update related comments · 46581838
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to make the `mode` options in both CSV and JSON use case objects, and fixes some comments related to the previous fix.
      
      Also, this PR modifies some tests related to parse modes.
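
      A hedged sketch of the case-object pattern being adopted; the names follow the description rather than the exact Spark internals:

      ```scala
      sealed trait ParseMode { def name: String }
      case object PermissiveMode extends ParseMode { val name = "PERMISSIVE" }
      case object DropMalformedMode extends ParseMode { val name = "DROPMALFORMED" }
      case object FailFastMode extends ParseMode { val name = "FAILFAST" }

      object ParseMode {
        // Resolve a user-supplied option value to a mode, case-insensitively.
        def fromString(mode: String): ParseMode = mode.toUpperCase match {
          case PermissiveMode.name    => PermissiveMode
          case DropMalformedMode.name => DropMalformedMode
          case FailFastMode.name      => FailFastMode
          case _ => throw new IllegalArgumentException(s"Illegal parse mode: $mode")
        }
      }
      ```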
      
      ## How was this patch tested?
      
      Modified unit tests in both `CSVSuite.scala` and `JsonSuite.scala`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17377 from HyukjinKwon/SPARK-19949.
      46581838
    • Prashant Sharma's avatar
      [SPARK-20027][DOCS] Compilation fix in java docs. · 0caade63
      Prashant Sharma authored
      ## What changes were proposed in this pull request?
      
      During `build/sbt publish-local`, the build breaks due to javadoc errors. This patch fixes those errors.
      
      ## How was this patch tested?
      
      Tested by running the sbt build.
      
      Author: Prashant Sharma <prashsh1@in.ibm.com>
      
      Closes #17358 from ScrapCodes/docs-fix.
      0caade63
    • uncleGen's avatar
      [SPARK-20021][PYSPARK] Miss backslash in python code · facfd608
      uncleGen authored
      ## What changes were proposed in this pull request?
      
      Add backslash for line continuation in python code.
      
      ## How was this patch tested?
      
      Jenkins.
      
      Author: uncleGen <hustyugm@gmail.com>
      Author: dylon <hustyugm@gmail.com>
      
      Closes #17352 from uncleGen/python-example-doc.
      facfd608
    • Xiao Li's avatar
      [SPARK-20023][SQL] Output table comment for DESC FORMATTED · 7343a094
      Xiao Li authored
      ### What changes were proposed in this pull request?
      Currently, `DESC FORMATTED` does not output the table comment, unlike `DESC EXTENDED`. This PR fixes that.
      
      It also corrects the following displayed names in `DESC FORMATTED`, to be consistent with `DESC EXTENDED`:
      - `"Create Time:"` -> `"Created:"`
      - `"Last Access Time:"` -> `"Last Access:"`
      
      ### How was this patch tested?
      Added test cases in `describe.sql`
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #17381 from gatorsmile/descFormattedTableComment.
      7343a094
  3. Mar 21, 2017
    • Yanbo Liang's avatar
      [SPARK-19925][SPARKR] Fix SparkR spark.getSparkFiles fails when it was called on executors. · 478fbc86
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      SparkR ```spark.getSparkFiles``` fails when called on executors; see details at [SPARK-19925](https://issues.apache.org/jira/browse/SPARK-19925).
      
      ## How was this patch tested?
      Add unit tests, and verify this fix on standalone and yarn clusters.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #17274 from yanboliang/spark-19925.
      478fbc86
    • Tathagata Das's avatar
      [SPARK-20030][SS] Event-time-based timeout for MapGroupsWithState · c1e87e38
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
      This adds an event-time-based timeout. The user sets the timeout timestamp directly using `KeyedState.setTimeoutTimestamp`. A key times out when the watermark crosses its timeout timestamp.
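
      A minimal sketch, written against the later `GroupState` name and assuming a `Dataset[(String, Long)]` of (key, eventTimeMillis) named `ds`, `spark.implicits._` in scope, and a watermark already defined upstream:

      ```scala
      import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

      val latestPerKey = ds.groupByKey(_._1)
        .mapGroupsWithState(GroupStateTimeout.EventTimeTimeout) {
          (key: String, values: Iterator[(String, Long)], state: GroupState[Long]) =>
            if (state.hasTimedOut) {
              // The watermark crossed the timestamp set below; drop the state.
              state.remove()
              (key, -1L)
            } else {
              val maxEvent = (state.getOption ++ values.map(_._2)).max
              state.update(maxEvent)
              // Time this key out once the watermark passes 10s after its latest event.
              state.setTimeoutTimestamp(maxEvent + 10000L)
              (key, maxEvent)
            }
        }
      ```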
      
      ## How was this patch tested?
      Unit tests
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #17361 from tdas/SPARK-20030.
      c1e87e38
    • Kunal Khamar's avatar
      [SPARK-20051][SS] Fix StreamSuite flaky test - recover from v2.1 checkpoint · 2d73fcce
      Kunal Khamar authored
      ## What changes were proposed in this pull request?
      
      There is a race condition between calling stop on a streaming query and deleting directories in `withTempDir`, which causes the test to fail. The fix is to delete lazily, via a delete-on-shutdown JVM hook.
      
      ## How was this patch tested?
      
      - Unit test
        - repeated 300 runs with no failure
      
      Author: Kunal Khamar <kkhamar@outlook.com>
      
      Closes #17382 from kunalkhamar/partition-bugfix.
      2d73fcce
    • hyukjinkwon's avatar
      [SPARK-19919][SQL] Defer throwing the exception for empty paths in CSV datasource into `DataSource` · 9281a3d5
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to defer throwing the exception within `DataSource`.
      
      Currently, when other datasources fail to infer the schema, they return `None`, and this is then validated in `DataSource` as below:
      
      ```
      scala> spark.read.json("emptydir")
      org.apache.spark.sql.AnalysisException: Unable to infer schema for JSON. It must be specified manually.;
      ```
      
      ```
      scala> spark.read.orc("emptydir")
      org.apache.spark.sql.AnalysisException: Unable to infer schema for ORC. It must be specified manually.;
      ```
      
      ```
      scala> spark.read.parquet("emptydir")
      org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
      ```
      
      However, CSV checks this within the datasource implementation and throws a different exception message, as below:
      
      ```
      scala> spark.read.csv("emptydir")
      java.lang.IllegalArgumentException: requirement failed: Cannot infer schema from an empty set of files
      ```
      
      We could remove this duplicated check and validate it in one place, in the same way and with the same message.
      
      ## How was this patch tested?
      
      Unit test in `CSVSuite` and manual test.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17256 from HyukjinKwon/SPARK-19919.
      9281a3d5
    • Will Manning's avatar
      clarify array_contains function description · a04dcde8
      Will Manning authored
      ## What changes were proposed in this pull request?
      
      The description in the comment for array_contains is vague/incomplete (i.e., doesn't mention that it returns `null` if the array is `null`); this PR fixes that.
      
      ## How was this patch tested?
      
      No testing, since it merely changes a comment.
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Will Manning <lwwmanning@gmail.com>
      
      Closes #17380 from lwwmanning/patch-1.
      a04dcde8
    • Felix Cheung's avatar
      [SPARK-19237][SPARKR][CORE] On Windows spark-submit should handle when java is not installed · a8877bdb
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      When SparkR is installed as an R package, there might not be any Java runtime.
      If there is not, SparkR's `sparkR.session()` will block waiting for the connection to time out, hanging the R IDE/shell without any notification or message.
      
      ## How was this patch tested?
      
      manually
      
      - [x] need to test on Windows
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #16596 from felixcheung/rcheckjava.
      a8877bdb
    • zhaorongsheng's avatar
      [SPARK-20017][SQL] change the nullability of function 'StringToMap' from 'false' to 'true' · 7dbc162f
      zhaorongsheng authored
      ## What changes were proposed in this pull request?
      
      Change the nullability of function `StringToMap` from `false` to `true`.
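      
      An assumed repro, inferred from the branch name (`strToMap_NPE`) rather than from the patch itself: with the corrected nullability, a null input should yield NULL instead of failing at runtime.
      
      ```scala
      // str_to_map(text, pairDelim, keyValueDelim); a null input now produces NULL
      spark.sql("SELECT str_to_map(null, ',', ':')").show()
      ```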
      
      Author: zhaorongsheng <334362872@qq.com>
      
      Closes #17350 from zhaorongsheng/bug-fix_strToMap_NPE.
      7dbc162f
    • Joseph K. Bradley's avatar
      [SPARK-20039][ML] rename ChiSquare to ChiSquareTest · ae4b91d1
      Joseph K. Bradley authored
      ## What changes were proposed in this pull request?
      
      I realized that since ChiSquare is in the package stat, it's pretty unclear if it's the hypothesis test, distribution, or what. This PR renames it to ChiSquareTest to clarify this.
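
      A minimal sketch of the renamed API, assuming `spark.implicits._` is in scope; the data is illustrative:

      ```scala
      import org.apache.spark.ml.linalg.Vectors
      import org.apache.spark.ml.stat.ChiSquareTest

      val data = Seq(
        (0.0, Vectors.dense(0.5, 10.0)),
        (1.0, Vectors.dense(1.5, 20.0)),
        (1.0, Vectors.dense(1.5, 30.0))
      ).toDF("label", "features")

      // Returns a one-row DataFrame with pValues, degreesOfFreedom, and statistics.
      ChiSquareTest.test(data, "features", "label").show(truncate = false)
      ```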
      
      ## How was this patch tested?
      
      Existing unit tests
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #17368 from jkbradley/SPARK-20039.
      ae4b91d1
    • Xin Wu's avatar
      [SPARK-19261][SQL] Alter add columns for Hive serde and some datasource tables · 4c0ff5f5
      Xin Wu authored
      ## What changes were proposed in this pull request?
      Support `ALTER TABLE ADD COLUMNS (...)` syntax for Hive serde and some datasource tables (see the sketch after the list below).
      In this PR, we consider a few aspects:
      
      1. View is not supported for `ALTER ADD COLUMNS`
      
      2. Since tables created in SparkSQL with Hive DDL syntax populate table properties with schema information, we need to make sure the schema is consistent before and after the ALTER operation, for future use.
      
      3. For embedded-schema formats such as `parquet`, we need to make sure that predicates on the newly-added columns can be evaluated properly, or pushed down properly. If a data file does not contain the newly-added columns, such predicates should return as if the column values were NULLs.
      
      4. For datasource tables, this feature does not support the following:
      4.1 TEXT format, since only a single default column `value` is inferred for text format data.
      4.2 ORC format, since SparkSQL's native ORC reader does not support the difference between a user-specified schema and the schema inferred from ORC files.
      4.3 Third-party datasource types that implement RelationProvider, including the built-in JDBC format, since different vendor implementations may have different ways of dealing with the schema.
      4.4 Other datasource types, such as `parquet`, `json`, `csv`, and `hive`, are supported.
      
      5. Column names being added cannot duplicate any existing data column or partition column name. Case sensitivity is taken into consideration according to the SQL configuration.
      
      6. This feature also supports the in-memory catalog when Hive support is turned off.
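
      A hedged sketch of the supported syntax against a parquet-backed datasource table, assuming a SparkSession named `spark`; table and column names are illustrative:

      ```scala
      spark.sql("CREATE TABLE t (a INT) USING parquet")
      spark.sql("INSERT INTO t VALUES (1)")
      spark.sql("ALTER TABLE t ADD COLUMNS (b STRING, c INT)")
      // Rows written before the ALTER read the new columns as NULL.
      spark.sql("SELECT a, b, c FROM t").show()
      ```
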
      ## How was this patch tested?
      Add new test cases
      
      Author: Xin Wu <xinwu@us.ibm.com>
      
      Closes #16626 from xwu0226/alter_add_columns.
      4c0ff5f5
    • Zheng RuiFeng's avatar
      [SPARK-20041][DOC] Update docs for NaN handling in approxQuantile · 63f077fb
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      Update docs for NaN handling in approxQuantile.
      
      ## How was this patch tested?
      existing tests.
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #17369 from zhengruifeng/doc_quantiles_nan.
      63f077fb
    • wangzhenhua's avatar
      [SPARK-17080][SQL][FOLLOWUP] Improve documentation, change buildJoin method... · 14865d7f
      wangzhenhua authored
      [SPARK-17080][SQL][FOLLOWUP] Improve documentation, change buildJoin method structure and add a debug log
      
      ## What changes were proposed in this pull request?
      
      1. Improve documentation for class `Cost` and `JoinReorderDP` and method `buildJoin()`.
      2. Change code structure of `buildJoin()` to make the logic clearer.
      3. Add a debug-level log to record information for join reordering, including time cost, the number of items and the number of plans in memo.
      
      ## How was this patch tested?
      
      Not applicable; this changes only documentation, code structure, and a debug log.
      
      Author: wangzhenhua <wangzhenhua@huawei.com>
      
      Closes #17353 from wzhfy/reorderFollow.
      14865d7f
    • jianran.tfh's avatar
      [SPARK-19998][BLOCK MANAGER] Change the exception log to add RDD id of the related the block · 650d03cf
      jianran.tfh authored
      ## What changes were proposed in this pull request?
      
      "java.lang.Exception: Could not compute split, block $blockId not found" doesn't have the rdd id info, the "BlockManager: Removing RDD $id" has only the RDD id, so it couldn't find that the Exception's reason is the Removing; so it's better block not found Exception add RDD id info
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: jianran.tfh <jianran.tfh@taobao.com>
      Author: jianran <tanfanhua1984@163.com>
      
      Closes #17334 from jianran/SPARK-19998.
      650d03cf
    • christopher snow's avatar
      [SPARK-20011][ML][DOCS] Clarify documentation for ALS 'rank' parameter · 7620aed8
      christopher snow authored
      ## What changes were proposed in this pull request?
      
      API documentation and collaborative filtering documentation page changes to clarify inconsistent description of ALS rank parameter.
      
       - [DOCS] was previously: "rank is the number of latent factors in the model."
       - [API] was previously:  "rank - number of features to use"
      
      This change describes rank in both places consistently as:
      
       - "Number of features to use (also referred to as the number of latent factors)"
      
      Author: Chris Snow <chris.snow@uk.ibm.com>
      Author: christopher snow <chsnow123@gmail.com>
      
      Closes #17345 from snowch/SPARK-20011.
      7620aed8
    • Xiao Li's avatar
      [SPARK-20024][SQL][TEST-MAVEN] SessionCatalog reset need to set the current... · d2dcd679
      Xiao Li authored
      [SPARK-20024][SQL][TEST-MAVEN] SessionCatalog reset need to set the current database of ExternalCatalog
      
      ### What changes were proposed in this pull request?
      The SessionCatalog API setCurrentDatabase does not set the current database of the underlying ExternalCatalog. Thus, weird errors can appear in the test suites after we call reset. We need to fix it.
      
      So far, we have not found a direct impact on the other code paths, because we expect all the SessionCatalog APIs to always use the current database value we manage, unless some code paths skip it. Thus, we fix it in the test-only function reset().
      
      ### How was this patch tested?
      Multiple test case failures were observed in mvn builds, and a test case is added in SessionCatalogSuite.
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #17354 from gatorsmile/useDB.
      d2dcd679
  4. Mar 20, 2017
    • Wenchen Fan's avatar
      [SPARK-19949][SQL] unify bad record handling in CSV and JSON · 68d65fae
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Currently JSON and CSV have exactly the same logic for handling bad records; this PR abstracts it and puts it in an upper level to reduce code duplication.
      
      The overall idea is: we make the JSON and CSV parsers throw a BadRecordException, and the upper level, FailureSafeParser, handles bad records according to the parse mode (see the sketch just below).
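
      A schematic sketch of that control flow, under assumed simplified types; the real `FailureSafeParser` works on internal rows and a corrupt-record column:

      ```scala
      // The per-format parser (JSON or CSV) throws this for a record it cannot parse.
      case class BadRecordException(rawRecord: String, cause: Throwable)
        extends Exception(cause)

      // The shared upper layer applies the parse mode uniformly for both formats.
      class FailureSafeParser[T](rawParser: String => T, mode: String) {
        def parse(record: String): Option[Either[String, T]] =
          try Some(Right(rawParser(record))) catch {
            case BadRecordException(raw, cause) => mode match {
              case "PERMISSIVE"    => Some(Left(raw)) // keep the raw record in a special column
              case "DROPMALFORMED" => None            // drop the record
              case _               => throw new RuntimeException(s"Malformed record: $raw", cause)
            }
          }
      }
      ```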
      
      Behavior changes:
      1. With PERMISSIVE mode, if the number of tokens doesn't match the schema, the CSV parser previously treated the record as legal and parsed as many tokens as possible. After this PR, we treat it as an illegal record and put the raw record string in a special column, but we still parse as many tokens as possible.
      2. All logging is removed, as it is not very useful in practice.
      
      ## How was this patch tested?
      
      existing tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      Author: hyukjinkwon <gurwls223@gmail.com>
      Author: Wenchen Fan <cloud0fan@gmail.com>
      
      Closes #17315 from cloud-fan/bad-record2.
      68d65fae
    • Dongjoon Hyun's avatar
      [SPARK-19912][SQL] String literals should be escaped for Hive metastore partition pruning · 21e366ae
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Current `HiveShim`'s `convertFilters` does not escape string literals, which leads to the following correctness issues. This PR aims to return the correct result and also to show a clearer exception message.
      
      **BEFORE**
      
      ```scala
      scala> Seq((1, "p1", "q1"), (2, "p1\" and q=\"q1", "q2")).toDF("a", "p", "q").write.partitionBy("p", "q").saveAsTable("t1")
      
      scala> spark.table("t1").filter($"p" === "p1\" and q=\"q1").select($"a").show
      +---+
      |  a|
      +---+
      +---+
      
      scala> spark.table("t1").filter($"p" === "'\"").select($"a").show
      java.lang.RuntimeException: Caught Hive MetaException attempting to get partition metadata by filter from ...
      ```
      
      **AFTER**
      
      ```scala
      scala> spark.table("t1").filter($"p" === "p1\" and q=\"q1").select($"a").show
      +---+
      |  a|
      +---+
      |  2|
      +---+
      
      scala> spark.table("t1").filter($"p" === "'\"").select($"a").show
      java.lang.UnsupportedOperationException: Partition filter cannot have both `"` and `'` characters
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins test with new test cases.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #17266 from dongjoon-hyun/SPARK-19912.
      21e366ae
    • Michael Allman's avatar
      [SPARK-17204][CORE] Fix replicated off heap storage · 7fa116f8
      Michael Allman authored
      (Jira: https://issues.apache.org/jira/browse/SPARK-17204)
      
      ## What changes were proposed in this pull request?
      
      There are a couple of bugs in the `BlockManager` with respect to support for replicated off-heap storage. First, the locally-stored off-heap byte buffer is disposed of when it is replicated. It should not be. Second, the replica byte buffers are stored as heap byte buffers instead of direct byte buffers even when the storage level memory mode is off-heap. This PR addresses both of these problems.
      
      ## How was this patch tested?
      
      `BlockManagerReplicationSuite` was enhanced to fill in the coverage gaps. It now fails if either of the bugs in this PR exist.
      
      Author: Michael Allman <michael@videoamp.com>
      
      Closes #16499 from mallman/spark-17204-replicated_off_heap_storage.
      7fa116f8
    • Takeshi Yamamuro's avatar
      [SPARK-19980][SQL] Add NULL checks in Bean serializer · 0ec1db54
      Takeshi Yamamuro authored
      ## What changes were proposed in this pull request?
      A Bean serializer in `ExpressionEncoder` could change values when a Bean has NULL fields. A concrete example is as follows:
      ```
      scala> :paste
      class Outer extends Serializable {
        private var cls: Inner = _
        def setCls(c: Inner): Unit = cls = c
        def getCls(): Inner = cls
      }
      
      class Inner extends Serializable {
        private var str: String = _
        def setStr(s: String): Unit = str = s
        def getStr(): String = str
      }
      
      scala> Seq("""{"cls":null}""", """{"cls": {"str":null}}""").toDF().write.text("data")
      scala> val encoder = Encoders.bean(classOf[Outer])
      scala> val schema = encoder.schema
      scala> val df = spark.read.schema(schema).json("data").as[Outer](encoder)
      scala> df.show
      +------+
      |   cls|
      +------+
      |[null]|
      |  null|
      +------+
      
      scala> df.map(x => x)(encoder).show()
      +------+
      |   cls|
      +------+
      |[null]|
      |[null]|     // <-- Value changed
      +------+
      ```
      
      This is because the Bean serializer does not have the NULL-check expressions that the serializer for Scala's product types has. Indeed, this value change does not happen with Scala's product types:
      
      ```
      scala> :paste
      case class Outer(cls: Inner)
      case class Inner(str: String)
      
      scala> val encoder = Encoders.product[Outer]
      scala> val schema = encoder.schema
      scala> val df = spark.read.schema(schema).json("data").as[Outer](encoder)
      scala> df.show
      +------+
      |   cls|
      +------+
      |[null]|
      |  null|
      +------+
      
      scala> df.map(x => x)(encoder).show()
      +------+
      |   cls|
      +------+
      |[null]|
      |  null|
      +------+
      ```
      
      This PR adds the NULL-check expressions to the Bean serializer, in line with the serializer for Scala's product types.
      
      ## How was this patch tested?
      Added tests in `JavaDatasetSuite`.
      
      Author: Takeshi Yamamuro <yamamuro@apache.org>
      
      Closes #17347 from maropu/SPARK-19980.
      0ec1db54
    • wangzhenhua's avatar
      [SPARK-20010][SQL] Sort information is lost after sort merge join · e9c91bad
      wangzhenhua authored
      ## What changes were proposed in this pull request?
      
      After a sort merge join for an inner join, we currently keep only the left key's ordering. However, after an inner join, the right key has the same values and order as the left key, so if we need another sort merge join on the right key, we unnecessarily add a sort, which causes additional cost.
      
      As a more complicated example, consider A join B on A.key = B.key join C on B.key = C.key join D on A.key = D.key. We unnecessarily add a sort on B.key when joining {A, B} with C, and a sort on A.key when joining {A, B, C} with D.
      
      To fix this, we need to propagate all sorted information (equivalent expressions) from bottom up through `outputOrdering` and `SortOrder`.
      
      ## How was this patch tested?
      
      Test cases are added.
      
      Author: wangzhenhua <wangzhenhua@huawei.com>
      
      Closes #17339 from wzhfy/sortEnhance.
      e9c91bad
    • Zheng RuiFeng's avatar
      [SPARK-19573][SQL] Make NaN/null handling consistent in approxQuantile · 10691d36
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      update `StatFunctions.multipleApproxQuantiles` to handle NaN/null
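      
      A minimal sketch of the affected API, assuming `spark.implicits._` in scope; the values are illustrative:
      
      ```scala
      val df = Seq(1.0, 2.0, 3.0, Double.NaN).toDF("x")
      // With the updated handling, NaN/null values are excluded from the calculation.
      val Array(median) = df.stat.approxQuantile("x", Array(0.5), 0.0)
      ```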
      
      ## How was this patch tested?
      existing tests and added tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #16971 from zhengruifeng/quantiles_nan.
      10691d36
    • Tyson Condie's avatar
      [SPARK-19906][SS][DOCS] Documentation describing how to write queries to Kafka · c2d1761a
      Tyson Condie authored
      ## What changes were proposed in this pull request?
      
      Add documentation that describes how to write streaming and batch queries to Kafka.
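
      A short sketch of the streaming-write pattern the new documentation covers, assuming a DataFrame `df` with `key` and `value` columns; broker, topic, and path names are illustrative:

      ```scala
      val query = df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
        .writeStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "host1:9092,host2:9092")
        .option("topic", "topic1")
        .option("checkpointLocation", "/tmp/kafka-sink-checkpoint")
        .start()
      ```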
      
      zsxwing tdas
      
      Please review http://spark.apache.org/contributing.html before opening a pull request.
      
      Author: Tyson Condie <tcondie@gmail.com>
      
      Closes #17246 from tcondie/kafka-write-docs.
      c2d1761a
    • zero323's avatar
      [SPARK-19899][ML] Replace featuresCol with itemsCol in ml.fpm.FPGrowth · bec6b16c
      zero323 authored
      ## What changes were proposed in this pull request?
      
      Replaces `featuresCol` `Param` with `itemsCol`. See [SPARK-19899](https://issues.apache.org/jira/browse/SPARK-19899).
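
      A minimal sketch of the resulting API, assuming a SparkSession named `spark` and `spark.implicits._` in scope:

      ```scala
      import org.apache.spark.ml.fpm.FPGrowth

      val dataset = spark.createDataset(Seq("1 2 5", "1 2 3 5", "1 2"))
        .map(_.split(" "))
        .toDF("items")

      // The column of transactions is now configured via itemsCol, not featuresCol.
      val model = new FPGrowth()
        .setItemsCol("items")
        .setMinSupport(0.5)
        .setMinConfidence(0.6)
        .fit(dataset)

      model.freqItemsets.show()
      model.associationRules.show()
      ```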
      
      ## How was this patch tested?
      
      Manual tests. Existing unit tests.
      
      Author: zero323 <zero323@users.noreply.github.com>
      
      Closes #17321 from zero323/SPARK-19899.
      bec6b16c
    • Dongjoon Hyun's avatar
      [SPARK-19970][SQL] Table owner should be USER instead of PRINCIPAL in kerberized clusters · fc755459
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      In a kerberized hadoop cluster, when Spark creates tables, the owner of each table is filled with the PRINCIPAL string instead of the USER name. This is inconsistent with Hive and causes problems when using [ROLE](https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization) in Hive. We had better fix this.
      
      **BEFORE**
      ```scala
      scala> sql("create table t(a int)").show
      scala> sql("desc formatted t").show(false)
      ...
      |Owner:                      |sparkEXAMPLE.COM                                         |       |
      ```
      
      **AFTER**
      ```scala
      scala> sql("create table t(a int)").show
      scala> sql("desc formatted t").show(false)
      ...
      |Owner:                      |spark                                         |       |
      ```
      
      ## How was this patch tested?
      
      Manually do `create table` and `desc formatted` because this happens in Kerberized clusters.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #17311 from dongjoon-hyun/SPARK-19970.
      fc755459
    • windpiger's avatar
      [SPARK-19990][SQL][TEST-MAVEN] create a temp file for file in test.jar's... · 7ce30e00
      windpiger authored
      [SPARK-19990][SQL][TEST-MAVEN] create a temp file for file in test.jar's resource when run mvn test accross different modules
      
      ## What changes were proposed in this pull request?
      
      After we have merged the `HiveDDLSuite` and `DDLSuite` in [SPARK-19235](https://issues.apache.org/jira/browse/SPARK-19235), we have two subclasses of `DDLSuite`, that is `HiveCatalogedDDLSuite` and `InMemoryCatalogDDLSuite`.
      
      While `DDLSuite` is in the `sql/core` module and `HiveCatalogedDDLSuite` is in the `sql/hive` module, running `HiveCatalogedDDLSuite` with mvn test also runs the tests in its parent class `DDLSuite`; this causes some test cases to fail because they get and use a test file path from the `sql/core` module's `resource`.
      
      The test file path obtained will start with 'jar:', like "jar:file:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.6/sql/core/target/spark-sql_2.11-2.2.0-SNAPSHOT-tests.jar!/test-data/cars.csv", which fails in new Path() in datasource.scala.
      
      This PR fixes this by copying the file from the resource to a temp dir.
      
      ## How was this patch tested?
      N/A
      
      Author: windpiger <songjun@outlook.com>
      
      Closes #17338 from windpiger/fixtestfailemvn.
      7ce30e00
    • Ioana Delaney's avatar
      [SPARK-17791][SQL] Join reordering using star schema detection · 81639115
      Ioana Delaney authored
      ## What changes were proposed in this pull request?
      
      Star schema consists of one or more fact tables referencing a number of dimension tables. In general, queries against star schema are expected to run fast because of the established RI constraints among the tables. This design proposes a join reordering based on natural, generally accepted heuristics for star schema queries:
      - Finds the star join with the largest fact table and places it on the driving arm of the left-deep join. This plan avoids large tables on the inner side, and thus favors hash joins.
      - Applies the most selective dimensions early in the plan to reduce the amount of data flow.
      
      The design document was included in SPARK-17791.
      
      Link to the google doc: [StarSchemaDetection](https://docs.google.com/document/d/1UAfwbm_A6wo7goHlVZfYK99pqDMEZUumi7pubJXETEA/edit?usp=sharing)
      
      ## How was this patch tested?
      
      A new test suite StarJoinSuite.scala was implemented.
      
      Author: Ioana Delaney <ioanamdelaney@gmail.com>
      
      Closes #15363 from ioana-delaney/starJoinReord2.
      81639115
    • Felix Cheung's avatar
      [SPARK-20020][SPARKR][FOLLOWUP] DataFrame checkpoint API fix version tag · f14f81e9
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      doc only change
      
      ## How was this patch tested?
      
      manual
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #17356 from felixcheung/rdfcheckpoint2.
      f14f81e9
    • wangzhenhua's avatar
      [SPARK-19994][SQL] Wrong outputOrdering for right/full outer smj · 965a5abc
      wangzhenhua authored
      ## What changes were proposed in this pull request?
      
      For a right outer join, values of the left key will be filled with nulls when they can't match values of the right key, so the `nullOrdering` of the left key can't be guaranteed. We should output the right key's order instead of the left key's.
      
      For a full outer join, neither the left key nor the right key guarantees `nullOrdering`. We should not output any ordering.
      
      In tests, besides adding three test cases for left/right/full outer sort merge join, this patch also reorganizes code in `PlannerSuite` by putting together tests for `Sort`, and also extracts common logic in Sort tests into a method.
      
      ## How was this patch tested?
      
      Corresponding test cases are added.
      
      Author: wangzhenhua <wangzhenhua@huawei.com>
      Author: Zhenhua Wang <wzh_zju@163.com>
      
      Closes #17331 from wzhfy/wrongOrdering.
      965a5abc
    • Felix Cheung's avatar
      [SPARK-20020][SPARKR] DataFrame checkpoint API · c4059772
      Felix Cheung authored
      ## What changes were proposed in this pull request?
      
      Add checkpoint, setCheckpointDir API to R
      
      ## How was this patch tested?
      
      unit tests, manual tests
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #17351 from felixcheung/rdfcheckpoint.
      c4059772