  1. Nov 19, 2016
    • hyukjinkwon's avatar
      [SPARK-18445][BUILD][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note... · d5b1d5fc
      hyukjinkwon authored
      [SPARK-18445][BUILD][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that`/`'''Note:'''` across Scala/Java API documentation
      
      ## What changes were proposed in this pull request?
      
      It seems that, in Scala/Java, notes in the API documentation are written in several inconsistent forms:
      
      - `Note:`
      - `NOTE:`
      - `Note that`
      - `'''Note:'''`
      - `note`
      
      This PR proposes to fix those to `note` to be consistent.
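
      As a hedged illustration (not taken from this PR's diff), the change amounts to replacing plain `Note:`-style prefixes, which Scaladoc/Javadoc render as ordinary body text, with the `@note` tag, which is rendered as a highlighted note block:

      ```scala
      /** A self-contained example object used only to illustrate the documentation style. */
      object NoteStyleExample {
        /**
         * Returns the reciprocal of `x`.
         *
         * @note `x` must be non-zero. Before this change, such caveats were often written
         *       as "Note:", "NOTE:" or "Note that", which the doc tools render as plain text.
         */
        def reciprocal(x: Double): Double = 1.0 / x
      }
      ```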
      
      **Before**
      
      - Scala
        ![2016-11-17 6 16 39](https://cloud.githubusercontent.com/assets/6477701/20383180/1a7aed8c-acf2-11e6-9611-5eaf6d52c2e0.png)
      
      - Java
        ![2016-11-17 6 14 41](https://cloud.githubusercontent.com/assets/6477701/20383096/c8ffc680-acf1-11e6-914a-33460bf1401d.png)
      
      **After**
      
      - Scala
        ![2016-11-17 6 16 44](https://cloud.githubusercontent.com/assets/6477701/20383167/09940490-acf2-11e6-937a-0d5e1dc2cadf.png)
      
      - Java
        ![2016-11-17 6 13 39](https://cloud.githubusercontent.com/assets/6477701/20383132/e7c2a57e-acf1-11e6-9c47-b849674d4d88.png)
      
      ## How was this patch tested?
      
      The notes were found via
      
      ```bash
      grep -r "NOTE: " . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// NOTE: " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \ # note that this is a regular expression. So actual matches were mostly `org/apache/spark/api/java/functions ...`
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      ```bash
      grep -r "Note that " . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// Note that " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      ```bash
      grep -r "Note: " . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// Note: " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      ```bash
      grep -r "'''Note:'''" . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// '''Note:''' " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      And then fixed them one by one, comparing against the API documentation/access modifiers.
      
      After that, manually tested via `jekyll build`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15889 from HyukjinKwon/SPARK-18437.
      d5b1d5fc
    • Sean Owen's avatar
      [SPARK-18448][CORE] SparkSession should implement java.lang.AutoCloseable like JavaSparkContext · db9fb9ba
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Just adds `close()` + `Closeable` as a synonym for `stop()`. This makes it usable in Java try-with-resources, as suggested by ash211 (`Closeable` extends `AutoCloseable`, BTW).
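
      A minimal usage sketch (assuming Spark 2.1+ where `SparkSession` gains `close()`); the loan-pattern helper below is illustrative, not part of the PR:

      ```scala
      import org.apache.spark.sql.SparkSession

      // Run a body with a SparkSession and always release it, mirroring what Java's
      // try-with-resources does now that SparkSession is Closeable/AutoCloseable.
      def withSession[A](appName: String)(body: SparkSession => A): A = {
        val spark = SparkSession.builder().master("local[*]").appName(appName).getOrCreate()
        try body(spark) finally spark.close()  // close() is a synonym for stop()
      }

      val count = withSession("close-example") { spark => spark.range(100).count() }
      ```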
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15932 from srowen/SPARK-18448.
      db9fb9ba
  2. Nov 18, 2016
    • Shixiong Zhu's avatar
      [SPARK-18497][SS] Make ForeachSink support watermark · 2a40de40
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      The issue in ForeachSink is that the newly created Dataset still uses the old QueryExecution. When `foreachPartition` is called, `QueryExecution.toString` is invoked and then fails because it doesn't know how to plan EventTimeWatermark.
      
      This PR just replaces the QueryExecution with IncrementalExecution to fix the issue.
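
      A hedged sketch (not the PR's test) of the query shape this fix enables: an event-time watermark feeding `ForeachSink`. The source, host, and column names are illustrative.

      ```scala
      import org.apache.spark.sql.{ForeachWriter, Row, SparkSession}
      import org.apache.spark.sql.functions.window

      val spark = SparkSession.builder().master("local[2]").appName("foreach-watermark").getOrCreate()
      import spark.implicits._

      // Treat each socket line as an event-time timestamp and aggregate under a watermark.
      val counts = spark.readStream
        .format("socket").option("host", "localhost").option("port", 9999).load()
        .select($"value".cast("timestamp").as("eventTime"))
        .withWatermark("eventTime", "10 seconds")
        .groupBy(window($"eventTime", "5 seconds"))
        .count()

      // Before this fix, planning the foreach write failed on the EventTimeWatermark node.
      val query = counts.writeStream
        .outputMode("complete")
        .foreach(new ForeachWriter[Row] {
          override def open(partitionId: Long, version: Long): Boolean = true
          override def process(row: Row): Unit = println(row)
          override def close(errorOrNull: Throwable): Unit = ()
        })
        .start()
      ```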
      
      ## How was this patch tested?
      
      `test("foreach with watermark")`.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15934 from zsxwing/SPARK-18497.
      2a40de40
    • Reynold Xin's avatar
      [SPARK-18505][SQL] Simplify AnalyzeColumnCommand · 6f7ff750
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      I'm spending more time at the design & code level for the cost-based optimizer now, and have found a number of issues related to maintainability and compatibility that I would like to address.
      
      This is a small pull request to clean up AnalyzeColumnCommand:
      
      1. Removed warning on duplicated columns. Warnings in log messages are useless since most users that run SQL don't see them.
      2. Removed the nested updateStats function, by just inlining the function.
      3. Renamed a few functions to better reflect what they do.
      4. Removed the factory apply method for ColumnStatStruct. It is a bad pattern to use an apply method that returns an instantiation of a class that is not of the same type (ColumnStatStruct.apply used to return CreateNamedStruct).
      5. Renamed ColumnStatStruct to just AnalyzeColumnCommand.
      6. Added more documentation explaining some of the non-obvious return types and code blocks.
      
      In follow-up pull requests, I'd like to address the following:
      
      1. Get rid of the Map[String, ColumnStat] map, since internally we should be using Attribute to reference columns, rather than strings.
      2. Decouple the fields exposed by ColumnStat and internals of Spark SQL's execution path. Currently the two are coupled because ColumnStat takes in an InternalRow.
      3. Correctness: Remove code path that stores statistics in the catalog using the base64 encoding of the UnsafeRow format, which is not stable across Spark versions.
      4. Clearly document the data representation stored in the catalog for statistics.
      
      ## How was this patch tested?
      Affected test cases have been updated.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #15933 from rxin/SPARK-18505.
      6f7ff750
    • Shixiong Zhu's avatar
      [SPARK-18477][SS] Enable interrupts for HDFS in HDFSMetadataLog · e5f5c29e
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      HDFS `write` may just hang until timeout if some network error happens. It's better to enable interrupts to allow stopping the query fast on HDFS.
      
      This PR just changes the logic to only disable interrupts for local file system, as HADOOP-10622 only happens for local file system.
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15911 from zsxwing/interrupt-on-dfs.
      e5f5c29e
    • hyukjinkwon's avatar
      [SPARK-18422][CORE] Fix wholeTextFiles test to pass on Windows in JavaAPISuite · 40d59ff5
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR fixes the test `wholeTextFiles` in `JavaAPISuite.java`. It was failing due to the different path format on Windows.
      
      For example, the path in `container` was
      
      ```
      C:\projects\spark\target\tmp\1478967560189-0/part-00000
      ```
      
      whereas `new URI(res._1()).getPath()` was as below:
      
      ```
      /C:/projects/spark/target/tmp/1478967560189-0/part-00000
      ```
      
      ## How was this patch tested?
      
      Tests in `JavaAPISuite.java`.
      
      Tested via AppVeyor.
      
      **Before**
      Build: https://ci.appveyor.com/project/spark-test/spark/build/63-JavaAPISuite-1
      Diff: https://github.com/apache/spark/compare/master...spark-test:JavaAPISuite-1
      
      ```
      [info] Test org.apache.spark.JavaAPISuite.wholeTextFiles started
      [error] Test org.apache.spark.JavaAPISuite.wholeTextFiles failed: java.lang.AssertionError: expected:<spark is easy to use.
      [error] > but was:<null>, took 0.578 sec
      [error]     at org.apache.spark.JavaAPISuite.wholeTextFiles(JavaAPISuite.java:1089)
      ...
      ```
      
      **After**
      Build started: [CORE] `org.apache.spark.JavaAPISuite` [![PR-15866](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=198DDA52-F201-4D2B-BE2F-244E0C1725B2&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/198DDA52-F201-4D2B-BE2F-244E0C1725B2)
      Diff: https://github.com/apache/spark/compare/master...spark-test:198DDA52-F201-4D2B-BE2F-244E0C1725B2
      
      ```
      [info] Test org.apache.spark.JavaAPISuite.wholeTextFiles started
      ...
      ```
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15866 from HyukjinKwon/SPARK-18422.
      40d59ff5
    • Andrew Ray's avatar
      [SPARK-18457][SQL] ORC and other columnar formats using HiveShim read all... · 795e9fc9
      Andrew Ray authored
      [SPARK-18457][SQL] ORC and other columnar formats using HiveShim read all columns when doing a simple count
      
      ## What changes were proposed in this pull request?
      
      When reading zero columns (e.g., count(*)) from ORC or any other format that uses HiveShim, actually set the read column list to empty for Hive to use.
      
      ## How was this patch tested?
      
      Query correctness is handled by existing unit tests. I'm happy to add more if anyone can point out some case that is not covered.
      
      Reduction in data read can be verified in the UI when built with a recent version of Hadoop, say:
      ```
      build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.0 -Phive -DskipTests clean package
      ```
      However, the default Hadoop 2.2 that is used for unit tests does not report actual bytes read, only full file sizes (see FileScanRDD.scala line 80). Therefore I don't think there is a good way to add a unit test for this.
      
      I tested with the following setup using the above build options:
      ```
      case class OrcData(intField: Long, stringField: String)
      spark.range(1,1000000).map(i => OrcData(i, s"part-$i")).toDF().write.format("orc").save("orc_test")
      
      sql(
            s"""CREATE EXTERNAL TABLE orc_test(
               |  intField LONG,
               |  stringField STRING
               |)
               |STORED AS ORC
               |LOCATION '${System.getProperty("user.dir") + "/orc_test"}'
             """.stripMargin)
      ```
      
      ## Results
      
      query | Spark 2.0.2 | this PR
      ---|---|---
      `sql("select count(*) from orc_test").collect`|4.4 MB|199.4 KB
      `sql("select intField from orc_test").collect`|743.4 KB|743.4 KB
      `sql("select * from orc_test").collect`|4.4 MB|4.4 MB
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #15898 from aray/sql-orc-no-col.
      795e9fc9
    • Tyson Condie's avatar
      [SPARK-18187][SQL] CompactibleFileStreamLog should not use "compactInterval"... · 51baca22
      Tyson Condie authored
      [SPARK-18187][SQL] CompactibleFileStreamLog should not use "compactInterval" directly with the user setting.
      
      ## What changes were proposed in this pull request?
      CompactibleFileStreamLog relies on "compactInterval" to detect a compaction batch. If "compactInterval" is reset by the user, CompactibleFileStreamLog will return a wrong answer, resulting in data loss. This PR provides a way to check the validity of 'compactInterval' and to calculate an appropriate value.
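
      A heavily hedged sketch of the idea only (the names and derivation logic are assumptions, not the PR's code): instead of trusting the user-supplied interval blindly, derive an interval that stays consistent with compactions already present in the log.

      ```scala
      // Given the latest batch id that was already written as a compaction, pick an
      // interval under which that batch is still recognized as a compaction batch.
      def deriveCompactInterval(proposed: Int, latestCompactBatchId: Option[Long]): Int = {
        require(proposed > 0, "compactInterval must be positive")
        latestCompactBatchId match {
          case Some(id) if (id + 1) % proposed != 0 =>
            // smallest interval >= 2 that keeps `id` on a compaction boundary
            (2 to (id + 1).toInt).find(i => (id + 1) % i == 0).getOrElse(proposed)
          case _ => proposed
        }
      }
      ```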
      
      ## How was this patch tested?
      When restarting a stream, we change 'spark.sql.streaming.fileSource.log.compactInterval' to a value different from the former one.
      
      The primary solution to this issue was given by uncleGen.
      Added extensions include an additional metadata field in the OffsetSeq and CompactibleFileStreamLog APIs. cc zsxwing
      
      Author: Tyson Condie <tcondie@gmail.com>
      Author: genmao.ygm <genmao.ygm@genmaoygmdeMacBook-Air.local>
      
      Closes #15852 from tcondie/spark-18187.
      51baca22
  3. Nov 17, 2016
    • Josh Rosen's avatar
      [SPARK-18462] Fix ClassCastException in SparkListenerDriverAccumUpdates event · d9dd979d
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
      This patch fixes a `ClassCastException: java.lang.Integer cannot be cast to java.lang.Long` error which could occur in the HistoryServer while trying to process a deserialized `SparkListenerDriverAccumUpdates` event.
      
      The problem stems from how `jackson-module-scala` handles primitive type parameters (see https://github.com/FasterXML/jackson-module-scala/wiki/FAQ#deserializing-optionint-and-other-primitive-challenges for more details). This was causing a problem where our code expected a field to be deserialized as a `(Long, Long)` tuple but we got an `(Int, Int)` tuple instead.
      
      This patch hacks around this issue by registering a custom `Converter` with Jackson in order to deserialize the tuples as `(Object, Object)` and perform the appropriate casting.
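
      A hedged, standalone illustration of the Jackson behavior described above (assuming `jackson-module-scala` is on the classpath; this is not the HistoryServer code):

      ```scala
      import com.fasterxml.jackson.databind.ObjectMapper
      import com.fasterxml.jackson.module.scala.DefaultScalaModule

      val mapper = new ObjectMapper().registerModule(DefaultScalaModule)

      // Because of type erasure, Jackson cannot see the Long type parameters here and
      // deserializes the small numbers as boxed java.lang.Integer values.
      val parsed = mapper.readValue("[1, 2]", classOf[(Long, Long)])

      // The tuple is typed (Long, Long) at compile time, but unboxing the Integer inside
      // can throw "java.lang.Integer cannot be cast to java.lang.Long" at use time.
      val first: Long = parsed._1
      ```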
      
      ## How was this patch tested?
      
      New regression tests in `SQLListenerSuite`.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15922 from JoshRosen/SPARK-18462.
      d9dd979d
    • Wenchen Fan's avatar
      [SPARK-18360][SQL] default table path of tables in default database should... · ce13c267
      Wenchen Fan authored
      [SPARK-18360][SQL] default table path of tables in default database should depend on the location of default database
      
      ## What changes were proposed in this pull request?
      
      The current semantics of the warehouse config:
      
      1. it's a static config, which means you can't change it once your spark application is launched.
      2. Once a database is created, its location won't change even the warehouse path config is changed.
      3. The default database is a special case: although its location is fixed, the locations of tables created in it are not. If a Spark app starts with warehouse path B (while the location of the default database is A) and users create a table `tbl` in the default database, its location will be `B/tbl` instead of `A/tbl`. If users then change the warehouse path config to C and create another table `tbl2`, its location will still be `B/tbl2` instead of `C/tbl2`.
      
      Rule 3 doesn't make sense, and I think we made it by mistake rather than intentionally. Data source tables don't follow rule 3 and treat the default database like a normal one.
      
      This PR fixes hive serde tables to make it consistent with data source tables.
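
      A minimal sketch of the behavior described above; the paths and table name are made up for illustration.

      ```scala
      import org.apache.spark.sql.SparkSession

      // The default database was created earlier under /tmp/warehouseA; the app now
      // starts with the warehouse config pointing at /tmp/warehouseB.
      val spark = SparkSession.builder()
        .master("local[*]")
        .config("spark.sql.warehouse.dir", "/tmp/warehouseB")
        .enableHiveSupport()
        .getOrCreate()

      spark.sql("CREATE TABLE tbl (i INT) STORED AS PARQUET")
      // Before this fix: the Hive serde table lands under /tmp/warehouseB/tbl.
      // After this fix:  it follows the default database's location, /tmp/warehouseA/tbl,
      //                  consistent with how data source tables already behave.
      ```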
      
      ## How was this patch tested?
      
      HiveSparkSubmitSuite
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15812 from cloud-fan/default-db.
      ce13c267
    • root's avatar
      [SPARK-18490][SQL] duplication nodename extrainfo for ShuffleExchange · b0aa1aa1
      root authored
      ## What changes were proposed in this pull request?
      
      In ShuffleExchange, the nodeName's extraInfo is the same whether exchangeCoordinator.isEstimated is true or false.

      This PR merges the two situations.
      
      Author: root <root@iZbp1gsnrlfzjxh82cz80vZ.(none)>
      
      Closes #15920 from windpiger/DupNodeNameShuffleExchange.
      b0aa1aa1
    • Zheng RuiFeng's avatar
      [SPARK-18480][DOCS] Fix wrong links for ML guide docs · cdaf4ce9
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      1. There are two `[Graph.partitionBy]` links in `graphx-programming-guide.md`; the first one had no effect.
      2. `DataFrame`, `Transformer`, `Pipeline` and `Parameter` in `ml-pipeline.md` were linked to `ml-guide.html` by mistake.
      3. `PythonMLLibAPI` in `mllib-linear-methods.md` was not accessible, because class `PythonMLLibAPI` is private.
      4. Other link updates.
      ## How was this patch tested?
       manual tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #15912 from zhengruifeng/md_fix.
      cdaf4ce9
    • VinceShieh's avatar
      [SPARK-17462][MLLIB]use VersionUtils to parse Spark version strings · de77c677
      VinceShieh authored
      ## What changes were proposed in this pull request?
      
      Several places in MLlib use custom regexes or other approaches to parse Spark versions.
      Those should be fixed to use the VersionUtils. This PR replaces custom regexes with
      VersionUtils to get Spark version numbers.
      ## How was this patch tested?
      
      Existing tests.
      
      Signed-off-by: VinceShieh <vincent.xie@intel.com>
      
      Author: VinceShieh <vincent.xie@intel.com>
      
      Closes #15055 from VinceShieh/SPARK-17462.
      de77c677
    • anabranch's avatar
      [SPARK-18365][DOCS] Improve Sample Method Documentation · 49b6f456
      anabranch authored
      ## What changes were proposed in this pull request?
      
      I found the documentation for the sample method to be confusing; this PR adds more clarification across all languages.
      
      - [x] Scala
      - [x] Python
      - [x] R
      - [x] RDD Scala
      - [ ] RDD Python with SEED
      - [X] RDD Java
      - [x] RDD Java with SEED
      - [x] RDD Python
      
      ## How was this patch tested?
      
      NA
      
      Author: anabranch <wac.chambers@gmail.com>
      Author: Bill Chambers <bill@databricks.com>
      
      Closes #15815 from anabranch/SPARK-18365.
      49b6f456
    • Weiqing Yang's avatar
      [YARN][DOC] Remove non-Yarn specific configurations from running-on-yarn.md · a3cac7bd
      Weiqing Yang authored
      ## What changes were proposed in this pull request?
      
      Remove `spark.driver.memory`, `spark.executor.memory`, `spark.driver.cores`, and `spark.executor.cores` from `running-on-yarn.md` as they are not Yarn-specific, and they are also defined in `configuration.md`.
      
      ## How was this patch tested?
      Build passed & manual check.
      
      Author: Weiqing Yang <yangweiqing001@gmail.com>
      
      Closes #15869 from weiqingy/yarnDoc.
      a3cac7bd
    • Wenchen Fan's avatar
      [SPARK-18464][SQL] support old table which doesn't store schema in metastore · 07b3f045
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Before Spark 2.1, users could create an external data source table without a schema, and we would infer the table schema at runtime. In Spark 2.1, we decided to infer the schema when the table is created, so that we don't need to infer it again and again at runtime.

      This is a good improvement, but we should still respect and support old tables which don't store the table schema in the metastore.
      
      ## How was this patch tested?
      
      regression test.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15900 from cloud-fan/hive-catalog.
      07b3f045
  4. Nov 16, 2016
    • Takuya UESHIN's avatar
      [SPARK-18442][SQL] Fix nullability of WrapOption. · 170eeb34
      Takuya UESHIN authored
      ## What changes were proposed in this pull request?
      
      The nullability of `WrapOption` should be `false`.
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #15887 from ueshin/issues/SPARK-18442.
      170eeb34
    • Artur Sukhenko's avatar
      [YARN][DOC] Increasing NodeManager's heap size with External Shuffle Service · 55589987
      Artur Sukhenko authored
      ## What changes were proposed in this pull request?
      
      Suggest that users increase the `NodeManager`'s heap size if the `External Shuffle Service` is enabled, as the
      `NM` can spend a lot of time doing GC, making shuffle operations a bottleneck due to increased `Shuffle Read blocked time`.
      Also, because of GC the `NodeManager` can use an enormous amount of CPU and cluster performance will suffer.
      I have seen the NodeManager using 5-13 GB of RAM and up to 2700% CPU with the `spark_shuffle` service on.
      
      ## How was this patch tested?
      
      #### Added step 5:
      ![shuffle_service](https://cloud.githubusercontent.com/assets/15244468/20355499/2fec0fde-ac2a-11e6-8f8b-1c80daf71be1.png)
      
      Author: Artur Sukhenko <artur.sukhenko@gmail.com>
      
      Closes #15906 from Devian-ua/nmHeapSize.
      55589987
    • Cheng Lian's avatar
      [SPARK-18186] Migrate HiveUDAFFunction to TypedImperativeAggregate for partial aggregation support · 2ca8ae9a
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      While being evaluated in Spark SQL, Hive UDAFs don't support partial aggregation. This PR migrates `HiveUDAFFunction`s to `TypedImperativeAggregate`, which already provides partial aggregation support for aggregate functions that may use arbitrary Java objects as aggregation states.
      
      The following snippet shows the effect of this PR:
      
      ```scala
      import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMax
      sql(s"CREATE FUNCTION hive_max AS '${classOf[GenericUDAFMax].getName}'")
      
      spark.range(100).createOrReplaceTempView("t")
      
      // A query using both Spark SQL native `max` and Hive `max`
      sql(s"SELECT max(id), hive_max(id) FROM t").explain()
      ```
      
      Before this PR:
      
      ```
      == Physical Plan ==
      SortAggregate(key=[], functions=[max(id#1L), default.hive_max(default.hive_max, HiveFunctionWrapper(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMax,org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMax7475f57e), id#1L, false, 0, 0)])
      +- Exchange SinglePartition
         +- *Range (0, 100, step=1, splits=Some(1))
      ```
      
      After this PR:
      
      ```
      == Physical Plan ==
      SortAggregate(key=[], functions=[max(id#1L), default.hive_max(default.hive_max, HiveFunctionWrapper(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMax,org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMax5e18a6a7), id#1L, false, 0, 0)])
      +- Exchange SinglePartition
         +- SortAggregate(key=[], functions=[partial_max(id#1L), partial_default.hive_max(default.hive_max, HiveFunctionWrapper(org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMax,org.apache.hadoop.hive.ql.udf.generic.GenericUDAFMax5e18a6a7), id#1L, false, 0, 0)])
            +- *Range (0, 100, step=1, splits=Some(1))
      ```
      
      The tricky part of the PR is mostly about updating and passing around aggregation states of `HiveUDAFFunction`s since the aggregation state of a Hive UDAF may appear in three different forms. Let's take a look at the testing `MockUDAF` added in this PR as an example. This UDAF computes the count of non-null values together with the count of nulls of a given column. Its aggregation state may appear in the following forms at different times:
      
      1. A `MockUDAFBuffer`, which is a concrete subclass of `GenericUDAFEvaluator.AggregationBuffer`
      
         The form used by Hive UDAF API. This form is required by the following scenarios:
      
         - Calling `GenericUDAFEvaluator.iterate()` to update an existing aggregation state with new input values.
         - Calling `GenericUDAFEvaluator.terminate()` to get the final aggregated value from an existing aggregation state.
         - Calling `GenericUDAFEvaluator.merge()` to merge other aggregation states into an existing aggregation state.
      
           The existing aggregation state to be updated must be in this form.
      
         Conversions:
      
         - To form 2:
      
           `GenericUDAFEvaluator.terminatePartial()`
      
         - To form 3:
      
           Convert to form 2 first, and then to 3.
      
      2. An `Object[]` array containing two `java.lang.Long` values.
      
         The form used to interact with Hive's `ObjectInspector`s. This form is required by the following scenarios:
      
         - Calling `GenericUDAFEvaluator.terminatePartial()` to convert an existing aggregation state in form 1 to form 2.
         - Calling `GenericUDAFEvaluator.merge()` to merge other aggregation states into an existing aggregation state.
      
           The input aggregation state must be in this form.
      
         Conversions:
      
         - To form 1:
      
           No direct method. Have to create an empty `AggregationBuffer` and merge it into the empty buffer.
      
         - To form 3:
      
           `unwrapperFor()`/`unwrap()` method of `HiveInspectors`
      
      3. The byte array that holds data of an `UnsafeRow` with two `LongType` fields.
      
         The form used by Spark SQL to shuffle partial aggregation results. This form is required because `TypedImperativeAggregate` always asks its subclasses to serialize their aggregation states into a byte array.
      
         Conversions:
      
         - To form 1:
      
           Convert to form 2 first, and then to 1.
      
         - To form 2:
      
           `wrapperFor()`/`wrap()` method of `HiveInspectors`
      
      Here're some micro-benchmark results produced by the most recent master and this PR branch.
      
      Master:
      
      ```
      Java HotSpot(TM) 64-Bit Server VM 1.8.0_92-b14 on Mac OS X 10.10.5
      Intel(R) Core(TM) i7-4960HQ CPU  2.60GHz
      
      hive udaf vs spark af:                   Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      w/o groupBy                                    339 /  372          3.1         323.2       1.0X
      w/ groupBy                                     503 /  529          2.1         479.7       0.7X
      ```
      
      This PR:
      
      ```
      Java HotSpot(TM) 64-Bit Server VM 1.8.0_92-b14 on Mac OS X 10.10.5
      Intel(R) Core(TM) i7-4960HQ CPU  2.60GHz
      
      hive udaf vs spark af:                   Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      w/o groupBy                                    116 /  126          9.0         110.8       1.0X
      w/ groupBy                                     151 /  159          6.9         144.0       0.8X
      ```
      
      Benchmark code snippet:
      
      ```scala
        test("Hive UDAF benchmark") {
          val N = 1 << 20
      
          sparkSession.sql(s"CREATE TEMPORARY FUNCTION hive_max AS '${classOf[GenericUDAFMax].getName}'")
      
          val benchmark = new Benchmark(
            name = "hive udaf vs spark af",
            valuesPerIteration = N,
            minNumIters = 5,
            warmupTime = 5.seconds,
            minTime = 5.seconds,
            outputPerIteration = true
          )
      
          benchmark.addCase("w/o groupBy") { _ =>
            sparkSession.range(N).agg("id" -> "hive_max").collect()
          }
      
          benchmark.addCase("w/ groupBy") { _ =>
            sparkSession.range(N).groupBy($"id" % 10).agg("id" -> "hive_max").collect()
          }
      
          benchmark.run()
      
          sparkSession.sql(s"DROP TEMPORARY FUNCTION IF EXISTS hive_max")
        }
      ```
      
      ## How was this patch tested?
      
      New test suite `HiveUDAFSuite` is added.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #15703 from liancheng/partial-agg-hive-udaf.
      2ca8ae9a
    • Holden Karau's avatar
      [SPARK-1267][SPARK-18129] Allow PySpark to be pip installed · a36a76ac
      Holden Karau authored
      ## What changes were proposed in this pull request?
      
      This PR aims to provide a pip installable PySpark package. This does a bunch of work to copy the jars over and package them with the Python code (to prevent challenges from trying to use different versions of the Python code with different versions of the JAR). It does not currently publish to PyPI but that is the natural follow up (SPARK-18129).
      
      Done:
      - pip installable on conda [manual tested]
      - setup.py installed on a non-pip managed system (RHEL) with YARN [manual tested]
      - Automated testing of this (virtualenv)
      - packaging and signing with release-build*
      
      Possible follow up work:
      - release-build update to publish to PyPI (SPARK-18128)
      - figure out who owns the pyspark package name on prod PyPI (is it someone within the project, or should we ask PyPI, or should we choose a different name to publish with like ApachePySpark?)
      - Windows support and or testing ( SPARK-18136 )
      - investigate details of wheel caching and see if we can avoid cleaning the wheel cache during our test
      - consider how we want to number our dev/snapshot versions
      
      Explicitly out of scope:
      - Using pip installed PySpark to start a standalone cluster
      - Using pip installed PySpark for non-Python Spark programs
      
      *I've done some work to test release-build locally but as a non-committer I've just done local testing.
      ## How was this patch tested?
      
      Automated testing with virtualenv, manual testing with conda, a system wide install, and YARN integration.
      
      release-build changes tested locally as a non-committer (no testing of upload artifacts to Apache staging websites)
      
      Author: Holden Karau <holden@us.ibm.com>
      Author: Juliet Hougland <juliet@cloudera.com>
      Author: Juliet Hougland <not@myemail.com>
      
      Closes #15659 from holdenk/SPARK-1267-pip-install-pyspark.
      a36a76ac
    • Tathagata Das's avatar
      [SPARK-18461][DOCS][STRUCTUREDSTREAMING] Added more information about monitoring streaming queries · bb6cdfd9
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      <img width="941" alt="screen shot 2016-11-15 at 6 27 32 pm" src="https://cloud.githubusercontent.com/assets/663212/20332521/4190b858-ab61-11e6-93a6-4bdc05105ed9.png">
      <img width="940" alt="screen shot 2016-11-15 at 6 27 45 pm" src="https://cloud.githubusercontent.com/assets/663212/20332525/44a0d01e-ab61-11e6-8668-47f925490d4f.png">
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #15897 from tdas/SPARK-18461.
      bb6cdfd9
    • Tathagata Das's avatar
      [SPARK-18459][SPARK-18460][STRUCTUREDSTREAMING] Rename triggerId to batchId... · 0048ce7c
      Tathagata Das authored
      [SPARK-18459][SPARK-18460][STRUCTUREDSTREAMING] Rename triggerId to batchId and add triggerDetails to json in StreamingQueryStatus
      
      ## What changes were proposed in this pull request?
      
      SPARK-18459: triggerId seems like a number that should increase with each trigger, whether or not there is data in it. However, triggerId actually increases only when there is a batch of data in a trigger. So it's better to rename it to batchId.
      
      SPARK-18460: triggerDetails was missing from json representation. Fixed it.
      
      ## How was this patch tested?
      Updated existing unit tests.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #15895 from tdas/SPARK-18459.
      0048ce7c
    • gatorsmile's avatar
      [SPARK-18415][SQL] Weird Plan Output when CTE used in RunnableCommand · 608ecc51
      gatorsmile authored
      ### What changes were proposed in this pull request?
      Currently, when CTE is used in RunnableCommand, the Analyzer does not replace the logical node `With`. The child plan of RunnableCommand is not resolved. Thus, the output of the `With` plan node looks very confusing.
      For example,
      ```
      sql(
        """
          |CREATE VIEW cte_view AS
          |WITH w AS (SELECT 1 AS n), cte1 (select 2), cte2 as (select 3)
          |SELECT n FROM w
        """.stripMargin).explain()
      ```
      The output is like
      ```
      ExecutedCommand
         +- CreateViewCommand `cte_view`, WITH w AS (SELECT 1 AS n), cte1 (select 2), cte2 as (select 3)
      SELECT n FROM w, false, false, PersistedView
               +- 'With [(w,SubqueryAlias w
      +- Project [1 AS n#16]
         +- OneRowRelation$
      ), (cte1,'SubqueryAlias cte1
      +- 'Project [unresolvedalias(2, None)]
         +- OneRowRelation$
      ), (cte2,'SubqueryAlias cte2
      +- 'Project [unresolvedalias(3, None)]
         +- OneRowRelation$
      )]
                  +- 'Project ['n]
                     +- 'UnresolvedRelation `w`
      ```
      After the fix, the output is as shown below.
      ```
      ExecutedCommand
         +- CreateViewCommand `cte_view`, WITH w AS (SELECT 1 AS n), cte1 (select 2), cte2 as (select 3)
      SELECT n FROM w, false, false, PersistedView
               +- CTE [w, cte1, cte2]
                  :  :- SubqueryAlias w
                  :  :  +- Project [1 AS n#16]
                  :  :     +- OneRowRelation$
                  :  :- 'SubqueryAlias cte1
                  :  :  +- 'Project [unresolvedalias(2, None)]
                  :  :     +- OneRowRelation$
                  :  +- 'SubqueryAlias cte2
                  :     +- 'Project [unresolvedalias(3, None)]
                  :        +- OneRowRelation$
                  +- 'Project ['n]
                     +- 'UnresolvedRelation `w`
      ```
      
      BTW, this PR also fixes the output of the view type.
      
      ### How was this patch tested?
      Manual
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15854 from gatorsmile/cteName.
      608ecc51
    • Xianyang Liu's avatar
      [SPARK-18420][BUILD] Fix the errors caused by lint check in Java · 7569cf6c
      Xianyang Liu authored
      ## What changes were proposed in this pull request?
      
      Small fix for the errors reported by the Java lint check:

      - Remove unused objects and unused imports (`UnusedImports`).
      - Add comments around the `finalize` method of `NioBufferedFileInputStream` to turn off checkstyle.
      - Split lines longer than 100 characters into two lines.
      
      ## How was this patch tested?
      Travis CI.
      ```
      $ build/mvn -T 4 -q -DskipTests -Pyarn -Phadoop-2.3 -Pkinesis-asl -Phive -Phive-thriftserver install
      $ dev/lint-java
      ```
      Before:
      ```
      Checkstyle checks failed at following occurrences:
      [ERROR] src/main/java/org/apache/spark/network/util/TransportConf.java:[21,8] (imports) UnusedImports: Unused import - org.apache.commons.crypto.cipher.CryptoCipherFactory.
      [ERROR] src/test/java/org/apache/spark/network/sasl/SparkSaslSuite.java:[516,5] (modifier) RedundantModifier: Redundant 'public' modifier.
      [ERROR] src/main/java/org/apache/spark/io/NioBufferedFileInputStream.java:[133] (coding) NoFinalizer: Avoid using finalizer method.
      [ERROR] src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeMapData.java:[71] (sizes) LineLength: Line is longer than 100 characters (found 113).
      [ERROR] src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeArrayData.java:[112] (sizes) LineLength: Line is longer than 100 characters (found 110).
      [ERROR] src/test/java/org/apache/spark/sql/catalyst/expressions/HiveHasherSuite.java:[31,17] (modifier) ModifierOrder: 'static' modifier out of order with the JLS suggestions.
      [ERROR]src/main/java/org/apache/spark/examples/ml/JavaLogisticRegressionWithElasticNetExample.java:[64] (sizes) LineLength: Line is longer than 100 characters (found 103).
      [ERROR] src/main/java/org/apache/spark/examples/ml/JavaInteractionExample.java:[22,8] (imports) UnusedImports: Unused import - org.apache.spark.ml.linalg.Vectors.
      [ERROR] src/main/java/org/apache/spark/examples/ml/JavaInteractionExample.java:[51] (regexp) RegexpSingleline: No trailing whitespace allowed.
      ```
      
      After:
      ```
      $ build/mvn -T 4 -q -DskipTests -Pyarn -Phadoop-2.3 -Pkinesis-asl -Phive -Phive-thriftserver install
      $ dev/lint-java
      Using `mvn` from path: /home/travis/build/ConeyLiu/spark/build/apache-maven-3.3.9/bin/mvn
      Checkstyle checks passed.
      ```
      
      Author: Xianyang Liu <xyliu0530@icloud.com>
      
      Closes #15865 from ConeyLiu/master.
      7569cf6c
    • Zheng RuiFeng's avatar
      [SPARK-18446][ML][DOCS] Add links to API docs for ML algos · a75e3fe9
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      Add links to API docs for ML algos
      ## How was this patch tested?
      Manual checking for the API links
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #15890 from zhengruifeng/algo_link.
      a75e3fe9
    • Zheng RuiFeng's avatar
      [SPARK-18434][ML] Add missing ParamValidations for ML algos · c68f1a38
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      Add missing ParamValidations for ML algos
      ## How was this patch tested?
      existing tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #15881 from zhengruifeng/arg_checking.
      c68f1a38
    • Weiqing Yang's avatar
      [MINOR][DOC] Fix typos in the 'configuration', 'monitoring' and... · 241e04bc
      Weiqing Yang authored
      [MINOR][DOC] Fix typos in the 'configuration', 'monitoring' and 'sql-programming-guide' documentation
      
      ## What changes were proposed in this pull request?
      
      Fix typos in the 'configuration', 'monitoring' and 'sql-programming-guide' documentation.
      
      ## How was this patch tested?
      Manually.
      
      Author: Weiqing Yang <yangweiqing001@gmail.com>
      
      Closes #15886 from weiqingy/fixTypo.
      241e04bc
    • uncleGen's avatar
      [SPARK-18410][STREAMING] Add structured kafka example · e6145772
      uncleGen authored
      ## What changes were proposed in this pull request?
      
      This PR provides Structured Streaming Kafka word count examples.
      
      ## How was this patch tested?
      
      Author: uncleGen <hustyugm@gmail.com>
      
      Closes #15849 from uncleGen/SPARK-18410.
      e6145772
    • Sean Owen's avatar
      [SPARK-18400][STREAMING] NPE when resharding Kinesis Stream · 43a26899
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Avoid NPE in KinesisRecordProcessor when shutdown happens without successful init
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15882 from srowen/SPARK-18400.
      43a26899
    • Dongjoon Hyun's avatar
      [SPARK-18433][SQL] Improve DataSource option keys to be more case-insensitive · 74f5c217
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR aims to improve DataSource option keys to be more case-insensitive
      
      DataSource partially uses CaseInsensitiveMap in its code path. For example, the following fails to find the url option.
      
      ```scala
      val df = spark.createDataFrame(sparkContext.parallelize(arr2x2), schema2)
      df.write.format("jdbc")
          .option("UrL", url1)
          .option("dbtable", "TEST.SAVETEST")
          .options(properties.asScala)
          .save()
      ```
      
      This PR makes DataSource options use CaseInsensitiveMap internally and also makes DataSource use CaseInsensitiveMap generally, except for `InMemoryFileIndex` and `InsertIntoHadoopFsRelationCommand`. We cannot pass them a CaseInsensitiveMap because they create new case-sensitive HadoopConfs by calling newHadoopConfWithOptions(options) internally.
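
      A minimal sketch of the idea behind the case-insensitive lookup (simplified; this is not Spark's internal `CaseInsensitiveMap`):

      ```scala
      // Keys are normalized to lower case on both construction and lookup, so
      // .option("UrL", ...) and .option("url", ...) refer to the same entry.
      class CaseInsensitiveOptions(options: Map[String, String]) {
        private val normalized = options.map { case (k, v) => k.toLowerCase -> v }
        def get(key: String): Option[String] = normalized.get(key.toLowerCase)
      }

      val opts = new CaseInsensitiveOptions(Map("UrL" -> "jdbc:h2:mem:testdb"))
      assert(opts.get("url").contains("jdbc:h2:mem:testdb"))
      ```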
      
      ## How was this patch tested?
      
      Pass the Jenkins test with newly added test cases.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #15884 from dongjoon-hyun/SPARK-18433.
      74f5c217
    • Yanbo Liang's avatar
      [SPARK-18438][SPARKR][ML] spark.mlp should support RFormula. · 95eb06bd
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      ```spark.mlp``` should support ```RFormula``` like other ML algorithm wrappers.
      BTW, I did some cleanup and improvement for ```spark.mlp```.
      
      ## How was this patch tested?
      Unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15883 from yanboliang/spark-18438.
      95eb06bd
  5. Nov 15, 2016
    • Wenchen Fan's avatar
      [SPARK-18377][SQL] warehouse path should be a static conf · 4ac9759f
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      It's weird that every session can set its own warehouse path at runtime; we should forbid that and make it a static conf.
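
      A hedged illustration of the intended behavior (the exact error handling is assumed): the warehouse location is fixed when the SparkSession is first created and can no longer be changed per session at runtime.

      ```scala
      import org.apache.spark.sql.SparkSession

      // Set the warehouse path up front; after this change it is a static conf.
      val spark = SparkSession.builder()
        .master("local[*]")
        .config("spark.sql.warehouse.dir", "/tmp/my-warehouse")
        .getOrCreate()

      // Attempting to modify it afterwards is expected to be rejected, since static
      // SQL configurations cannot be changed at runtime:
      //   spark.conf.set("spark.sql.warehouse.dir", "/tmp/other-warehouse")  // throws
      ```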
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15825 from cloud-fan/warehouse.
      4ac9759f
    • Herman van Hovell's avatar
      [SPARK-18300][SQL] Fix scala 2.10 build for FoldablePropagation · 4b35d13b
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      Commit https://github.com/apache/spark/commit/f14ae4900ad0ed66ba36108b7792d56cd6767a69 broke the Scala 2.10 build. This PR fixes it by simplifying the pattern match used.
      
      ## How was this patch tested?
      Tested building manually. Ran `build/sbt -Dscala-2.10 -Pscala-2.10 package`.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15891 from hvanhovell/SPARK-18300-scala-2.10.
      4b35d13b
    • Dongjoon Hyun's avatar
      [SPARK-17732][SQL] ALTER TABLE DROP PARTITION should support comparators · 3ce057d0
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR aims to support `comparators`, e.g. '<', '<=', '>', '>=', again in Apache Spark 2.0 for backward compatibility.
      
      **Spark 1.6**
      
      ``` scala
      scala> sql("CREATE TABLE sales(id INT) PARTITIONED BY (country STRING, quarter STRING)")
      res0: org.apache.spark.sql.DataFrame = [result: string]
      
      scala> sql("ALTER TABLE sales DROP PARTITION (country < 'KR')")
      res1: org.apache.spark.sql.DataFrame = [result: string]
      ```
      
      **Spark 2.0**
      
      ``` scala
      scala> sql("CREATE TABLE sales(id INT) PARTITIONED BY (country STRING, quarter STRING)")
      res0: org.apache.spark.sql.DataFrame = []
      
      scala> sql("ALTER TABLE sales DROP PARTITION (country < 'KR')")
      org.apache.spark.sql.catalyst.parser.ParseException:
      mismatched input '<' expecting {')', ','}(line 1, pos 42)
      ```
      
      After this PR, it's supported.
      
      ## How was this patch tested?
      
      Pass the Jenkins test with a newly added testcase.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #15704 from dongjoon-hyun/SPARK-17732-2.
      3ce057d0
    • hyukjinkwon's avatar
      [SPARK-18423][STREAMING] ReceiverTracker should close checkpoint dir when... · 503378f1
      hyukjinkwon authored
      [SPARK-18423][STREAMING] ReceiverTracker should close checkpoint dir when stopped even if it was not started
      
      ## What changes were proposed in this pull request?
      
      Several tests are failing on Windows due to the failure to remove the checkpoint dir between tests.

      This is caused by a file that is not closed in `ReceiverTracker`. When the tracker is not started, it does not close the file even if `stop()` is called.
      
      ```
      Test org.apache.spark.streaming.JavaAPISuite.testCheckpointMasterRecovery started
      Test org.apache.spark.streaming.JavaAPISuite.testCheckpointMasterRecovery failed: java.io.IOException: Failed to delete: C:\projects\spark\target\tmp\1478983663710-0, took 3.828 sec
          at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:1010)
          at org.apache.spark.util.Utils.deleteRecursively(Utils.scala)
          at org.apache.spark.streaming.JavaAPISuite.testCheckpointMasterRecovery(JavaAPISuite.java:1809)
          ...
      ```
      
      ```
      - mapWithState - basic operations with simple API (7 seconds, 640 milliseconds)
      Exception encountered when attempting to run a suite with class name: org.apache.spark.streaming.MapWithStateSuite *** ABORTED *** (12 seconds, 688 milliseconds)
        java.io.IOException: Failed to delete: C:\projects\spark\streaming\checkpoint\spark-b8486e2b-6468-4e6f-bb24-88277d2c033c
        ...
      ```
      
      ## How was this patch tested?
      
      Tests in `JavaAPISuite` and `MapWithStateSuite`.
      
      Manually tested via AppVeyor:
      
      **Before**
      
      - `org.apache.spark.streaming.JavaAPISuite`
        Build: https://ci.appveyor.com/project/spark-test/spark/build/71-MapWithStateSuite-1
        Diff: https://github.com/apache/spark/compare/master...spark-test:188c828e682ec45b75d15c3dfc782bcdc8ce024c
      
      - `org.apache.spark.streaming.MapWithStateSuite`
        Build: https://ci.appveyor.com/project/spark-test/spark/build/72-MapWithStateSuite-1
        Diff: https://github.com/apache/spark/compare/master...spark-test:8f6945d0ccde022a23d3848f6b7fe6da1e7c902e
      
      **After**
      
      - `org.apache.spark.streaming.JavaAPISuite`
        Build started: [Streaming] `org.apache.spark.streaming.JavaAPISuite` [![PR-15867](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=3D74F2D5-B0D5-4E1D-874C-685AE694FD37&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/3D74F2D5-B0D5-4E1D-874C-685AE694FD37)
        Diff: https://github.com/apache/spark/compare/master...spark-test:3D74F2D5-B0D5-4E1D-874C-685AE694FD37
      
      - `org.apache.spark.streaming.MapWithStateSuite`
        Build started: [Streaming] `org.apache.spark.streaming.MapWithStateSuite` [![PR-15867](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=C8E88B64-49F0-4157-9AFA-FC3ACC442351&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/C8E88B64-49F0-4157-9AFA-FC3ACC442351)
        Diff: https://github.com/apache/spark/compare/master...spark-test:C8E88B64-49F0-4157-9AFA-FC3ACC442351
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15867 from HyukjinKwon/SPARK-18423.
      503378f1
    • Tathagata Das's avatar
      [SPARK-18440][STRUCTURED STREAMING] Pass correct query execution to FileFormatWriter · 1ae4652b
      Tathagata Das authored
      ## What changes were proposed in this pull request?
      
      SPARK-18012 refactored the file write path in FileStreamSink to use FileFormatWriter, which always uses the default non-streaming QueryExecution to perform the writes. This is wrong for FileStreamSink, because the streaming QueryExecution (i.e. IncrementalExecution) should be used to correctly incrementalize aggregation. With the addition of watermarks in SPARK-18124, the file stream sink should logically support aggregation + watermark + append mode. But it actually fails with
      ```
      16:23:07.389 ERROR org.apache.spark.sql.execution.streaming.StreamExecution: Query query-0 terminated with error
      java.lang.AssertionError: assertion failed: No plan for EventTimeWatermark timestamp#7: timestamp, interval 10 seconds
      +- LocalRelation [timestamp#7]
      
      	at scala.Predef$.assert(Predef.scala:170)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
      	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
      	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
      	at scala.collection.Iterator$class.foreach(Iterator.scala:893)
      	at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
      	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
      	at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
      	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
      	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
      	at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
      ```
      
      This PR fixes it by passing the correct query execution.
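
      A hedged sketch of the query shape the fix unblocks — aggregation + watermark + append mode writing to a file sink; the paths, source, and column names are illustrative.

      ```scala
      import org.apache.spark.sql.SparkSession
      import org.apache.spark.sql.functions.window

      val spark = SparkSession.builder().master("local[2]").appName("file-sink-watermark").getOrCreate()
      import spark.implicits._

      val counts = spark.readStream
        .format("socket").option("host", "localhost").option("port", 9999).load()
        .select($"value".cast("timestamp").as("eventTime"))
        .withWatermark("eventTime", "10 seconds")
        .groupBy(window($"eventTime", "5 seconds"))
        .count()

      // The file sink requires append mode; with the streaming (incremental) execution
      // now passed to FileFormatWriter, the watermarked aggregation can be planned and written.
      val query = counts.writeStream
        .format("parquet")
        .option("path", "/tmp/counts")
        .option("checkpointLocation", "/tmp/counts_chkpt")
        .outputMode("append")
        .start()
      ```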
      
      ## How was this patch tested?
      New unit test
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #15885 from tdas/SPARK-18440.
      1ae4652b
    • Weiqing Yang's avatar
      [SPARK-18417][YARN] Define 'spark.yarn.am.port' in yarn config object · 5bcb9a7f
      Weiqing Yang authored
      ## What changes were proposed in this pull request?
      This PR defines 'spark.yarn.am.port' in the YARN config object (config.scala), just like other YARN configurations. That makes the code easier to maintain.
      
      ## How was this patch tested?
      Build passed & tested some Yarn unit tests.
      
      Author: Weiqing Yang <yangweiqing001@gmail.com>
      
      Closes #15858 from weiqingy/yarn.
      5bcb9a7f
    • Burak Yavuz's avatar
      [SPARK-18337] Complete mode memory sinks should be able to recover from checkpoints · 2afdaa98
      Burak Yavuz authored
      ## What changes were proposed in this pull request?
      
      It would be nice if memory sinks could also recover from checkpoints. For correctness reasons, the only time we should support this is in `Complete` OutputMode. We can support it in Complete mode because the output of the StateStore is already persisted in the checkpoint directory.
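
      A hedged usage sketch of what this enables (paths and query name are illustrative): a Complete-mode memory sink started with a checkpoint location, which can then be recovered after a restart.

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[2]").appName("memory-sink-recovery").getOrCreate()
      import spark.implicits._

      // Aggregate a text directory stream; the running counts live in the StateStore,
      // which is persisted under the checkpoint location.
      val counts = spark.readStream.text("/tmp/words-in").groupBy($"value").count()

      val query = counts.writeStream
        .format("memory")
        .queryName("word_counts")
        .outputMode("complete")
        .option("checkpointLocation", "/tmp/word_counts_chkpt")
        .start()

      // After a restart with the same checkpoint location, the in-memory table
      // "word_counts" can be repopulated because Complete mode re-emits the full result.
      spark.sql("SELECT * FROM word_counts").show()
      ```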
      
      ## How was this patch tested?
      
      Unit test
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #15801 from brkyvz/mem-stream.
      2afdaa98
    • Aaditya Ramesh's avatar
      [SPARK-13027][STREAMING] Added batch time as a parameter to updateStateByKey · 6f9e598c
      Aaditya Ramesh authored
      Added RDD batch time as an input parameter to the update function in updateStateByKey.
      
      Author: Aaditya Ramesh <aramesh@conviva.com>
      
      Closes #11122 from aramesh117/SPARK-13027.
      6f9e598c