  1. Nov 22, 2016
  2. Nov 21, 2016
  3. Nov 20, 2016
• [SPARK-18467][SQL] Extracts method for preparing arguments from StaticInvoke,... · fb4e6359
      Takuya UESHIN authored
      [SPARK-18467][SQL] Extracts method for preparing arguments from StaticInvoke, Invoke and NewInstance and modify to short circuit if arguments have null when `needNullCheck == true`.
      
      ## What changes were proposed in this pull request?
      
This PR extracts a method for preparing arguments from `StaticInvoke`, `Invoke` and `NewInstance`, and modifies them to short-circuit if any argument is `null` when `propagateNull == true`.
      
      The steps are as follows:
      
      1. Introduce `InvokeLike` to extract common logic from `StaticInvoke`, `Invoke` and `NewInstance` to prepare arguments.
`StaticInvoke` and `Invoke` risked exceeding the 64KB JVM method-size limit when preparing arguments, but after this patch they can handle it because they share `NewInstance`'s argument-preparation code, which handles the limit well.
      
2. Remove unneeded null checking and fix the nullability of `NewInstance`.
Some null checks can be avoided because the expression is not nullable.
      
3. Modify to short-circuit if arguments contain `null` when `needNullCheck == true`.
If `needNullCheck == true`, preparing arguments can be skipped as soon as one of them is found to be `null`, so the code now short-circuits in that case (see the sketch below).
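A rough sketch of the short-circuit idea (a hypothetical helper for illustration only, not the actual Catalyst code generation):

```scala
// Illustration only: evaluate arguments left to right and, when needNullCheck is
// true, return null as soon as any argument evaluates to null, skipping both the
// remaining argument evaluation and the method call itself.
def invokeWithNullCheck(args: Seq[() => Any], needNullCheck: Boolean)(call: Seq[Any] => Any): Any = {
  val prepared = new Array[Any](args.length)
  var i = 0
  while (i < args.length) {
    prepared(i) = args(i)()
    if (needNullCheck && prepared(i) == null) {
      return null // short circuit
    }
    i += 1
  }
  call(prepared.toSeq)
}
```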
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #15901 from ueshin/issues/SPARK-18467.
      
      (cherry picked from commit 65854797)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
      fb4e6359
• [HOTFIX][SQL] Fix DDLSuite failure. · f8662db7
      Reynold Xin authored
      
      (cherry picked from commit b625a36e)
Signed-off-by: Reynold Xin <rxin@databricks.com>
      f8662db7
• [SPARK-17732][SQL] Revert ALTER TABLE DROP PARTITION should support comparators · cffaf503
      Herman van Hovell authored
      This reverts commit 1126c319.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15948 from hvanhovell/SPARK-17732.
      cffaf503
• [SPARK-3359][BUILD][DOCS] Print examples and disable group and tparam tags in javadoc · bc3e7b3b
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes/fixes two things.
      
- Remove many of the errors generated when building javadoc with Java 8, caused by the unrecognised tags `tparam` and `group`.
      
        ```
        [error] .../spark/mllib/target/java/org/apache/spark/ml/classification/Classifier.java:18: error: unknown tag: group
        [error]   /** group setParam */
        [error]       ^
        [error] .../spark/mllib/target/java/org/apache/spark/ml/classification/Classifier.java:8: error: unknown tag: tparam
        [error]  * tparam FeaturesType  Type of input features.  E.g., <code>Vector</code>
        [error]    ^
        ...
        ```
      
  It does not fully resolve the problem but removes many errors. Both `group` and `tparam` seem to be unrecognised in javadoc, and we can't print them nicely the way `example` is handled here because they appear differently (examples of both can be found at http://spark.apache.org/docs/2.0.2/api/scala/index.html#org.apache.spark.ml.classification.Classifier).
      
      - Print `example` in javadoc.
  Currently, there are a few `example` tags in several places.
      
        ```
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This operation might be used to evaluate a graph
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example We might use this operation to change the vertex values
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This function might be used to initialize edge
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This function might be used to initialize edge
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This function might be used to initialize edge
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example We can use this function to compute the in-degree of each
        ./graphx/src/main/scala/org/apache/spark/graphx/Graph.scala:   * example This function is used to update the vertices with new values based on external data.
        ./graphx/src/main/scala/org/apache/spark/graphx/GraphLoader.scala:   * example Loads a file in the following format:
        ./graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala:   * example This function is used to update the vertices with new
        ./graphx/src/main/scala/org/apache/spark/graphx/GraphOps.scala:   * example This function can be used to filter the graph based on some property, without
        ./graphx/src/main/scala/org/apache/spark/graphx/Pregel.scala: * example We can use the Pregel abstraction to implement PageRank:
        ./graphx/src/main/scala/org/apache/spark/graphx/VertexRDD.scala: * example Construct a `VertexRDD` from a plain RDD:
        ./repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkCommandLine.scala: * example new SparkCommandLine(Nil).settings
        ./repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkIMain.scala:   * example addImports("org.apache.spark.SparkContext")
        ./sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/LiteralGenerator.scala: * example {{{
        ```
      
      **Before**
      
        <img width="505" alt="2016-11-20 2 43 23" src="https://cloud.githubusercontent.com/assets/6477701/20457285/26f07e1c-aecb-11e6-9ae9-d9dee66845f4.png">
      
      **After**
        <img width="499" alt="2016-11-20 1 27 17" src="https://cloud.githubusercontent.com/assets/6477701/20457240/409124e4-aeca-11e6-9a91-0ba514148b52.png
      
      ">
      
      ## How was this patch tested?
      
Manually tested by `jekyll build` with Java 7 and 8:
      
      ```
      java version "1.7.0_80"
      Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
      Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
      ```
      
      ```
      java version "1.8.0_45"
      Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
      Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
      ```
      
Note: this does not make sbt unidoc succeed with Java 8 yet, but it reduces the number of errors with Java 8.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15939 from HyukjinKwon/SPARK-3359-javadoc.
      
      (cherry picked from commit c528812c)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      bc3e7b3b
  4. Nov 19, 2016
• [SQL] Fix documentation for Concat and ConcatWs · 063da0c8
      Reynold Xin authored
      
      (cherry picked from commit a64f25d8)
Signed-off-by: Reynold Xin <rxin@databricks.com>
      063da0c8
• [SPARK-18508][SQL] Fix documentation error for DateDiff · 94a9eed1
      Reynold Xin authored
      
      ## What changes were proposed in this pull request?
The previous documentation and example for DateDiff were wrong.
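For reference, the corrected behaviour can be checked as below (a hedged example assuming an active `spark` session; `datediff(endDate, startDate)` returns the number of days from `startDate` to `endDate`):

```scala
// datediff(endDate, startDate): days from startDate to endDate.
spark.sql("SELECT datediff('2009-07-31', '2009-07-30')").show()  // 1
```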
      
      ## How was this patch tested?
      Doc only change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #15937 from rxin/datediff-doc.
      
      (cherry picked from commit bce9a036)
Signed-off-by: Reynold Xin <rxin@databricks.com>
      94a9eed1
• [SPARK-18458][CORE] Fix signed integer overflow problem at an expression in RadixSort.java · b0b2f108
      Kazuaki Ishizaki authored
      
      ## What changes were proposed in this pull request?
      
This PR avoids a case where the result of an expression becomes negative due to signed integer overflow (e.g. 0x10?????? * 8 < 0). It casts each operand to `long` before executing the calculation, so the result is interpreted as a long and the value stays positive.
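The pattern can be illustrated with a small sketch (hypothetical values, not the exact expression from RadixSort.java):

```scala
val pos = 0x10000000                  // a large 32-bit value, e.g. an element count
val broken = pos * 8                  // Int * Int overflows to -2147483648 (< 0)
val fixed  = pos.toLong * 8L          // cast operands to long first: 2147483648 (> 0)
```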
      
      ## How was this patch tested?
      
      Manually executed query82 of TPC-DS with 100TB
      
      Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
      
      Closes #15907 from kiszk/SPARK-18458.
      
      (cherry picked from commit d93b6552)
Signed-off-by: Reynold Xin <rxin@databricks.com>
      b0b2f108
• [SPARK-18456][ML][FOLLOWUP] Use matrix abstraction for coefficients in LogisticRegression training · 15eb86c2
      sethah authored
      ## What changes were proposed in this pull request?
      
This is a follow-up to some of the discussion [here](https://github.com/apache/spark/pull/15593). During LogisticRegression training, we store the coefficients combined with the intercepts as a flat vector, but a more natural abstraction is a matrix. Here, we refactor the code to use a matrix where possible, which makes the code more readable and greatly simplifies the indexing.
      
      Note: We do not use a Breeze matrix for the cost function as was mentioned in the linked PR. This is because LBFGS/OWLQN require an implicit `MutableInnerProductModule[DenseMatrix[Double], Double]` which is not natively defined in Breeze. We would need to extend Breeze in Spark to define it ourselves. Also, we do not modify the `regParamL1Fun` because OWLQN in Breeze requires a `MutableEnumeratedCoordinateField[(Int, Int), DenseVector[Double]]` (since we still use a dense vector for coefficients). Here again we would have to extend Breeze inside Spark.
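A small illustration of why the matrix abstraction simplifies the indexing (hypothetical sizes and names, not code from the PR):

```scala
import org.apache.spark.ml.linalg.DenseMatrix

// Hypothetical dimensions: 3 coefficient sets (classes), 4 features + 1 intercept.
val numCoefficientSets = 3
val numFeaturesPlusIntercept = 5
val flat = Array.tabulate(numCoefficientSets * numFeaturesPlusIntercept)(_.toDouble)

// Flat-vector view: manual offset arithmetic for coefficient (classIdx, featureIdx).
def flatCoef(classIdx: Int, featureIdx: Int): Double =
  flat(classIdx * numFeaturesPlusIntercept + featureIdx)

// Matrix view over the same array (row-major, hence isTransposed = true): no offset math.
val coefMatrix =
  new DenseMatrix(numCoefficientSets, numFeaturesPlusIntercept, flat, isTransposed = true)

assert(flatCoef(1, 2) == coefMatrix(1, 2))
```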
      
      ## How was this patch tested?
      
      This is internal code refactoring - the current unit tests passing show us that the change did not break anything. No added functionality in this patch.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #15893 from sethah/logreg_refactor.
      
      (cherry picked from commit 856e0042)
Signed-off-by: DB Tsai <dbtsai@dbtsai.com>
      15eb86c2
• [SPARK-18448][CORE] Fix @since 2.1.0 on new SparkSession.close() method · 15ad3a31
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
Fix the `@since 2.1.0` tag on the new SparkSession.close() method. I goofed in https://github.com/apache/spark/pull/15932 because it was back-ported to 2.1 instead of just master as originally planned.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15938 from srowen/SPARK-18448.2.
      
      (cherry picked from commit ded5fefb)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      15ad3a31
• [SPARK-18353][CORE] spark.rpc.askTimeout default value is not 120s · 30a6fbbb
      Sean Owen authored
      
      ## What changes were proposed in this pull request?
      
Avoid hard-coding spark.rpc.askTimeout to a non-default value in Client; fix the documentation about the spark.rpc.askTimeout default.
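For context, a hedged sketch of how the setting is meant to be used; as I understand it, `spark.rpc.askTimeout` has no hard-coded default of its own and falls back to `spark.network.timeout` (120s by default) when unset:

```scala
import org.apache.spark.SparkConf

// Only override when a non-default timeout is really needed; otherwise the
// spark.network.timeout fallback applies.
val conf = new SparkConf().set("spark.rpc.askTimeout", "120s")
```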
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15833 from srowen/SPARK-18353.
      
      (cherry picked from commit 8b1e1088)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      30a6fbbb
• [SPARK-18445][BUILD][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note... · 4b396a65
      hyukjinkwon authored
      [SPARK-18445][BUILD][DOCS] Fix the markdown for `Note:`/`NOTE:`/`Note that`/`'''Note:'''` across Scala/Java API documentation
      
It seems that in Scala/Java, notes are currently written in several inconsistent forms:
      
      - `Note:`
      - `NOTE:`
      - `Note that`
      - `'''Note:'''`
      - `note`
      
      This PR proposes to fix those to `note` to be consistent.
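For illustration, the kind of rewrite this implies in a Scaladoc comment (a made-up example, not one of the PR's actual diffs):

```scala
// Before: plain "Note:" text, which javadoc/genjavadoc cannot recognise as a note.
/**
 * Returns a sampled subset of this RDD.
 *
 * Note: this is not guaranteed to return exactly the specified fraction.
 */

// After: the `@note` tag, which renders consistently in the generated API docs.
/**
 * Returns a sampled subset of this RDD.
 *
 * @note This is not guaranteed to return exactly the specified fraction.
 */
```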
      
      **Before**
      
      - Scala
        ![2016-11-17 6 16 39](https://cloud.githubusercontent.com/assets/6477701/20383180/1a7aed8c-acf2-11e6-9611-5eaf6d52c2e0.png)
      
      - Java
        ![2016-11-17 6 14 41](https://cloud.githubusercontent.com/assets/6477701/20383096/c8ffc680-acf1-11e6-914a-33460bf1401d.png)
      
      **After**
      
      - Scala
        ![2016-11-17 6 16 44](https://cloud.githubusercontent.com/assets/6477701/20383167/09940490-acf2-11e6-937a-0d5e1dc2cadf.png)
      
      - Java
  ![2016-11-17 6 13 39](https://cloud.githubusercontent.com/assets/6477701/20383132/e7c2a57e-acf1-11e6-9c47-b849674d4d88.png)
      
      The notes were found via
      
      ```bash
      grep -r "NOTE: " . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// NOTE: " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \ # note that this is a regular expression. So actual matches were mostly `org/apache/spark/api/java/functions ...`
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      ```bash
      grep -r "Note that " . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// Note that " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      ```bash
      grep -r "Note: " . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// Note: " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      ```bash
      grep -r "'''Note:'''" . | \ # Note:|NOTE:|Note that|'''Note:'''
      grep -v "// '''Note:''' " | \  # starting with // does not appear in API documentation.
      grep -E '.scala|.java' | \ # java/scala files
      grep -v Suite | \ # exclude tests
      grep -v Test | \ # exclude tests
      grep -e 'org.apache.spark.api.java' \ # packages appear in API documenation
      -e 'org.apache.spark.api.java.function' \
      -e 'org.apache.spark.api.r' \
      ...
      ```
      
      And then fixed one by one comparing with API documentation/access modifiers.
      
      After that, manually tested via `jekyll build`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15889 from HyukjinKwon/SPARK-18437.
      
      (cherry picked from commit d5b1d5fc)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      4b396a65
• [SPARK-18448][CORE] SparkSession should implement java.lang.AutoCloseable like JavaSparkContext · 693401be
      Sean Owen authored
      
      ## What changes were proposed in this pull request?
      
Just adds `close()` + `Closeable` as a synonym for `stop()`. This makes it usable in Java try-with-resources, as suggested by ash211 (`Closeable` extends `AutoCloseable`, by the way).
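A minimal usage sketch (my own example, not from the PR):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("close-example").master("local[*]").getOrCreate()
try {
  spark.range(10).count()
} finally {
  spark.close()  // synonym for stop(); in Java this also enables try-with-resources
}
```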
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15932 from srowen/SPARK-18448.
      
      (cherry picked from commit db9fb9ba)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      693401be
  5. Nov 18, 2016
• [SPARK-18497][SS] Make ForeachSink support watermark · b4bad04c
      Shixiong Zhu authored
      
      ## What changes were proposed in this pull request?
      
The issue in ForeachSink is that the newly created Dataset still uses the old QueryExecution. When `foreachPartition` is called, `QueryExecution.toString` is invoked and then fails because it doesn't know how to plan EventTimeWatermark.
      
      This PR just replaces the QueryExecution with IncrementalExecution to fix the issue.
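The scenario this fixes looks roughly like the following usage sketch (hypothetical column names; `events` is assumed to be a streaming DataFrame with an `eventTime` timestamp column):

```scala
import org.apache.spark.sql.{DataFrame, ForeachWriter, Row}
import org.apache.spark.sql.functions.{col, window}
import org.apache.spark.sql.streaming.StreamingQuery

def startForeachWithWatermark(events: DataFrame): StreamingQuery = {
  events
    .withWatermark("eventTime", "10 minutes")
    .groupBy(window(col("eventTime"), "5 minutes"))
    .count()
    .writeStream
    .outputMode("append")
    .foreach(new ForeachWriter[Row] {
      override def open(partitionId: Long, version: Long): Boolean = true
      override def process(value: Row): Unit = println(value)
      override def close(errorOrNull: Throwable): Unit = ()
    })
    .start()
}
```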
      
      ## How was this patch tested?
      
      `test("foreach with watermark")`.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15934 from zsxwing/SPARK-18497.
      
      (cherry picked from commit 2a40de40)
Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
      b4bad04c
• [SPARK-18505][SQL] Simplify AnalyzeColumnCommand · 4b1df0e8
      Reynold Xin authored
      
      ## What changes were proposed in this pull request?
I'm spending more time at the design & code level for the cost-based optimizer now, and have found a number of issues related to maintainability and compatibility that I would like to address.
      
      This is a small pull request to clean up AnalyzeColumnCommand:
      
      1. Removed warning on duplicated columns. Warnings in log messages are useless since most users that run SQL don't see them.
      2. Removed the nested updateStats function, by just inlining the function.
      3. Renamed a few functions to better reflect what they do.
4. Removed the factory apply method for ColumnStatStruct. It is a bad pattern to use an apply method that returns an instantiation of a class that is not of the same type (ColumnStatStruct.apply used to return CreateNamedStruct).
      5. Renamed ColumnStatStruct to just AnalyzeColumnCommand.
      6. Added more documentation explaining some of the non-obvious return types and code blocks.
      
      In follow-up pull requests, I'd like to address the following:
      
      1. Get rid of the Map[String, ColumnStat] map, since internally we should be using Attribute to reference columns, rather than strings.
      2. Decouple the fields exposed by ColumnStat and internals of Spark SQL's execution path. Currently the two are coupled because ColumnStat takes in an InternalRow.
      3. Correctness: Remove code path that stores statistics in the catalog using the base64 encoding of the UnsafeRow format, which is not stable across Spark versions.
      4. Clearly document the data representation stored in the catalog for statistics.
      
      ## How was this patch tested?
      Affected test cases have been updated.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #15933 from rxin/SPARK-18505.
      
      (cherry picked from commit 6f7ff750)
Signed-off-by: Reynold Xin <rxin@databricks.com>
      4b1df0e8
• [SPARK-18477][SS] Enable interrupts for HDFS in HDFSMetadataLog · 136f687c
      Shixiong Zhu authored
      
      ## What changes were proposed in this pull request?
      
      HDFS `write` may just hang until timeout if some network error happens. It's better to enable interrupts to allow stopping the query fast on HDFS.
      
      This PR just changes the logic to only disable interrupts for local file system, as HADOOP-10622 only happens for local file system.
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15911 from zsxwing/interrupt-on-dfs.
      
      (cherry picked from commit e5f5c29e)
Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
      136f687c
• [SPARK-18422][CORE] Fix wholeTextFiles test to pass on Windows in JavaAPISuite · 6717981e
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
This PR fixes the test `wholeTextFiles` in `JavaAPISuite.java`, which fails due to the different path format on Windows.
      
      For example, the path in `container` was
      
      ```
      C:\projects\spark\target\tmp\1478967560189-0/part-00000
      ```
      
      whereas `new URI(res._1()).getPath()` was as below:
      
      ```
      /C:/projects/spark/target/tmp/1478967560189-0/part-00000
      ```
      
      ## How was this patch tested?
      
      Tests in `JavaAPISuite.java`.
      
      Tested via AppVeyor.
      
      **Before**
      Build: https://ci.appveyor.com/project/spark-test/spark/build/63-JavaAPISuite-1
      Diff: https://github.com/apache/spark/compare/master...spark-test:JavaAPISuite-1
      
      ```
      [info] Test org.apache.spark.JavaAPISuite.wholeTextFiles started
      [error] Test org.apache.spark.JavaAPISuite.wholeTextFiles failed: java.lang.AssertionError: expected:<spark is easy to use.
      [error] > but was:<null>, took 0.578 sec
      [error]     at org.apache.spark.JavaAPISuite.wholeTextFiles(JavaAPISuite.java:1089)
      ...
      ```
      
      **After**
      Build started: [CORE] `org.apache.spark.JavaAPISuite` [![PR-15866](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=198DDA52-F201-4D2B-BE2F-244E0C1725B2&svg=true)](https://ci.appveyor.com/project/spark-test/spark/branch/198DDA52-F201-4D2B-BE2F-244E0C1725B2)
      Diff: https://github.com/apache/spark/compare/master...spark-test:198DDA52-F201-4D2B-BE2F-244E0C1725B2
      
      
      
      ```
      [info] Test org.apache.spark.JavaAPISuite.wholeTextFiles started
      ...
      ```
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15866 from HyukjinKwon/SPARK-18422.
      
      (cherry picked from commit 40d59ff5)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      6717981e
• [SPARK-18457][SQL] ORC and other columnar formats using HiveShim read all... · ec622eb7
      Andrew Ray authored
      [SPARK-18457][SQL] ORC and other columnar formats using HiveShim read all columns when doing a simple count
      
      ## What changes were proposed in this pull request?
      
      When reading zero columns (e.g., count(*)) from ORC or any other format that uses HiveShim, actually set the read column list to empty for Hive to use.
      
      ## How was this patch tested?
      
      Query correctness is handled by existing unit tests. I'm happy to add more if anyone can point out some case that is not covered.
      
      Reduction in data read can be verified in the UI when built with a recent version of Hadoop say:
      ```
      build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.0 -Phive -DskipTests clean package
      ```
However, the default Hadoop 2.2 used for unit tests does not report actual bytes read and instead reports just full file sizes (see FileScanRDD.scala line 80). Therefore I don't think there is a good way to add a unit test for this.
      
      I tested with the following setup using above build options
      ```
      case class OrcData(intField: Long, stringField: String)
      spark.range(1,1000000).map(i => OrcData(i, s"part-$i")).toDF().write.format("orc").save("orc_test")
      
      sql(
            s"""CREATE EXTERNAL TABLE orc_test(
               |  intField LONG,
               |  stringField STRING
               |)
               |STORED AS ORC
               |LOCATION '${System.getProperty("user.dir") + "/orc_test"}'
             """.stripMargin)
      ```
      
      ## Results
      
      query | Spark 2.0.2 | this PR
      ---|---|---
      `sql("select count(*) from orc_test").collect`|4.4 MB|199.4 KB
      `sql("select intField from orc_test").collect`|743.4 KB|743.4 KB
      `sql("select * from orc_test").collect`|4.4 MB|4.4 MB
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #15898 from aray/sql-orc-no-col.
      
      (cherry picked from commit 795e9fc9)
Signed-off-by: Reynold Xin <rxin@databricks.com>
      ec622eb7
• [SPARK-18187][SQL] CompactibleFileStreamLog should not use "compactInterval"... · 5912c19e
      Tyson Condie authored
[SPARK-18187][SQL] CompactibleFileStreamLog should not use "compactInterval" directly with the user setting.
      
      ## What changes were proposed in this pull request?
CompactibleFileStreamLog relies on "compactInterval" to detect a compaction batch. If "compactInterval" is reset by the user, CompactibleFileStreamLog will return a wrong answer, resulting in data loss. This PR provides a way to check the validity of 'compactInterval' and calculate an appropriate value.
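A simplified sketch of the idea (illustrative logic only, not the exact implementation): after a restart, derive an interval under which the most recent compaction batch is still treated as a compaction batch.

```scala
// Compaction batches satisfy (batchId + 1) % compactInterval == 0. If the user's new
// interval would break that invariant for the latest compaction batch, search for the
// nearest interval that preserves it.
def deriveValidCompactInterval(proposed: Int, latestCompactBatchId: Int): Int = {
  require(proposed > 0, "compactInterval must be positive")
  val size = latestCompactBatchId + 1
  if (size % proposed == 0) proposed
  else ((proposed + 1) to size).find(size % _ == 0).getOrElse(size)
}
```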
      
      ## How was this patch tested?
When restarting a stream, we change 'spark.sql.streaming.fileSource.log.compactInterval' to a value different from the former one.
      
      The primary solution to this issue was given by uncleGen
      Added extensions include an additional metadata field in OffsetSeq and CompactibleFileStreamLog APIs. zsxwing
      
      Author: Tyson Condie <tcondie@gmail.com>
      Author: genmao.ygm <genmao.ygm@genmaoygmdeMacBook-Air.local>
      
      Closes #15852 from tcondie/spark-18187.
      
      (cherry picked from commit 51baca22)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
      5912c19e
  6. Nov 17, 2016
• [SPARK-18462] Fix ClassCastException in SparkListenerDriverAccumUpdates event · e8b1955e
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
      This patch fixes a `ClassCastException: java.lang.Integer cannot be cast to java.lang.Long` error which could occur in the HistoryServer while trying to process a deserialized `SparkListenerDriverAccumUpdates` event.
      
The problem stems from how `jackson-module-scala` handles primitive type parameters (see https://github.com/FasterXML/jackson-module-scala/wiki/FAQ#deserializing-optionint-and-other-primitive-challenges for more details). This caused a problem where our code expected a field to be deserialized as a `(Long, Long)` tuple but got an `(Int, Int)` tuple instead.
      
      This patch hacks around this issue by registering a custom `Converter` with Jackson in order to deserialize the tuples as `(Object, Object)` and perform the appropriate casting.
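Conceptually, the registered converter looks something like the following sketch (class and type names here are illustrative; the patch's actual wiring into Jackson annotations is not shown):

```scala
import com.fasterxml.jackson.databind.util.StdConverter

// Accept whichever boxed numeric types Jackson produced (Integer or Long) and
// normalize them into a (Long, Long) tuple.
class LongLongTupleConverter extends StdConverter[(Object, Object), (Long, Long)] {
  override def convert(value: (Object, Object)): (Long, Long) = {
    (value._1.asInstanceOf[Number].longValue(), value._2.asInstanceOf[Number].longValue())
  }
}
```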
      
      ## How was this patch tested?
      
      New regression tests in `SQLListenerSuite`.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15922 from JoshRosen/SPARK-18462.
      
      (cherry picked from commit d9dd979d)
Signed-off-by: Reynold Xin <rxin@databricks.com>
      e8b1955e
• [SPARK-18360][SQL] default table path of tables in default database should... · fc466be4
      Wenchen Fan authored
      [SPARK-18360][SQL] default table path of tables in default database should depend on the location of default database
      
      ## What changes were proposed in this pull request?
      
      The current semantic of the warehouse config:
      
      1. it's a static config, which means you can't change it once your spark application is launched.
      2. Once a database is created, its location won't change even the warehouse path config is changed.
3. The default database is a special case: although its location is fixed, the locations of tables created in it are not. If a Spark app starts with warehouse path B (while the location of the default database is A) and users create a table `tbl` in the default database, its location will be `B/tbl` instead of `A/tbl`. If users then change the warehouse path config to C and create another table `tbl2`, its location will still be `B/tbl2` instead of `C/tbl2`.
      
Rule 3 doesn't make sense and I think we made it by mistake, not intentionally. Data source tables don't follow rule 3 and treat the default database like normal ones.
      
      This PR fixes hive serde tables to make it consistent with data source tables.
      
      ## How was this patch tested?
      
      HiveSparkSubmitSuite
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15812 from cloud-fan/default-db.
      
      (cherry picked from commit ce13c267)
Signed-off-by: Yin Huai <yhuai@databricks.com>
      fc466be4
• [SPARK-18490][SQL] duplication nodename extrainfo for ShuffleExchange · 97879888
      root authored
      
      ## What changes were proposed in this pull request?
      
In ShuffleExchange, the nodeName's extraInfo is the same whether exchangeCoordinator.isEstimated is true or false.

This PR merges the two situations.
      
      Author: root <root@iZbp1gsnrlfzjxh82cz80vZ.(none)>
      
      Closes #15920 from windpiger/DupNodeNameShuffleExchange.
      
      (cherry picked from commit b0aa1aa1)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      97879888
• [SPARK-18480][DOCS] Fix wrong links for ML guide docs · 536a2159
      Zheng RuiFeng authored
      
      ## What changes were proposed in this pull request?
1. There are two `[Graph.partitionBy]` links in `graphx-programming-guide.md`; the first one had no effect.
2. `DataFrame`, `Transformer`, `Pipeline` and `Parameter` in `ml-pipeline.md` were linked to `ml-guide.html` by mistake.
3. `PythonMLLibAPI` in `mllib-linear-methods.md` was not accessible, because class `PythonMLLibAPI` is private.
4. Other link updates.
      ## How was this patch tested?
Manual tests.
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #15912 from zhengruifeng/md_fix.
      
      (cherry picked from commit cdaf4ce9)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      536a2159
• [SPARK-17462][MLLIB]use VersionUtils to parse Spark version strings · 42777b1b
      VinceShieh authored
      
      ## What changes were proposed in this pull request?
      
Several places in MLlib use custom regexes or other approaches to parse Spark versions. Those should be fixed to use VersionUtils. This PR replaces the custom regexes with VersionUtils to get Spark version numbers.
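For example (a hedged sketch using the helpers in `org.apache.spark.util.VersionUtils`):

```scala
import org.apache.spark.util.VersionUtils

// Instead of a hand-rolled regex over the version string:
val (major, minor) = VersionUtils.majorMinorVersion("2.1.0")  // (2, 1)
val majorOnly = VersionUtils.majorVersion("2.1.0")            // 2
```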
      ## How was this patch tested?
      
      Existing tests.
      
Signed-off-by: VinceShieh <vincent.xie@intel.com>
      
      Author: VinceShieh <vincent.xie@intel.com>
      
      Closes #15055 from VinceShieh/SPARK-17462.
      
      (cherry picked from commit de77c677)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      42777b1b
• [SPARK-18365][DOCS] Improve Sample Method Documentation · 4fcecb4c
      anabranch authored
      ## What changes were proposed in this pull request?
      
I found the documentation for the sample method confusing; this adds more clarification across all languages (a usage sketch follows the checklist below).
      
      - [x] Scala
      - [x] Python
      - [x] R
      - [x] RDD Scala
      - [ ] RDD Python with SEED
      - [X] RDD Java
      - [x] RDD Java with SEED
      - [x] RDD Python
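As a point of reference, a hedged Scala illustration of the main point of confusion (the `fraction` argument is an expected proportion, not an exact count; assumes an existing SparkContext `sc`):

```scala
val rdd = sc.parallelize(1 to 1000)

// Each element is kept with probability ~0.1, so the sample size is only
// approximately 100 and varies between runs unless a seed is fixed.
val sampled = rdd.sample(withReplacement = false, fraction = 0.1, seed = 42L)
println(sampled.count())
```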
      
      ## How was this patch tested?
      
      NA
      
Please review https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark before opening a pull request.
      
      Author: anabranch <wac.chambers@gmail.com>
      Author: Bill Chambers <bill@databricks.com>
      
      Closes #15815 from anabranch/SPARK-18365.
      
      (cherry picked from commit 49b6f456)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      4fcecb4c
• [YARN][DOC] Remove non-Yarn specific configurations from running-on-yarn.md · 2ee4fc88
      Weiqing Yang authored
      
      ## What changes were proposed in this pull request?
      
Remove `spark.driver.memory`, `spark.executor.memory`, `spark.driver.cores`, and `spark.executor.cores` from `running-on-yarn.md` as they are not Yarn-specific, and they are also defined in `configuration.md`.
      
      ## How was this patch tested?
      Build passed & Manually check.
      
      Author: Weiqing Yang <yangweiqing001@gmail.com>
      
      Closes #15869 from weiqingy/yarnDoc.
      
      (cherry picked from commit a3cac7bd)
Signed-off-by: Sean Owen <sowen@cloudera.com>
      2ee4fc88
• [SPARK-18464][SQL] support old table which doesn't store schema in metastore · 014fceee
      Wenchen Fan authored
      
      ## What changes were proposed in this pull request?
      
Before Spark 2.1, users could create an external data source table without a schema, and we would infer the table schema at runtime. In Spark 2.1, we decided to infer the schema when the table is created, so that we don't need to infer it again and again at runtime.
      
This is a good improvement, but we should still respect and support old tables which don't store the table schema in the metastore.
      
      ## How was this patch tested?
      
      regression test.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15900 from cloud-fan/hive-catalog.
      
      (cherry picked from commit 07b3f045)
Signed-off-by: Reynold Xin <rxin@databricks.com>
      014fceee