  1. Oct 01, 2016
    • Herman van Hovell's avatar
      [SPARK-17717][SQL] Add Exist/find methods to Catalog [FOLLOW-UP] · af6ece33
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      We added find and exists methods for Databases, Tables and Functions to the user facing Catalog in PR https://github.com/apache/spark/pull/15301. However, it was brought up that the semantics of the `find` methods are more in line with a `get` method (get an object or else fail), so we rename them in this PR.
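
      A minimal usage sketch of the renamed API, assuming a local `SparkSession` named `spark`: the `get*` variants fail when the object is missing, while the `*Exists` variants return a Boolean.

      ```scala
      import org.apache.spark.sql.SparkSession

      val spark = SparkSession.builder().master("local[*]").appName("catalog-get").getOrCreate()
      spark.range(10).createOrReplaceTempView("numbers")

      if (spark.catalog.tableExists("numbers")) {
        val table = spark.catalog.getTable("numbers")  // throws if the table does not exist
        println(s"${table.name} isTemporary=${table.isTemporary}")
      }
      ```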
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15308 from hvanhovell/SPARK-17717-2.
      af6ece33
    • Eric Liang's avatar
      [SPARK-17740] Spark tests should mock / interpose HDFS to ensure that streams are closed · 4bcd9b72
      Eric Liang authored
      ## What changes were proposed in this pull request?
      
      As a follow-up to SPARK-17666, ensure filesystem connections are not leaked, at least in unit tests. This is done by intercepting filesystem calls, as suggested by JoshRosen. At the end of each test, we assert that no filesystem streams are left open.
      
      This applies to all tests using SharedSQLContext or SharedSparkContext.
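
      A hedged sketch of the interposition idea (the class and helper names here are illustrative, not the exact ones added to Spark's test harness): a Hadoop `FileSystem` wrapper records every stream it opens and removes the entry on close, so a test's teardown can assert that nothing leaked.

      ```scala
      import java.util.concurrent.ConcurrentHashMap
      import org.apache.hadoop.fs.{FSDataInputStream, Path, RawLocalFileSystem}

      class LeakCheckingFileSystem extends RawLocalFileSystem {
        override def open(f: Path, bufferSize: Int): FSDataInputStream = {
          val inner = super.open(f, bufferSize)
          LeakCheckingFileSystem.openStreams.put(inner, f.toString)
          // Wrap the stream so that closing it also removes the tracking entry.
          new FSDataInputStream(inner) {
            override def close(): Unit = {
              LeakCheckingFileSystem.openStreams.remove(inner)
              super.close()
            }
          }
        }
      }

      object LeakCheckingFileSystem {
        val openStreams = new ConcurrentHashMap[FSDataInputStream, String]()

        // Called at the end of each test: fails if any opened stream was never closed.
        def assertNoOpenStreams(): Unit =
          assert(openStreams.isEmpty, s"Leaked filesystem streams: ${openStreams.values()}")
      }
      ```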
      
      ## How was this patch tested?
      
      I verified that tests in sql and core are indeed using the filesystem backend, and fixed the detected leaks. I also checked that reverting https://github.com/apache/spark/pull/15245 causes many actual test failures due to connection leaks.
      
      Author: Eric Liang <ekl@databricks.com>
      Author: Eric Liang <ekhliang@gmail.com>
      
      Closes #15306 from ericl/sc-4672.
      4bcd9b72
    • Dongjoon Hyun's avatar
      [MINOR][DOC] Add an up-to-date description for default serialization during shuffling · 15e9bbb4
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR aims to make the doc up-to-date. The documentation is generally correct, but after https://issues.apache.org/jira/browse/SPARK-13926, Spark chooses Kryo as the default serialization library when shuffling simple types, arrays of simple types, or string type.
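
      A hedged illustration of the related configuration: the automatic Kryo choice applies only to shuffles of simple types, arrays of simple types, and strings; for other types the default remains Java serialization unless `spark.serializer` is set explicitly.

      ```scala
      import org.apache.spark.{SparkConf, SparkContext}

      val conf = new SparkConf()
        .setMaster("local[2]")
        .setAppName("serializer-demo")
        // Opt in to Kryo for all types, not just the simple ones chosen automatically.
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

      val sc = new SparkContext(conf)
      // String keys would already get Kryo automatically during this shuffle.
      val counts = sc.parallelize(Seq("a", "b", "a")).map(w => (w, 1)).reduceByKey(_ + _).collect()
      println(counts.mkString(", "))
      sc.stop()
      ```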
      
      ## How was this patch tested?
      
      This is a documentation update.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #15315 from dongjoon-hyun/SPARK-DOC-SERIALIZER.
      15e9bbb4
  2. Sep 30, 2016
    • Dongjoon Hyun's avatar
      [SPARK-17739][SQL] Collapse adjacent similar Window operators · aef506e3
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      Currently, Spark does not collapse adjacent windows with the same partitioning and sorting. This PR implements a `CollapseWindow` optimizer rule that does the following:
      
      1. If the partition specs and order specs are the same, collapse into the parent.
      2. If the partition specs are the same and one order spec is a prefix of the other, collapse to the more specific one.
      
      For example:
      ```scala
      val df = spark.range(1000).select($"id" % 100 as "grp", $"id", rand() as "col1", rand() as "col2")
      
      // Add summary statistics for all columns
      import org.apache.spark.sql.expressions.Window
      val cols = Seq("id", "col1", "col2")
      val window = Window.partitionBy($"grp").orderBy($"id")
      val result = cols.foldLeft(df) { (base, name) =>
        base.withColumn(s"${name}_avg", avg(col(name)).over(window))
            .withColumn(s"${name}_stddev", stddev(col(name)).over(window))
            .withColumn(s"${name}_min", min(col(name)).over(window))
            .withColumn(s"${name}_max", max(col(name)).over(window))
      }
      ```
      
      **Before**
      ```scala
      scala> result.explain
      == Physical Plan ==
      Window [max(col2#19) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col2_max#234], [grp#17L], [id#14L ASC NULLS FIRST]
      +- Window [min(col2#19) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col2_min#216], [grp#17L], [id#14L ASC NULLS FIRST]
         +- Window [stddev_samp(col2#19) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col2_stddev#191], [grp#17L], [id#14L ASC NULLS FIRST]
            +- Window [avg(col2#19) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col2_avg#167], [grp#17L], [id#14L ASC NULLS FIRST]
               +- Window [max(col1#18) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col1_max#152], [grp#17L], [id#14L ASC NULLS FIRST]
                  +- Window [min(col1#18) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col1_min#138], [grp#17L], [id#14L ASC NULLS FIRST]
                     +- Window [stddev_samp(col1#18) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col1_stddev#117], [grp#17L], [id#14L ASC NULLS FIRST]
                        +- Window [avg(col1#18) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col1_avg#97], [grp#17L], [id#14L ASC NULLS FIRST]
                           +- Window [max(id#14L) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS id_max#86L], [grp#17L], [id#14L ASC NULLS FIRST]
                              +- Window [min(id#14L) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS id_min#76L], [grp#17L], [id#14L ASC NULLS FIRST]
                                 +- *Project [grp#17L, id#14L, col1#18, col2#19, id_avg#26, id_stddev#42]
                                    +- Window [stddev_samp(_w0#59) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS id_stddev#42], [grp#17L], [id#14L ASC NULLS FIRST]
                                       +- *Project [grp#17L, id#14L, col1#18, col2#19, id_avg#26, cast(id#14L as double) AS _w0#59]
                                          +- Window [avg(id#14L) windowspecdefinition(grp#17L, id#14L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS id_avg#26], [grp#17L], [id#14L ASC NULLS FIRST]
                                             +- *Sort [grp#17L ASC NULLS FIRST, id#14L ASC NULLS FIRST], false, 0
                                                +- Exchange hashpartitioning(grp#17L, 200)
                                                   +- *Project [(id#14L % 100) AS grp#17L, id#14L, rand(-6329949029880411066) AS col1#18, rand(-7251358484380073081) AS col2#19]
                                                      +- *Range (0, 1000, step=1, splits=Some(8))
      ```
      
      **After**
      ```scala
      scala> result.explain
      == Physical Plan ==
      Window [max(col2#5) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col2_max#220, min(col2#5) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col2_min#202, stddev_samp(col2#5) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col2_stddev#177, avg(col2#5) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col2_avg#153, max(col1#4) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col1_max#138, min(col1#4) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col1_min#124, stddev_samp(col1#4) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col1_stddev#103, avg(col1#4) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS col1_avg#83, max(id#0L) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS id_max#72L, min(id#0L) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS id_min#62L], [grp#3L], [id#0L ASC NULLS FIRST]
      +- *Project [grp#3L, id#0L, col1#4, col2#5, id_avg#12, id_stddev#28]
         +- Window [stddev_samp(_w0#45) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS id_stddev#28], [grp#3L], [id#0L ASC NULLS FIRST]
            +- *Project [grp#3L, id#0L, col1#4, col2#5, id_avg#12, cast(id#0L as double) AS _w0#45]
               +- Window [avg(id#0L) windowspecdefinition(grp#3L, id#0L ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS id_avg#12], [grp#3L], [id#0L ASC NULLS FIRST]
                  +- *Sort [grp#3L ASC NULLS FIRST, id#0L ASC NULLS FIRST], false, 0
                     +- Exchange hashpartitioning(grp#3L, 200)
                        +- *Project [(id#0L % 100) AS grp#3L, id#0L, rand(6537478539664068821) AS col1#4, rand(-8961093871295252795) AS col2#5]
                           +- *Range (0, 1000, step=1, splits=Some(8))
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins tests with a newly added testsuite.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #15317 from dongjoon-hyun/SPARK-17739.
      aef506e3
    • Shubham Chopra's avatar
      [SPARK-15353][CORE] Making peer selection for block replication pluggable · a26afd52
      Shubham Chopra authored
      ## What changes were proposed in this pull request?
      
      This PR makes block replication strategies pluggable. It provides two traits that can be implemented: one that maps a host to its topology and is used on the master, and a second that prioritizes a list of peers for block replication and runs on the executors.
      
      This patch contains default implementations of these traits that make sure current Spark behavior is unchanged.
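
      An illustrative sketch of the two extension points described above (names and signatures are simplified, not the actual Spark traits):

      ```scala
      import scala.util.Random

      case class Peer(host: String, port: Int)

      // Runs on the master: maps a host to its topology (e.g. a rack).
      trait TopologyMapper {
        def topologyFor(host: String): Option[String]
      }

      // Runs on executors: orders the candidate peers for replicating a block.
      trait ReplicationPrioritizer {
        def prioritize(peers: Seq[Peer], numReplicas: Int): Seq[Peer]
      }

      // Default behavior preserving the pre-patch semantics: peers chosen in random order.
      class RandomPrioritizer extends ReplicationPrioritizer {
        override def prioritize(peers: Seq[Peer], numReplicas: Int): Seq[Peer] =
          Random.shuffle(peers).take(numReplicas)
      }
      ```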
      
      ## How was this patch tested?
      
      This patch should not change Spark behavior in any way, and was tested with unit tests for storage.
      
      Author: Shubham Chopra <schopra31@bloomberg.net>
      
      Closes #13152 from shubhamchopra/RackAwareBlockReplication.
      a26afd52
    • Takuya UESHIN's avatar
      [SPARK-17703][SQL] Add unnamed version of addReferenceObj for minor objects. · 81455a9c
      Takuya UESHIN authored
      ## What changes were proposed in this pull request?
      
      There are many minor objects in `references` that are extracted into fields of the generated class, e.g. `errMsg` in `GetExternalRowField` or `ValidateExternalType`, but the number of fields in a class is limited, so we should reduce it.
      This PR adds an unnamed version of `addReferenceObj` for these minor objects, which does not store the object in a field but refers to it through the `references` array at the point of use.
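
      A conceptual sketch of the unnamed variant (a simplification, not the actual `CodegenContext` API): the object is appended to the shared `references` array and the generated code indexes into it inline, so no per-object class field is needed.

      ```scala
      import scala.collection.mutable.ArrayBuffer

      class TinyCodegenContext {
        // Objects handed to the generated class at construction time.
        val references = new ArrayBuffer[Any]()

        // Unnamed style: no generated field, just an inline cast into the references array.
        def addReferenceObj(obj: Any, clazz: String): String = {
          references += obj
          s"(($clazz) references[${references.size - 1}])"
        }
      }

      val ctx = new TinyCodegenContext
      // The generated Java code embeds this expression wherever the object is used.
      println(ctx.addReferenceObj("cannot read external row field", "java.lang.String"))
      // ((java.lang.String) references[0])
      ```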
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #15276 from ueshin/issues/SPARK-17703.
      81455a9c
    • Davies Liu's avatar
      [SPARK-17738] [SQL] fix ARRAY/MAP in columnar cache · f327e168
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      The actualSize() of arrays and maps differs from their actual size: the header is an Int, rather than a Long.
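
      A minimal sketch of the size accounting in question (assumed layout, not the actual ColumnType code): the serialized array/map value is prefixed by a 4-byte Int length header, so the estimate must add 4 bytes rather than 8.

      ```scala
      // `serializedBytes` stands in for the serialized array/map payload.
      def actualSize(serializedBytes: Array[Byte]): Int = {
        val headerSize = 4  // length stored as an Int, not a Long
        headerSize + serializedBytes.length
      }

      println(actualSize(Array[Byte](1, 2, 3)))  // 7
      ```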
      
      ## How was this patch tested?
      
      The flaky test should be fixed.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15305 from davies/fix_MAP.
      f327e168
    • Zheng RuiFeng's avatar
      [SPARK-14077][ML][FOLLOW-UP] Revert change for NB Model's Load to maintain... · 8e491af5
      Zheng RuiFeng authored
      [SPARK-14077][ML][FOLLOW-UP] Revert change for NB Model's Load to maintain compatibility with the model stored before 2.0
      
      ## What changes were proposed in this pull request?
      Revert change for NB Model's Load to maintain compatibility with the model stored before 2.0
      
      ## How was this patch tested?
      local build
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #15313 from zhengruifeng/revert_save_load.
      8e491af5
    • Zheng RuiFeng's avatar
      [SPARK-14077][ML] Refactor NaiveBayes to support weighted instances · 1fad5596
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      1. Support weighted data.
      2. Use Dataset/DataFrame instead of RDD.
      3. Make MLlib a wrapper that calls ML.
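
      A hedged usage sketch of the weighted, DataFrame-based API (assumes a running `SparkSession` named `spark`; the column names are illustrative):

      ```scala
      import org.apache.spark.ml.classification.NaiveBayes
      import org.apache.spark.ml.linalg.Vectors

      val train = spark.createDataFrame(Seq(
        (0.0, 1.0, Vectors.dense(0.0, 1.0)),
        (1.0, 3.0, Vectors.dense(1.0, 0.0))   // this instance counts three times as much
      )).toDF("label", "weight", "features")

      val model = new NaiveBayes()
        .setWeightCol("weight")   // instance weights, the point of this change
        .fit(train)

      model.transform(train).select("label", "prediction").show()
      ```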
      
      ## How was this patch tested?
      local manual tests in spark-shell
      unit tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #12819 from zhengruifeng/weighted_nb.
      1fad5596
  3. Sep 29, 2016
    • Herman van Hovell's avatar
      [SPARK-17717][SQL] Add exist/find methods to Catalog. · 74ac1c43
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      The current user facing catalog does not implement methods for checking object existence or finding objects. You could theoretically do this using the `list*` commands, but this is rather cumbersome and can actually be costly when there are many objects. This PR adds `exists*` and `find*` methods for Databases, Tables and Functions.
      
      ## How was this patch tested?
      Added tests to `org.apache.spark.sql.internal.CatalogSuite`
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15301 from hvanhovell/SPARK-17717.
      74ac1c43
    • Bryan Cutler's avatar
      [SPARK-17697][ML] Fixed bug in summary calculations that pattern match against... · 2f739567
      Bryan Cutler authored
      [SPARK-17697][ML] Fixed bug in summary calculations that pattern match against label without casting
      
      ## What changes were proposed in this pull request?
      When calling LogisticRegression.evaluate or GeneralizedLinearRegression.evaluate on a Dataset whose label is not of double type, the summary calculations pattern match against a double and throw a MatchError. This fix casts the label column to DoubleType to ensure there is no MatchError.
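
      A hedged sketch of the idea behind the fix (not the exact code path): cast the label column up front so that downstream code matching `case Row(label: Double, ...)` cannot fail.

      ```scala
      import org.apache.spark.sql.functions.col
      import org.apache.spark.sql.types.DoubleType

      // `dataset` is assumed to be any DataFrame with a numeric "label" column.
      val casted = dataset.withColumn("label", col("label").cast(DoubleType))
      ```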
      
      ## How was this patch tested?
      Added unit tests to call evaluate with a dataset that has Label as other numeric types.
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #15288 from BryanCutler/binaryLOR-numericCheck-SPARK-17697.
      2f739567
    • Dongjoon Hyun's avatar
      [SPARK-17412][DOC] All test should not be run by `root` or any admin user · 39eb3bb1
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      `FsHistoryProviderSuite` fails if the `root` user runs it. The test case **SPARK-3697: ignore directories that cannot be read** depends on `setReadable(false, false)` to make test data files unreadable and expects the number of accessible files to be 1. But `root` can access all files, so it returns 2.
      
      This PR adds the assumption explicitly to the documentation, in `building-spark.md`.
      
      ## How was this patch tested?
      
      This is a documentation change.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #15291 from dongjoon-hyun/SPARK-17412.
      39eb3bb1
    • Imran Rashid's avatar
      [SPARK-17676][CORE] FsHistoryProvider should ignore hidden files · 3993ebca
      Imran Rashid authored
      ## What changes were proposed in this pull request?
      
      FsHistoryProvider was writing a hidden file (to check the fs's clock).
      Even though it deleted the file immediately, sometimes another thread
      would try to scan the files on the fs in between, and an error message
      would be logged that was very misleading for the end user.
      (The logged error was harmless, though.)
      
      ## How was this patch tested?
      
      I added one unit test, but to be clear, that test was passing before.  The actual change in behavior in that test is just logging (after the change, there is no more logged error), which I just manually verified.
      
      Author: Imran Rashid <irashid@cloudera.com>
      
      Closes #15250 from squito/SPARK-17676.
      3993ebca
    • Bjarne Fruergaard's avatar
      [SPARK-17721][MLLIB][ML] Fix for multiplying transposed SparseMatrix with SparseVector · 29396e7d
      Bjarne Fruergaard authored
      ## What changes were proposed in this pull request?
      
      * changes the implementation of gemv with transposed SparseMatrix and SparseVector both in mllib-local and mllib (identical)
      * adds a test that was failing before this change, but succeeds with these changes.
      
      The problem in the previous implementation was that it only incremented `i`, the position enumerating the columns of a row of the SparseMatrix, when the row index of the vector matched the column index of the SparseMatrix. In cases where a particular row of the SparseMatrix has non-zero values at column indices lower than the corresponding non-zero row indices of the SparseVector, the non-zero values of the SparseVector are enumerated without ever matching the column index at position `i`, and the remaining column indices i+1, ..., indEnd-1 are never attempted. The test cases in this PR illustrate this issue.
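
      A hedged sketch of the corrected inner loop: a dot product between one sparse row of the matrix and the sparse vector, advancing whichever sorted index list is behind (the buggy code only advanced the matrix position on an exact match).

      ```scala
      def sparseDot(rowIndices: Array[Int], rowValues: Array[Double],
                    vecIndices: Array[Int], vecValues: Array[Double]): Double = {
        var i = 0    // position in the matrix row
        var k = 0    // position in the vector
        var sum = 0.0
        while (i < rowIndices.length && k < vecIndices.length) {
          if (rowIndices(i) == vecIndices(k)) {
            sum += rowValues(i) * vecValues(k)
            i += 1; k += 1
          } else if (rowIndices(i) < vecIndices(k)) {
            i += 1   // matrix entry has no matching vector entry: skip it
          } else {
            k += 1   // vector entry has no matching matrix entry: skip it
          }
        }
        sum
      }

      // Row nonzeros at columns 0 and 3; vector nonzeros at indices 1 and 3: only index 3 matches.
      println(sparseDot(Array(0, 3), Array(2.0, 4.0), Array(1, 3), Array(5.0, 6.0)))  // 24.0
      ```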
      
      ## How was this patch tested?
      
      I have run the specific `gemv` tests in both mllib-local and mllib. I am currently still running `./dev/run-tests`.
      
      ## ___
      As per instructions, I hereby state that this is my original work and that I license the work to the project (Apache Spark) under the project's open source license.
      
      Mentioning dbtsai, viirya and brkyvz whom I can see have worked/authored on these parts before.
      
      Author: Bjarne Fruergaard <bwahlgreen@gmail.com>
      
      Closes #15296 from bwahlgreen/bugfix-spark-17721.
      29396e7d
    • Dongjoon Hyun's avatar
      [SPARK-17612][SQL] Support `DESCRIBE table PARTITION` SQL syntax · 4ecc648a
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This PR implements the `DESCRIBE table PARTITION` SQL syntax again. It was supported until Spark 1.6.2, but has been dropped since 2.0.0.
      
      **Spark 1.6.2**
      ```scala
      scala> sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
      res1: org.apache.spark.sql.DataFrame = [result: string]
      
      scala> sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
      res2: org.apache.spark.sql.DataFrame = [result: string]
      
      scala> sql("DESC partitioned_table PARTITION (c='Us', d=1)").show(false)
      +----------------------------------------------------------------+
      |result                                                          |
      +----------------------------------------------------------------+
      |a                      string                                   |
      |b                      int                                      |
      |c                      string                                   |
      |d                      string                                   |
      |                                                                |
      |# Partition Information                                         |
      |# col_name             data_type               comment          |
      |                                                                |
      |c                      string                                   |
      |d                      string                                   |
      +----------------------------------------------------------------+
      ```
      
      **Spark 2.0**
      - **Before**
      ```scala
      scala> sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
      res0: org.apache.spark.sql.DataFrame = []
      
      scala> sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
      res1: org.apache.spark.sql.DataFrame = []
      
      scala> sql("DESC partitioned_table PARTITION (c='Us', d=1)").show(false)
      org.apache.spark.sql.catalyst.parser.ParseException:
      Unsupported SQL statement
      ```
      
      - **After**
      ```scala
      scala> sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
      res0: org.apache.spark.sql.DataFrame = []
      
      scala> sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
      res1: org.apache.spark.sql.DataFrame = []
      
      scala> sql("DESC partitioned_table PARTITION (c='Us', d=1)").show(false)
      +-----------------------+---------+-------+
      |col_name               |data_type|comment|
      +-----------------------+---------+-------+
      |a                      |string   |null   |
      |b                      |int      |null   |
      |c                      |string   |null   |
      |d                      |string   |null   |
      |# Partition Information|         |       |
      |# col_name             |data_type|comment|
      |c                      |string   |null   |
      |d                      |string   |null   |
      +-----------------------+---------+-------+
      
      scala> sql("DESC EXTENDED partitioned_table PARTITION (c='Us', d=1)").show(100,false)
      +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+-------+
      |col_name                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |data_type|comment|
      +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+-------+
      |a                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |string   |null   |
      |b                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |int      |null   |
      |c                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |string   |null   |
      |d                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |string   |null   |
      |# Partition Information                                                                                                                                                                                                                                                                                                                                                                                                                                                            |         |       |
      |# col_name                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |data_type|comment|
      |c                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |string   |null   |
      |d                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |string   |null   |
      |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |         |       |
      |Detailed Partition Information CatalogPartition(
              Partition Values: [Us, 1]
              Storage(Location: file:/Users/dhyun/SPARK-17612-DESC-PARTITION/spark-warehouse/partitioned_table/c=Us/d=1, InputFormat: org.apache.hadoop.mapred.TextInputFormat, OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, Serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Properties: [serialization.format=1])
              Partition Parameters:{transient_lastDdlTime=1475001066})|         |       |
      +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+-------+
      
      scala> sql("DESC FORMATTED partitioned_table PARTITION (c='Us', d=1)").show(100,false)
      +--------------------------------+---------------------------------------------------------------------------------------+-------+
      |col_name                        |data_type                                                                              |comment|
      +--------------------------------+---------------------------------------------------------------------------------------+-------+
      |a                               |string                                                                                 |null   |
      |b                               |int                                                                                    |null   |
      |c                               |string                                                                                 |null   |
      |d                               |string                                                                                 |null   |
      |# Partition Information         |                                                                                       |       |
      |# col_name                      |data_type                                                                              |comment|
      |c                               |string                                                                                 |null   |
      |d                               |string                                                                                 |null   |
      |                                |                                                                                       |       |
      |# Detailed Partition Information|                                                                                       |       |
      |Partition Value:                |[Us, 1]                                                                                |       |
      |Database:                       |default                                                                                |       |
      |Table:                          |partitioned_table                                                                      |       |
      |Location:                       |file:/Users/dhyun/SPARK-17612-DESC-PARTITION/spark-warehouse/partitioned_table/c=Us/d=1|       |
      |Partition Parameters:           |                                                                                       |       |
      |  transient_lastDdlTime         |1475001066                                                                             |       |
      |                                |                                                                                       |       |
      |# Storage Information           |                                                                                       |       |
      |SerDe Library:                  |org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe                                     |       |
      |InputFormat:                    |org.apache.hadoop.mapred.TextInputFormat                                               |       |
      |OutputFormat:                   |org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat                             |       |
      |Compressed:                     |No                                                                                     |       |
      |Storage Desc Parameters:        |                                                                                       |       |
      |  serialization.format          |1                                                                                      |       |
      +--------------------------------+---------------------------------------------------------------------------------------+-------+
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins tests with a new testcase.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #15168 from dongjoon-hyun/SPARK-17612.
      4ecc648a
    • Liang-Chi Hsieh's avatar
      [SPARK-17653][SQL] Remove unnecessary distincts in multiple unions · 566d7f28
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      Currently, for `Union [Distinct]`, a `Distinct` operator is required on top of `Union`. When there are adjacent `Union [Distinct]` operations, multiple `Distinct` operators appear in the query plan.
      
      E.g.,
      
      For a query like: select 1 a union select 2 b union select 3 c
      
      Before this patch, its physical plan looks like:
      
          *HashAggregate(keys=[a#13], functions=[])
          +- Exchange hashpartitioning(a#13, 200)
             +- *HashAggregate(keys=[a#13], functions=[])
                +- Union
                   :- *HashAggregate(keys=[a#13], functions=[])
                   :  +- Exchange hashpartitioning(a#13, 200)
                   :     +- *HashAggregate(keys=[a#13], functions=[])
                   :        +- Union
                   :           :- *Project [1 AS a#13]
                   :           :  +- Scan OneRowRelation[]
                   :           +- *Project [2 AS b#14]
                   :              +- Scan OneRowRelation[]
                   +- *Project [3 AS c#15]
                      +- Scan OneRowRelation[]
      
      Only the top distinct should be necessary.
      
      After this patch, the physical plan looks like:
      
          *HashAggregate(keys=[a#221], functions=[], output=[a#221])
          +- Exchange hashpartitioning(a#221, 5)
             +- *HashAggregate(keys=[a#221], functions=[], output=[a#221])
                +- Union
                   :- *Project [1 AS a#221]
                   :  +- Scan OneRowRelation[]
                   :- *Project [2 AS b#222]
                   :  +- Scan OneRowRelation[]
                   +- *Project [3 AS c#223]
                      +- Scan OneRowRelation[]
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #15238 from viirya/remove-extra-distinct-union.
      566d7f28
    • Michael Armbrust's avatar
      [SPARK-17699] Support for parsing JSON string columns · fe33121a
      Michael Armbrust authored
      Spark SQL has great support for reading text files that contain JSON data. However, in many cases the JSON data is just one column amongst others. This is particularly true when reading from sources such as Kafka. This PR adds a new function, `from_json`, that converts a string column into a nested `StructType` with a user-specified schema.
      
      Example usage:
      ```scala
      val df = Seq("""{"a": 1}""").toDS()
      val schema = new StructType().add("a", IntegerType)
      
      df.select(from_json($"value", schema) as 'json) // => [json: <a: int>]
      ```
      
      This PR adds support for java, scala and python.  I leveraged our existing JSON parsing support by moving it into catalyst (so that we could define expressions using it).  I left SQL out for now, because I'm not sure how users would specify a schema.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #15274 from marmbrus/jsonParser.
      fe33121a
    • Brian Cho's avatar
      [SPARK-17715][SCHEDULER] Make task launch logs DEBUG · 027dea8f
      Brian Cho authored
      ## What changes were proposed in this pull request?
      
      Ramp down the task launch logs from INFO to DEBUG. Task launches can happen orders of magnitude more often than executor registrations, so the logs are easier to handle if the two use different levels. For larger jobs, there can be hundreds of thousands of task launches, which makes the driver log huge.
      
      ## How was this patch tested?
      
      No tests, as this is a trivial change.
      
      Author: Brian Cho <bcho@fb.com>
      
      Closes #15290 from dafrista/ramp-down-task-logging.
      027dea8f
    • Gang Wu's avatar
      [SPARK-17672] Spark 2.0 history server web Ui takes too long for a single application · cb87b3ce
      Gang Wu authored
      Added a new API, getApplicationInfo(appId: String), to ApplicationHistoryProvider and SparkUI to get app info. With this change, FsHistoryProvider can fetch a single application's info in O(1) time, compared to the O(n) Iterator.find() lookup used before.
      
      Both the ApplicationCache and OneApplicationResource classes adopt this new API.
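
      An illustrative sketch of the O(1) lookup (simplified types; the real provider keeps richer application records):

      ```scala
      case class ApplicationInfo(id: String, name: String)

      class HistoryProvider(apps: Map[String, ApplicationInfo]) {
        // Direct map lookup instead of iterating over all applications.
        def getApplicationInfo(appId: String): Option[ApplicationInfo] = apps.get(appId)
      }

      val provider = new HistoryProvider(Map("app-001" -> ApplicationInfo("app-001", "word count")))
      println(provider.getApplicationInfo("app-001"))  // Some(ApplicationInfo(app-001,word count))
      println(provider.getApplicationInfo("missing"))  // None
      ```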
      
      Manual tests.
      
      Author: Gang Wu <wgtmac@uber.com>
      
      Closes #15247 from wgtmac/SPARK-17671.
      cb87b3ce
    • Imran Rashid's avatar
      [SPARK-17648][CORE] TaskScheduler really needs offers to be an IndexedSeq · 7f779e74
      Imran Rashid authored
      ## What changes were proposed in this pull request?
      
      The Seq[WorkerOffer] is accessed by index, so it really should be an
      IndexedSeq; otherwise an O(n) operation becomes O(n^2). In practice
      this hasn't been an issue because, where these offers are generated, the call
      to `.toSeq` just happens to create an IndexedSeq anyway. I got bitten by
      this in performance tests I was doing, and it's better for the types to be
      more precise so that, e.g., a change in Scala doesn't destroy performance.
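
      A small illustration of the cost difference: `List#apply(i)` walks the list from its head, so indexing inside a loop is quadratic overall, while `Vector` indexing is effectively constant time.

      ```scala
      case class WorkerOffer(executorId: String, host: String, cores: Int)

      val asList: Seq[WorkerOffer] = List.tabulate(10000)(i => WorkerOffer(s"exec-$i", "host", 4))
      val asIndexed: IndexedSeq[WorkerOffer] = Vector.tabulate(10000)(i => WorkerOffer(s"exec-$i", "host", 4))

      def sumCores(offers: Seq[WorkerOffer]): Int = {
        val n = offers.length
        var total = 0
        var i = 0
        while (i < n) {
          total += offers(i).cores  // offers(i) walks from the head when offers is a List
          i += 1
        }
        total
      }

      println(sumCores(asList))     // correct, but quadratic in the number of offers
      println(sumCores(asIndexed))  // same result, linear time
      ```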
      
      ## How was this patch tested?
      
      Unit tests via jenkins.
      
      Author: Imran Rashid <irashid@cloudera.com>
      
      Closes #15221 from squito/SPARK-17648.
      7f779e74
    • José Hiram Soltren's avatar
      [DOCS] Reorganize explanation of Accumulators and Broadcast Variables · 95820049
      José Hiram Soltren authored
      ## What changes were proposed in this pull request?
      
      The discussion of the interaction of Accumulators and Broadcast Variables should logically follow the discussion on Checkpointing. As currently written, this section discusses Checkpointing before it is formally introduced. To remedy this:
      
       - Rename this section to "Accumulators, Broadcast Variables, and Checkpoints", and
       - Move this section after "Checkpointing".
      
      ## How was this patch tested?
      
      Ran

      $ SKIP_API=1 jekyll build

      and verified the changes in a web browser pointed at docs/_site/index.html.
      
      Author: José Hiram Soltren <jose@cloudera.com>
      
      Closes #15281 from jsoltren/doc-changes.
      95820049
    • Takeshi YAMAMURO's avatar
      [MINOR][DOCS] Fix th doc. of spark-streaming with kinesis · b2e9731c
      Takeshi YAMAMURO authored
      ## What changes were proposed in this pull request?
      This PR just fixes the documentation for the `spark-kinesis-integration` module.
      Since `SPARK-17418` prevented all the Kinesis artifacts (including the Kinesis example code)
      from being published, `bin/run-example streaming.KinesisWordCountASL` and `bin/run-example streaming.JavaKinesisWordCountASL` do not work.
      Instead, it fetches the Kinesis jar from Spark Packages.
      
      Author: Takeshi YAMAMURO <linguin.m.s@gmail.com>
      
      Closes #15260 from maropu/DocFixKinesis.
      b2e9731c
    • Sean Owen's avatar
      [SPARK-17614][SQL] sparkSession.read() .jdbc(***) use the sql syntax "where... · b35b0dbb
      Sean Owen authored
      [SPARK-17614][SQL] sparkSession.read() .jdbc(***) use the sql syntax "where 1=0" that Cassandra does not support
      
      ## What changes were proposed in this pull request?
      
      Use dialect's table-exists query rather than hard-coded WHERE 1=0 query
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15196 from srowen/SPARK-17614.
      b35b0dbb
    • Yanbo Liang's avatar
      [SPARK-17704][ML][MLLIB] ChiSqSelector performance improvement. · f7082ac1
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Several performance improvements for ```ChiSqSelector```:
      1. Keep ```selectedFeatures``` in ascending order.
      ```ChiSqSelectorModel.transform``` needs ```selectedFeatures``` to be ordered to make predictions. We should sort it when training the model rather than when making predictions, since users usually train a model once and use it for prediction many times.
      2. When training an ```fpr```-type ```ChiSqSelectorModel```, it is not necessary to sort the ChiSq test results by statistic.
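
      A hedged sketch of the first optimization: select the feature indices once at training time and store them in ascending order, so transform() can assume sorted indices on every call.

      ```scala
      // (featureIndex, p-value) pairs from the ChiSq test; the values here are made up.
      val chiSqResults: Array[(Int, Double)] = Array((7, 0.01), (2, 0.20), (5, 0.05))
      val numTopFeatures = 2

      // Pick the top features once, then keep them sorted ascending in the model.
      val selectedFeatures: Array[Int] =
        chiSqResults.sortBy(_._2).take(numTopFeatures).map(_._1).sorted

      println(selectedFeatures.mkString(", "))  // 5, 7
      ```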
      
      ## How was this patch tested?
      Existing unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15277 from yanboliang/spark-17704.
      f7082ac1
    • Yanbo Liang's avatar
      [SPARK-16356][FOLLOW-UP][ML] Enforce ML test of exception for local/distributed Dataset. · a19a1bb5
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      #14035 added ```testImplicits``` to ML unit tests and promoted ```toDF()```, but left one minor issue in ```VectorIndexerSuite```. If we create the DataFrame via ```Seq(...).toDF()```, it throws a different error/exception than ```sc.parallelize(Seq(...)).toDF()``` for one of the test cases.
      After an in-depth study, I found it was caused by the different behavior of local and distributed Datasets when a UDF fails at ```assert```. If the data is a local Dataset, it throws ```AssertionError``` directly; if the data is a distributed Dataset, it throws ```SparkException```, which wraps the ```AssertionError```. I think we should enforce this test to cover both cases.
      
      ## How was this patch tested?
      Unit test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #15261 from yanboliang/spark-16356.
      a19a1bb5
  4. Sep 28, 2016
  5. Sep 27, 2016
    • hyukjinkwon's avatar
      [SPARK-17499][SPARKR][FOLLOWUP] Check null first for layers in spark.mlp to... · 4a833956
      hyukjinkwon authored
      [SPARK-17499][SPARKR][FOLLOWUP] Check null first for layers in spark.mlp to avoid warnings in test results
      
      ## What changes were proposed in this pull request?
      
      Some tests in `test_mllib.r` are as below:
      
      ```r
      expect_error(spark.mlp(df, layers = NULL), "layers must be a integer vector with length > 1.")
      expect_error(spark.mlp(df, layers = c()), "layers must be a integer vector with length > 1.")
      ```
      
      The problem is, `is.na` is internally called via `na.omit` in `spark.mlp` which causes warnings as below:
      
      ```
      Warnings -----------------------------------------------------------------------
      1. spark.mlp (test_mllib.R#400) - is.na() applied to non-(list or vector) of type 'NULL'
      
      2. spark.mlp (test_mllib.R#401) - is.na() applied to non-(list or vector) of type 'NULL'
      ```
      
      ## How was this patch tested?
      
      Manually tested. Also, Jenkins tests and AppVeyor.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15232 from HyukjinKwon/remove-warnnings.
      4a833956
    • Josh Rosen's avatar
      [SPARK-17666] Ensure that RecordReaders are closed by data source file scans · b03b4adf
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
      This patch addresses a potential cause of resource leaks in data source file scans. As reported in [SPARK-17666](https://issues.apache.org/jira/browse/SPARK-17666), tasks which do not fully consume their input may cause file handles / network connections (e.g. S3 connections) to be leaked. Spark's `NewHadoopRDD` uses a TaskContext callback to [close its record readers](https://github.com/apache/spark/blame/master/core/src/main/scala/org/apache/spark/rdd/NewHadoopRDD.scala#L208), but the new data source file scans only close record readers once their iterators are fully consumed.
      
      This patch modifies `RecordReaderIterator` and `HadoopFileLinesReader` to add `close()` methods and modifies all six implementations of `FileFormat.buildReader()` to register TaskContext task completion callbacks to guarantee that cleanup is eventually performed.
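
      A hedged sketch of the callback pattern (the `FakeRecordReader` class is a hypothetical stand-in for the Closeable reader produced by FileFormat.buildReader): register cleanup with the running task so the reader is closed even when the consumer abandons iteration early.

      ```scala
      import org.apache.spark.TaskContext
      import org.apache.spark.util.TaskCompletionListener

      class FakeRecordReader extends java.io.Closeable {
        def read(): Option[String] = None
        override def close(): Unit = println("reader closed")
      }

      def buildReaderWithCleanup(): FakeRecordReader = {
        val reader = new FakeRecordReader
        // TaskContext.get() is null outside a running task, hence the Option guard.
        Option(TaskContext.get()).foreach { ctx =>
          ctx.addTaskCompletionListener(new TaskCompletionListener {
            override def onTaskCompletion(context: TaskContext): Unit = reader.close()
          })
        }
        reader
      }
      ```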
      
      ## How was this patch tested?
      
      Tested manually for now.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15245 from JoshRosen/SPARK-17666-close-recordreader.
      b03b4adf
    • Liang-Chi Hsieh's avatar
      [SPARK-17056][CORE] Fix a wrong assert regarding unroll memory in MemoryStore · e7bce9e1
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      There is an assert in MemoryStore's putIteratorAsValues method that is used to check that unroll memory is not released too much. This assert looks wrong.
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      
      Closes #14642 from viirya/fix-unroll-memory.
      e7bce9e1
    • Josh Rosen's avatar
      [SPARK-17618] Guard against invalid comparisons between UnsafeRow and other formats · 2f84a686
      Josh Rosen authored
      This patch ports changes from #15185 to Spark 2.x. That patch fixed a correctness bug in Spark 1.6.x which was caused by an invalid `equals()` comparison between an `UnsafeRow` and another row of a different format. Spark 2.x is not affected by that specific correctness bug, but it can still reap the error-prevention benefits of that patch's changes, which modify `UnsafeRow.equals()` to throw an IllegalArgumentException if it is called with an object that is not an `UnsafeRow`.
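
      A hedged sketch of the guard described above (a simplified stand-in, not the actual UnsafeRow source): equality compares byte contents against another UnsafeRow and rejects any other row format instead of silently returning false.

      ```scala
      class MyUnsafeRow(val bytes: Array[Byte]) {
        override def equals(other: Any): Boolean = other match {
          case o: MyUnsafeRow => java.util.Arrays.equals(bytes, o.bytes)
          case null => false
          case _ =>
            throw new IllegalArgumentException(
              s"Cannot compare UnsafeRow to ${other.getClass.getName}")
        }
        override def hashCode(): Int = java.util.Arrays.hashCode(bytes)
      }
      ```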
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15265 from JoshRosen/SPARK-17618-master.
      2f84a686
    • Reynold Xin's avatar
      [SPARK-17677][SQL] Break WindowExec.scala into multiple files · 67c73052
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      As of Spark 2.0, all the window function execution code is in WindowExec.scala. This file is pretty large (over 1k lines) and mixes a lot of different abstractions. This patch creates a new package, sql.execution.window, moves WindowExec.scala into it, and breaks WindowExec.scala into multiple, more maintainable pieces:
      
      - AggregateProcessor.scala
      - BoundOrdering.scala
      - RowBuffer.scala
      - WindowExec
      - WindowFunctionFrame.scala
      
      ## How was this patch tested?
      This patch mostly moves code around, and should not change any existing test coverage.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #15252 from rxin/SPARK-17677.
      67c73052
    • gatorsmile's avatar
      [SPARK-17660][SQL] DESC FORMATTED for VIEW Lacks View Definition · 2ab24a7b
      gatorsmile authored
      ### What changes were proposed in this pull request?
      Before this PR, `DESC FORMATTED` does not have a section for the view definition. We should add one for permanent views, as Hive does.
      
      ```
      +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------+-------+
      |col_name                    |data_type                                                                                                                            |comment|
      +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------+-------+
      |a                           |int                                                                                                                                  |null   |
      |                            |                                                                                                                                     |       |
      |# Detailed Table Information|                                                                                                                                     |       |
      |Database:                   |default                                                                                                                              |       |
      |Owner:                      |xiaoli                                                                                                                               |       |
      |Create Time:                |Sat Sep 24 21:46:19 PDT 2016                                                                                                         |       |
      |Last Access Time:           |Wed Dec 31 16:00:00 PST 1969                                                                                                         |       |
      |Location:                   |                                                                                                                                     |       |
      |Table Type:                 |VIEW                                                                                                                                 |       |
      |Table Parameters:           |                                                                                                                                     |       |
      |  transient_lastDdlTime     |1474778779                                                                                                                           |       |
      |                            |                                                                                                                                     |       |
      |# Storage Information       |                                                                                                                                     |       |
      |SerDe Library:              |org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe                                                                                   |       |
      |InputFormat:                |org.apache.hadoop.mapred.SequenceFileInputFormat                                                                                     |       |
      |OutputFormat:               |org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat                                                                            |       |
      |Compressed:                 |No                                                                                                                                   |       |
      |Storage Desc Parameters:    |                                                                                                                                     |       |
      |  serialization.format      |1                                                                                                                                    |       |
      |                            |                                                                                                                                     |       |
      |# View Information          |                                                                                                                                     |       |
      |View Original Text:         |SELECT * FROM tbl                                                                                                                    |       |
      |View Expanded Text:         |SELECT `gen_attr_0` AS `a` FROM (SELECT `gen_attr_0` FROM (SELECT `a` AS `gen_attr_0` FROM `default`.`tbl`) AS gen_subquery_0) AS tbl|       |
      +----------------------------+-------------------------------------------------------------------------------------------------------------------------------------+-------+
      ```
      
      ### How was this patch tested?
      Added a test case
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15234 from gatorsmile/descFormattedView.
      2ab24a7b
    • Reynold Xin's avatar
      [SPARK-17682][SQL] Mark children as final for unary, binary, leaf expressions and plan nodes · 120723f9
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch marks the children method as final in unary, binary, and leaf expressions and plan nodes (both logical plan and physical plan), as brought up in http://apache-spark-developers-list.1001551.n3.nabble.com/Should-LeafExpression-have-children-final-override-like-Nondeterministic-td19104.html
      
      ## How was this patch tested?
      This is a simple modifier change and has no impact on test coverage.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #15256 from rxin/SPARK-17682.
      120723f9