  1. Apr 10, 2017
    • root's avatar
      [SPARK-18555][SQL] DataFrameNaFunctions.fill messes up original values in long integer columns · b26f2c2c
      root authored
      
      ## What changes were proposed in this pull request?
      
          When DataSet.na.fill(0) is used on a DataSet that has a long-typed column, it can change the original long values.

          The reason is that the fill function's parameter type is Double, so numeric columns are always cast to double (`fillCol[Double](f, value)`), which loses precision for large long values.
      ```
        def fill(value: Double, cols: Seq[String]): DataFrame = {
          val columnEquals = df.sparkSession.sessionState.analyzer.resolver
          val projections = df.schema.fields.map { f =>
            // Only fill if the column is part of the cols list.
            if (f.dataType.isInstanceOf[NumericType] && cols.exists(col => columnEquals(f.name, col))) {
              fillCol[Double](f, value)
            } else {
              df.col(f.name)
            }
          }
          df.select(projections : _*)
        }
      ```
      
       For example:
      ```
      scala> val df = Seq[(Long, Long)]((1, 2), (-1, -2), (9123146099426677101L, 9123146560113991650L)).toDF("a", "b")
      df: org.apache.spark.sql.DataFrame = [a: bigint, b: bigint]
      
      scala> df.show
      +-------------------+-------------------+
      |                  a|                  b|
      +-------------------+-------------------+
      |                  1|                  2|
      |                 -1|                 -2|
      |9123146099426677101|9123146560113991650|
      +-------------------+-------------------+
      
      scala> df.na.fill(0).show
      +-------------------+-------------------+
      |                  a|                  b|
      +-------------------+-------------------+
      |                  1|                  2|
      |                 -1|                 -2|
      |9123146099426676736|9123146560113991680|
      +-------------------+-------------------+
       ```
      
       The original values changed, which is not the expected result:
      ```
       9123146099426677101 -> 9123146099426676736
       9123146560113991650 -> 9123146560113991680
      ```
      
      ## How was this patch tested?
      
      unit test added.
      
      Author: root <root@iZbp1gsnrlfzjxh82cz80vZ.(none)>
      
      Closes #15994 from windpiger/nafillMissupOriginalValue.
      
      (cherry picked from commit 508de38c)
      Signed-off-by: default avatarDB Tsai <dbtsai@dbtsai.com>
      b26f2c2c
    • Shixiong Zhu's avatar
      [SPARK-20285][TESTS] Increase the pyspark streaming test timeout to 30 seconds · 489c1f35
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Saw the following failure locally:
      
      ```
      Traceback (most recent call last):
        File "/home/jenkins/workspace/python/pyspark/streaming/tests.py", line 351, in test_cogroup
          self._test_func(input, func, expected, sort=True, input2=input2)
        File "/home/jenkins/workspace/python/pyspark/streaming/tests.py", line 162, in _test_func
          self.assertEqual(expected, result)
      AssertionError: Lists differ: [[(1, ([1], [2])), (2, ([1], [... != []
      
      First list contains 3 additional elements.
      First extra element 0:
      [(1, ([1], [2])), (2, ([1], [])), (3, ([1], []))]
      
      + []
      - [[(1, ([1], [2])), (2, ([1], [])), (3, ([1], []))],
      -  [(1, ([1, 1, 1], [])), (2, ([1], [])), (4, ([], [1]))],
      -  [('', ([1, 1], [1, 2])), ('a', ([1, 1], [1, 1])), ('b', ([1], [1]))]]
      ```
      
      It also happened on Jenkins: http://spark-tests.appspot.com/builds/spark-branch-2.1-test-sbt-hadoop-2.7/120
      
      
      
       This happens because the timeout is not enough when the machine is overloaded. This PR just increases the timeout to 30 seconds.
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #17597 from zsxwing/SPARK-20285.
      
      (cherry picked from commit f9a50ba2)
      Signed-off-by: default avatarShixiong Zhu <shixiong@databricks.com>
      489c1f35
    • Bogdan Raducanu's avatar
      [SPARK-20280][CORE] FileStatusCache Weigher integer overflow · bc7304e1
      Bogdan Raducanu authored
      
      ## What changes were proposed in this pull request?
      
       Weigher.weigh needs to return an Int, but it is possible for an Array[FileStatus] to have a size greater than Int.MaxValue. To avoid this, the size is scaled down by a factor of 32, and the maximumWeight of the cache is scaled down by the same factor.
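       An illustrative Scala sketch of the scaling idea (the names and the example numbers are assumptions, not the actual Spark code):
       ```scala
       // Guava's Weigher.weigh must return an Int, so scale the raw (Long) weight down by
       // a constant factor; the cache's maximumWeight is divided by the same factor so the
       // effective memory budget is unchanged.
       val weightScale = 32

       def scaledWeight(rawWeightBytes: Long): Int =
         math.min(rawWeightBytes / weightScale, Int.MaxValue.toLong).toInt

       // Example: a raw weight of ~1.2e10 would overflow Int, but scales down to ~3.75e8.
       scaledWeight(12L * 1000 * 1000 * 1000)
       ```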
      
      ## How was this patch tested?
      New test in FileIndexSuite
      
      Author: Bogdan Raducanu <bogdan@databricks.com>
      
      Closes #17591 from bogdanrdc/SPARK-20280.
      
      (cherry picked from commit f6dd8e0e)
      Signed-off-by: default avatarHerman van Hovell <hvanhovell@databricks.com>
      bc7304e1
  2. Apr 09, 2017
    • Reynold Xin's avatar
      [SPARK-20264][SQL] asm should be non-test dependency in sql/core · 1a73046b
      Reynold Xin authored
      
      ## What changes were proposed in this pull request?
       The sql/core module currently declares asm as a test-scope dependency. It should actually be a normal dependency, since the core module defines it and sql/core picks it up transitively. This occasionally confuses IntelliJ.
      
      ## How was this patch tested?
      N/A - This is a build change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17574 from rxin/SPARK-20264.
      
      (cherry picked from commit 7bfa05e0)
      Signed-off-by: default avatarXiao Li <gatorsmile@gmail.com>
      1a73046b
    • Vijay Ramesh's avatar
      [SPARK-20260][MLLIB] String interpolation required for error message · 43a7fcad
      Vijay Ramesh authored
      ## What changes were proposed in this pull request?
      This error message doesn't get properly formatted because of a missing `s`.  Currently the error looks like:
      
      ```
      Caused by: java.lang.IllegalArgumentException: requirement failed: indices should be one-based and in ascending order; found current=$current, previous=$previous; line="$line"
      ```
      (note the literal `$current` instead of the interpolated value)
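       For illustration, a minimal Scala snippet showing the difference the `s` prefix makes:
       ```scala
       val current = 3
       val previous = 5
       // Without the `s` prefix the placeholders are emitted literally:
       println("found current=$current, previous=$previous")
       // With the `s` prefix they are interpolated:
       println(s"found current=$current, previous=$previous")
       ```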
      
      
      Author: Vijay Ramesh <vramesh@demandbase.com>
      
      Closes #17572 from vijaykramesh/master.
      
      (cherry picked from commit 261eaf51)
      Signed-off-by: default avatarSean Owen <sowen@cloudera.com>
      43a7fcad
  3. Apr 07, 2017
    • Reynold Xin's avatar
      [SPARK-20262][SQL] AssertNotNull should throw NullPointerException · 658b3588
      Reynold Xin authored
      
      AssertNotNull currently throws RuntimeException. It should throw NullPointerException, which is more specific.
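       A hedged sketch of the behavioral change (illustrative only, not the actual expression code):
       ```scala
       // Before: a generic RuntimeException; after: the more specific NullPointerException.
       def assertNotNull[T](value: T, fieldPath: String): T = {
         if (value == null) {
           throw new NullPointerException(s"Null value appeared in non-nullable field: $fieldPath")
         }
         value
       }
       ```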
      
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17573 from rxin/SPARK-20262.
      
      (cherry picked from commit e1afc4dc)
      Signed-off-by: default avatarXiao Li <gatorsmile@gmail.com>
      658b3588
    • Wenchen Fan's avatar
      [SPARK-20246][SQL] should not push predicate down through aggregate with... · fc242ccf
      Wenchen Fan authored
      [SPARK-20246][SQL] should not push predicate down through aggregate with non-deterministic expressions
      
      ## What changes were proposed in this pull request?
      
      Similar to `Project`, when `Aggregate` has non-deterministic expressions, we should not push predicate down through it, as it will change the number of input rows and thus change the evaluation result of non-deterministic expressions in `Aggregate`.
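       For illustration, a query of the affected shape (a hedged sketch; the table and column names are made up):
       ```scala
       // The inner aggregate's select list contains a non-deterministic expression (rand()),
       // so the outer filter on the grouping key must not be pushed below the Aggregate:
       // doing so would change the Aggregate's input rows and hence the rand() results.
       spark.range(100).selectExpr("id % 10 AS k", "id AS v").createOrReplaceTempView("t")
       val q = spark.sql(
         "SELECT * FROM (SELECT k, SUM(v) AS s, rand() AS r FROM t GROUP BY k) tmp WHERE k = 1")
       ```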
      
      ## How was this patch tested?
      
      new regression test
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17562 from cloud-fan/filter.
      
      (cherry picked from commit 7577e9c3)
      Signed-off-by: default avatarXiao Li <gatorsmile@gmail.com>
      fc242ccf
    • 郭小龙 10207633's avatar
      [SPARK-20218][DOC][APP-ID] '/applications/[app-id]/stages' in REST API: add description · 77911201
      郭小龙 10207633 authored
      ## What changes were proposed in this pull request?
      
       1. For '/applications/[app-id]/stages' in the REST API, the status parameter should have the description '?status=[active|complete|pending|failed] list only stages in the state.'

       Without this description, users of this API do not know that they can filter the stage list by status.

       2. For '/applications/[app-id]/stages/[stage-id]' in the REST API, remove the redundant description '?status=[active|complete|pending|failed] list only stages in the state.',
       because a single stage is already determined by its stage-id.
      
       code (excerpt from AllStagesResource):
       ```scala
         @GET
         def stageList(@QueryParam("status") statuses: JList[StageStatus]): Seq[StageData] = {
           val listener = ui.jobProgressListener
           val stageAndStatus = AllStagesResource.stagesAndStatus(ui)
           val adjStatuses = {
             if (statuses.isEmpty()) {
               Arrays.asList(StageStatus.values(): _*)
             } else {
               statuses
             }
           }
           // ...
       ```
      
      ## How was this patch tested?
      
      manual tests
      
      
      Author: 郭小龙 10207633 <guo.xiaolong1@zte.com.cn>
      
      Closes #17534 from guoxiaolongzte/SPARK-20218.
      
      (cherry picked from commit 9e0893b5)
      Signed-off-by: default avatarSean Owen <sowen@cloudera.com>
      77911201
  4. Apr 05, 2017
    • Liang-Chi Hsieh's avatar
      [SPARK-20214][ML] Make sure converted csc matrix has sorted indices · fb81a412
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
       `_convert_to_vector` converts a scipy sparse matrix to a csc matrix for initializing `SparseVector`. However, it doesn't guarantee that the converted csc matrix has sorted indices, so a failure happens when you do something like this:
      
          from scipy.sparse import lil_matrix
          lil = lil_matrix((4, 1))
          lil[1, 0] = 1
          lil[3, 0] = 2
          _convert_to_vector(lil.todok())
      
          File "/home/jenkins/workspace/python/pyspark/mllib/linalg/__init__.py", line 78, in _convert_to_vector
            return SparseVector(l.shape[0], csc.indices, csc.data)
          File "/home/jenkins/workspace/python/pyspark/mllib/linalg/__init__.py", line 556, in __init__
            % (self.indices[i], self.indices[i + 1]))
          TypeError: Indices 3 and 1 are not strictly increasing
      
      A simple test can confirm that `dok_matrix.tocsc()` won't guarantee sorted indices:
      
          >>> from scipy.sparse import lil_matrix
          >>> lil = lil_matrix((4, 1))
          >>> lil[1, 0] = 1
          >>> lil[3, 0] = 2
          >>> dok = lil.todok()
          >>> csc = dok.tocsc()
          >>> csc.has_sorted_indices
          0
          >>> csc.indices
          array([3, 1], dtype=int32)
      
       I checked scipy's source code. The only conversions that guarantee sorted indices are `csc_matrix.tocsr()` and `csr_matrix.tocsc()`.
      
      ## How was this patch tested?
      
      Existing tests.
      
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #17532 from viirya/make-sure-sorted-indices.
      
      (cherry picked from commit 12206058)
      Signed-off-by: default avatarJoseph K. Bradley <joseph@databricks.com>
      fb81a412
    • wangzhenhua's avatar
      [SPARK-20223][SQL] Fix typo in tpcds q77.sql · 2b85e059
      wangzhenhua authored
      
      ## What changes were proposed in this pull request?
      
      Fix typo in tpcds q77.sql
      
      ## How was this patch tested?
      
      N/A
      
      Author: wangzhenhua <wangzhenhua@huawei.com>
      
      Closes #17538 from wzhfy/typoQ77.
      
      (cherry picked from commit a2d8d767)
      Signed-off-by: default avatarXiao Li <gatorsmile@gmail.com>
      2b85e059
    • Oliver Köth's avatar
      [SPARK-20042][WEB UI] Fix log page buttons for reverse proxy mode · efc72dcc
      Oliver Köth authored
      
       With spark.ui.reverseProxy=true, full-path URLs like /log point to the master web endpoint, which serves the worker UI as a reverse proxy. To access a REST endpoint on the worker in reverse proxy mode, the leading /proxy/"target"/ part of the base URI must be retained.

       Added logic to log-view.js to handle this, similar to executorspage.js.
      
      Patch was tested manually
      
      Author: Oliver Köth <okoeth@de.ibm.com>
      
      Closes #17370 from okoethibm/master.
      
      (cherry picked from commit 6f09dc70)
      Signed-off-by: default avatarSean Owen <sowen@cloudera.com>
      efc72dcc
  5. Apr 04, 2017
    • Marcelo Vanzin's avatar
      [SPARK-20191][YARN] Create wrapper for RackResolver so tests can override it. · 00c12488
      Marcelo Vanzin authored
      
       The current test code tries to override the RackResolver by setting configuration params, but because the YARN libraries statically initialize the resolver the first time it is used, those configs don't really take effect during Spark tests.
      
      This change adds a wrapper class that easily allows tests to override the
      behavior of the resolver for the Spark code that uses it.
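       A hedged sketch of the wrapper pattern being described (the class and method names are illustrative, not Spark's actual API):
       ```scala
       // Route all rack lookups through a replaceable instance so tests can install a stub
       // instead of relying on YARN's statically initialized RackResolver.
       trait RackResolverLike {
         def resolve(hostName: String): String
       }

       class DefaultRackResolver extends RackResolverLike {
         // In real code this would delegate to org.apache.hadoop.yarn.util.RackResolver.
         override def resolve(hostName: String): String = "/default-rack"
       }

       object Racks {
         @volatile var resolver: RackResolverLike = new DefaultRackResolver
       }

       // In a test:
       // Racks.resolver = new RackResolverLike { override def resolve(h: String) = "/test-rack" }
       ```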
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #17508 from vanzin/SPARK-20191.
      
      (cherry picked from commit 0736980f)
      Signed-off-by: default avatarMarcelo Vanzin <vanzin@cloudera.com>
      00c12488
    • guoxiaolongzte's avatar
      [SPARK-20190][APP-ID] applications//jobs' in rest api,status should be [running|s… · f9546dac
      guoxiaolongzte authored
      …ucceeded|failed|unknown]
      
      ## What changes were proposed in this pull request?
      
       For '/applications/[app-id]/jobs' in the REST API, the status parameter should be '[running|succeeded|failed|unknown]'.
       Currently the documented values are '[complete|succeeded|failed]', but for '/applications/[app-id]/jobs?status=complete' the server returns 'HTTP ERROR 404'.
       Added '?status=running' and '?status=unknown'.
       code (JobExecutionStatus):
       ```java
       public enum JobExecutionStatus {
         RUNNING,
         SUCCEEDED,
         FAILED,
         UNKNOWN;
         // ...
       }
       ```
      
      ## How was this patch tested?
      
       manual tests
      
      
      Author: guoxiaolongzte <guo.xiaolong1@zte.com.cn>
      
      Closes #17507 from guoxiaolongzte/SPARK-20190.
      
      (cherry picked from commit c95fbea6)
      Signed-off-by: default avatarSean Owen <sowen@cloudera.com>
      f9546dac
  6. Apr 03, 2017
    • hyukjinkwon's avatar
      [MINOR][DOCS] Replace non-breaking space to normal spaces that breaks rendering markdown · 77700ea3
      hyukjinkwon authored
       ## What changes were proposed in this pull request?
      
       It seems several non-breaking spaces were inserted into several `.md` files, and they break the rendering of those markdown files.
      
      These are different. For example, this can be checked via `python` as below:
      
      ```python
      >>> " "
      '\xc2\xa0'
      >>> " "
      ' '
      ```
      
      _Note that it seems this PR description automatically replaces non-breaking spaces into normal spaces. Please open a `vi` and copy and paste it into `python` to verify this (do not copy the characters here)._
      
       I checked the output below in Safari and Chrome on Mac OS, and Internet Explorer on Windows 10.
      
      **Before**
      
      ![2017-04-03 12 37 17](https://cloud.githubusercontent.com/assets/6477701/24594655/50aaba02-186a-11e7-80bb-d34b17a3398a.png)
      ![2017-04-03 12 36 57](https://cloud.githubusercontent.com/assets/6477701/24594654/50a855e6-186a-11e7-94e2-661e56544b0f.png)
      
      **After**
      
      ![2017-04-03 12 36 46](https://cloud.githubusercontent.com/assets/6477701/24594657/53c2545c-186a-11e7-9a73-00529afbfd75.png)
       ![2017-04-03 12 36 31](https://cloud.githubusercontent.com/assets/6477701/24594658/53c286c0-186a-11e7-99c9-e66b1f510fe7.png)
      
      ## How was this patch tested?
      
      Manually checking.
      
      These instances were found via
      
      ```
      grep --include=*.scala --include=*.python --include=*.java --include=*.r --include=*.R --include=*.md --include=*.r -r -I " " .
      ```
      
      in Mac OS.
      
       It seems there are several more instances, as below:
      
      ```
      ./docs/sql-programming-guide.md:        │   ├── ...
      ./docs/sql-programming-guide.md:        │   │
      ./docs/sql-programming-guide.md:        │   ├── country=US
      ./docs/sql-programming-guide.md:        │   │   └── data.parquet
      ./docs/sql-programming-guide.md:        │   ├── country=CN
      ./docs/sql-programming-guide.md:        │   │   └── data.parquet
      ./docs/sql-programming-guide.md:        │   └── ...
      ./docs/sql-programming-guide.md:            ├── ...
      ./docs/sql-programming-guide.md:            │
      ./docs/sql-programming-guide.md:            ├── country=US
      ./docs/sql-programming-guide.md:            │   └── data.parquet
      ./docs/sql-programming-guide.md:            ├── country=CN
      ./docs/sql-programming-guide.md:            │   └── data.parquet
      ./docs/sql-programming-guide.md:            └── ...
      ./sql/core/src/test/README.md:│   ├── *.avdl                  # Testing Avro IDL(s)
      ./sql/core/src/test/README.md:│   └── *.avpr                  # !! NO TOUCH !! Protocol files generated from Avro IDL(s)
      ./sql/core/src/test/README.md:│   ├── gen-avro.sh             # Script used to generate Java code for Avro
      ./sql/core/src/test/README.md:│   └── gen-thrift.sh           # Script used to generate Java code for Thrift
      ```
      
       These seem to have been generated by the `tree` command, which inserts non-breaking spaces. They do not appear to cause any rendering problem within code blocks, and I did not fix them, to avoid the overhead of manually replacing them whenever the output is regenerated via `tree` in the future.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17517 from HyukjinKwon/non-breaking-space.
      
      (cherry picked from commit 364b0db7)
      Signed-off-by: default avatarSean Owen <sowen@cloudera.com>
      77700ea3
  7. Apr 02, 2017
  8. Mar 31, 2017
    • Ryan Blue's avatar
      [SPARK-20084][CORE] Remove internal.metrics.updatedBlockStatuses from history files. · e3cec18e
      Ryan Blue authored
      
      ## What changes were proposed in this pull request?
      
      Remove accumulator updates for internal.metrics.updatedBlockStatuses from SparkListenerTaskEnd entries in the history file. These can cause history files to grow to hundreds of GB because the value of the accumulator contains all tracked blocks.
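       An illustrative sketch of the idea (the types here are simplified stand-ins for Spark's listener-event classes):
       ```scala
       // Drop the internal.metrics.updatedBlockStatuses accumulable from a task-end event's
       // accumulator updates before the event is written to the history (event log) file.
       val droppedName = "internal.metrics.updatedBlockStatuses"

       case class AccumulableUpdate(name: String, value: Any)

       def scrub(updates: Seq[AccumulableUpdate]): Seq[AccumulableUpdate] =
         updates.filterNot(_.name == droppedName)
       ```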
      
      ## How was this patch tested?
      
      Current History UI tests cover use of the history file.
      
      Author: Ryan Blue <blue@apache.org>
      
      Closes #17412 from rdblue/SPARK-20084-remove-block-accumulator-info.
      
      (cherry picked from commit c4c03eed)
      Signed-off-by: default avatarMarcelo Vanzin <vanzin@cloudera.com>
      e3cec18e
    • Kunal Khamar's avatar
      [SPARK-20164][SQL] AnalysisException not tolerant of null query plan. · 6a1b2eb4
      Kunal Khamar authored
      
      The query plan in an `AnalysisException` may be `null` when an `AnalysisException` object is serialized and then deserialized, since `plan` is marked `transient`. Or when someone throws an `AnalysisException` with a null query plan (which should not happen).
      `def getMessage` is not tolerant of this and throws a `NullPointerException`, leading to loss of information about the original exception.
      The fix is to add a `null` check in `getMessage`.
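       A minimal sketch of the fix idea (a simplified stand-in class, not Spark's actual `AnalysisException`):
       ```scala
       // The transient plan field can come back as null after deserialization, so
       // getMessage must tolerate both a null reference and an empty Option.
       class ExampleAnalysisException(
           val message: String,
           @transient val plan: Option[String] = None) extends Exception(message) {

         override def getMessage: String = {
           val planAnnotation = if (plan != null && plan.isDefined) s";\n${plan.get}" else ""
           message + planAnnotation
         }
       }
       ```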
      
      - Unit test
      
      Author: Kunal Khamar <kkhamar@outlook.com>
      
      Closes #17486 from kunalkhamar/spark-20164.
      
      (cherry picked from commit 254877c2)
      Signed-off-by: default avatarXiao Li <gatorsmile@gmail.com>
      6a1b2eb4
  9. Mar 29, 2017
    • jerryshao's avatar
      [SPARK-20059][YARN] Use the correct classloader for HBaseCredentialProvider · 103ff54d
      jerryshao authored
      
      ## What changes were proposed in this pull request?
      
       Currently we use the system classloader to find HBase jars; if they are specified via `--jars`, lookup fails with a ClassNotFound issue. So this changes the code to use the child classloader.

       This also puts the added jars and the main jar onto the classpath of the submitted application in yarn cluster mode; otherwise HBase jars specified with `--jars` are never honored in cluster mode, and fetching tokens on the client side always fails.
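       An illustrative sketch of the classloader change (the HBase class name shown is just an example of something to load):
       ```scala
       // Use the current thread's context classloader (which sees jars added via --jars)
       // rather than the system classloader when looking up HBase classes.
       def loadHBaseClass(className: String): Class[_] = {
         val loader = Option(Thread.currentThread().getContextClassLoader)
           .getOrElse(getClass.getClassLoader)
         Class.forName(className, true, loader)
       }

       // e.g. loadHBaseClass("org.apache.hadoop.hbase.HBaseConfiguration")
       ```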
      
      ## How was this patch tested?
      
      Unit test and local verification.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #17388 from jerryshao/SPARK-20059.
      
      (cherry picked from commit c622a87c)
      Signed-off-by: default avatarMarcelo Vanzin <vanzin@cloudera.com>
      103ff54d
    • Reynold Xin's avatar
      [SPARK-20134][SQL] SQLMetrics.postDriverMetricUpdates to simplify driver side metric updates · f8c1b3e2
      Reynold Xin authored
      
      ## What changes were proposed in this pull request?
      It is not super intuitive how to update SQLMetric on the driver side. This patch introduces a new SQLMetrics.postDriverMetricUpdates function to do that, and adds documentation to make it more obvious.
      
      ## How was this patch tested?
      Updated a test case to use this method.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17464 from rxin/SPARK-20134.
      
      (cherry picked from commit 9712bd39)
      Signed-off-by: default avatarReynold Xin <rxin@databricks.com>
      f8c1b3e2
  10. Mar 28, 2017
    • 颜发才(Yan Facai)'s avatar
      [SPARK-20043][ML] DecisionTreeModel: ImpurityCalculator builder fails for... · 30954806
      颜发才(Yan Facai) authored
      [SPARK-20043][ML] DecisionTreeModel: ImpurityCalculator builder fails for uppercase impurity type Gini
      
       Fix bug: DecisionTreeModel can't recognize the impurity "Gini" when loading
      
      TODO:
      + [x] add unit test
      + [x] fix the bug
      
      Author: 颜发才(Yan Facai) <facai.yan@gmail.com>
      
      Closes #17407 from facaiy/BUG/decision_tree_loader_failer_with_Gini_impurity.
      
      (cherry picked from commit 7d432af8)
      Signed-off-by: default avatarJoseph K. Bradley <joseph@databricks.com>
      30954806
    • Patrick Wendell's avatar
      4964dbed
    • Patrick Wendell's avatar
      Preparing Spark release v2.1.1-rc2 · 02b165dc
      Patrick Wendell authored
      02b165dc
    • sureshthalamati's avatar
      [SPARK-14536][SQL][BACKPORT-2.1] fix to handle null value in array type column for postgres. · e669dd7e
      sureshthalamati authored
      ## What changes were proposed in this pull request?
       JDBC reads fail with an NPE due to a missing null-value check for the array data type when the source table has null values in an array-type column; for null values ResultSet.getArray() returns null.
       This PR adds a null-safe check on the ResultSet.getArray() value before invoking methods on the Array object.
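       An illustrative null-safe pattern for the read path (not the exact Spark code):
       ```scala
       import java.sql.ResultSet

       // ResultSet.getArray can return null for a NULL array column, so check before
       // calling methods on the returned java.sql.Array.
       def readArrayColumn(rs: ResultSet, pos: Int): Array[AnyRef] = {
         val sqlArray = rs.getArray(pos)
         if (sqlArray != null) sqlArray.getArray.asInstanceOf[Array[AnyRef]] else null
       }
       ```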
      
      ## How was this patch tested?
      Updated the PostgresIntegration test suite to test null values. Ran docker integration tests on my laptop.
      
      Author: sureshthalamati <suresh.thalamati@gmail.com>
      
      Closes #17460 from sureshthalamati/jdbc_array_null_fix_spark_2.1-SPARK-14536.
      e669dd7e
    • Wenchen Fan's avatar
      [SPARK-20125][SQL] Dataset of type option of map does not work · fd2e4061
      Wenchen Fan authored
      
       When we build the deserializer expression for map type, we use `StaticInvoke` to call `ArrayBasedMapData.toScalaMap` and declare the return type as `scala.collection.immutable.Map`. If the map is inside an Option, we wrap this `StaticInvoke` with `WrapOption`, which requires the input to be `scala.collection.Map`. Ideally this should be fine, as `scala.collection.immutable.Map` extends `scala.collection.Map`, but our `ObjectType` is too strict about this; this PR fixes it.
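       An illustrative shape of the data that hits this code path (assuming a spark-shell style SparkSession named `spark`):
       ```scala
       import spark.implicits._

       // A Dataset whose element wraps a Map in an Option exercises the
       // WrapOption-around-StaticInvoke deserializer path described above.
       case class Record(m: Option[Map[String, Int]])
       val ds = Seq(Record(Some(Map("a" -> 1))), Record(None)).toDS()
       ds.collect()
       ```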
      
      new regression test
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17454 from cloud-fan/map.
      
      (cherry picked from commit d4fac410)
      Signed-off-by: default avatarCheng Lian <lian@databricks.com>
      fd2e4061
    • jerryshao's avatar
      [SPARK-19995][YARN] Register tokens to current UGI to avoid re-issuing of... · 4bcb7d67
      jerryshao authored
      [SPARK-19995][YARN] Register tokens to current UGI to avoid re-issuing of tokens in yarn client mode
      
      ## What changes were proposed in this pull request?
      
       In the current Spark on YARN code, we obtain tokens from the provided services, but we do not add these tokens to the current user's credentials. This makes all subsequent operations against these services still require a TGT rather than delegation tokens. That is unnecessary, since we already have the tokens, and it also leads to failures in user-impersonation scenarios, because the TGT is granted to the real user, not the proxy user.

       So this changes the code to put all the tokens into the current UGI, so that subsequent operations against these services honor the tokens rather than the TGT; this also addresses the proxy-user issue mentioned above.
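       A hedged sketch of the core idea, using Hadoop's UserGroupInformation API:
       ```scala
       import org.apache.hadoop.security.{Credentials, UserGroupInformation}

       // After obtaining delegation tokens into `creds`, register them with the current
       // user so subsequent calls to those services use the tokens instead of the TGT.
       def registerTokens(creds: Credentials): Unit = {
         UserGroupInformation.getCurrentUser.addCredentials(creds)
       }
       ```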
      
      ## How was this patch tested?
      
      Local verified in secure cluster.
      
      vanzin tgravescs mridulm  dongjoon-hyun please help to review, thanks a lot.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #17335 from jerryshao/SPARK-19995.
      
      (cherry picked from commit 17eddb35)
      Signed-off-by: default avatarMarcelo Vanzin <vanzin@cloudera.com>
      4bcb7d67
  11. Mar 27, 2017
    • Josh Rosen's avatar
      [SPARK-20102] Fix nightly packaging and RC packaging scripts w/ two minor build fixes · 4056191d
      Josh Rosen authored
      
      ## What changes were proposed in this pull request?
      
      The master snapshot publisher builds are currently broken due to two minor build issues:
      
      1. For unknown reasons, the LFTP `mkdir -p` command began throwing errors when the remote directory already exists. This change of behavior might have been caused by configuration changes in the ASF's SFTP server, but I'm not entirely sure of that. To work around this problem, this patch updates the script to ignore errors from the `lftp mkdir -p` commands.
      2. The PySpark `setup.py` file references a non-existent `pyspark.ml.stat` module, causing Python packaging to fail by complaining about a missing directory. The fix is to simply drop that line from the setup script.
      
      ## How was this patch tested?
      
      The LFTP fix was tested by manually running the failing commands on AMPLab Jenkins against the ASF SFTP server. The PySpark fix was tested locally.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #17437 from JoshRosen/spark-20102.
      
      (cherry picked from commit 314cf51d)
      Signed-off-by: default avatarJosh Rosen <joshrosen@databricks.com>
      4056191d
  12. Mar 26, 2017
    • Herman van Hovell's avatar
      [SPARK-20086][SQL] CollapseWindow should not collapse dependent adjacent windows · b6d348ee
      Herman van Hovell authored
      
      ## What changes were proposed in this pull request?
       The `CollapseWindow` rule is currently too aggressive when collapsing adjacent windows. It also collapses windows in which the parent produces a column that is consumed by the child; this creates an invalid window which will fail at runtime.
      
      This PR fixes this by adding a check for dependent adjacent windows to the `CollapseWindow` rule.
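       An illustrative DataFrame shape of the problematic pattern:
       ```scala
       import org.apache.spark.sql.expressions.Window
       import org.apache.spark.sql.functions.{rank, sum}
       import spark.implicits._  // assumes a spark-shell style SparkSession named `spark`

       val df = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("key", "value")
       val w = Window.partitionBy($"key").orderBy($"value")

       // The second window expression consumes a column produced by the first, so the
       // two adjacent Window nodes must not be collapsed into one.
       val result = df
         .withColumn("r", rank().over(w))
         .withColumn("s", sum($"r").over(w))
       ```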
      
      ## How was this patch tested?
      Added a new test case to `CollapseWindowSuite`
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #17432 from hvanhovell/SPARK-20086.
      
      (cherry picked from commit 617ab644)
      Signed-off-by: default avatarHerman van Hovell <hvanhovell@databricks.com>
      b6d348ee
  13. Mar 25, 2017
    • Carson Wang's avatar
      [SPARK-19674][SQL] Ignore driver accumulator updates don't belong to … · d989434e
      Carson Wang authored
      [SPARK-19674][SQL] Ignore driver accumulator updates don't belong to the execution when merging all accumulator updates
      
      N.B. This is a backport to branch-2.1 of #17009.
      
      ## What changes were proposed in this pull request?
       In SQLListener.getExecutionMetrics, driver accumulator updates that don't belong to the execution should be ignored when merging all accumulator updates, to prevent a NoSuchElementException.
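       An illustrative sketch of the guard (simplified types, not the actual SQLListener code):
       ```scala
       // Only merge accumulator updates whose ids are registered for this execution;
       // updates that don't belong to the execution are ignored instead of failing.
       def mergeAccumulatorUpdates(
           knownIds: Set[Long],
           updates: Seq[(Long, Long)]): Map[Long, Long] = {
         updates
           .filter { case (id, _) => knownIds.contains(id) }
           .groupBy { case (id, _) => id }
           .map { case (id, vs) => id -> vs.map(_._2).sum }
       }
       ```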
      
      ## How was this patch tested?
      Updated unit test.
      
       Author: Carson Wang <carson.wang@intel.com>
      
      Author: Carson Wang <carson.wang@intel.com>
      
      Closes #17418 from mallman/spark-19674-backport_2.1.
      d989434e
  14. Mar 23, 2017
    • Kazuaki Ishizaki's avatar
      [SPARK-19959][SQL] Fix to throw NullPointerException in df[java.lang.Long].collect · 92f0b012
      Kazuaki Ishizaki authored
      
      ## What changes were proposed in this pull request?
      
       This PR fixes a `NullPointerException` in the code generated by Catalyst. When we run the following code, we get the `NullPointerException` below. This is because there is no null check for `inputadapter_value`, while `java.lang.Long inputadapter_value` at line 30 may be `null`.

       This happens when the element type of a DataFrame is a nullable boxed primitive type such as `java.lang.Long` and whole-stage codegen is used. While the physical plan keeps `nullable=true` in `input[0, java.lang.Long, true].longValue`, `BoundReference.doGenCode` ignores `nullable=true`. Thus, null-check code is not generated and a `NullPointerException` occurs.

       This PR checks the nullability and correctly generates the null check if needed.
      ```java
      sparkContext.parallelize(Seq[java.lang.Long](0L, null, 2L), 1).toDF.collect
      ```
      
      ```java
      Caused by: java.lang.NullPointerException
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:37)
      	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
      	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:393)
      ...
      ```
      
      Generated code without this PR
      ```java
      /* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
      /* 006 */   private Object[] references;
      /* 007 */   private scala.collection.Iterator[] inputs;
      /* 008 */   private scala.collection.Iterator inputadapter_input;
      /* 009 */   private UnsafeRow serializefromobject_result;
      /* 010 */   private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder serializefromobject_holder;
      /* 011 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter serializefromobject_rowWriter;
      /* 012 */
      /* 013 */   public GeneratedIterator(Object[] references) {
      /* 014 */     this.references = references;
      /* 015 */   }
      /* 016 */
      /* 017 */   public void init(int index, scala.collection.Iterator[] inputs) {
      /* 018 */     partitionIndex = index;
      /* 019 */     this.inputs = inputs;
      /* 020 */     inputadapter_input = inputs[0];
      /* 021 */     serializefromobject_result = new UnsafeRow(1);
      /* 022 */     this.serializefromobject_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(serializefromobject_result, 0);
      /* 023 */     this.serializefromobject_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(serializefromobject_holder, 1);
      /* 024 */
      /* 025 */   }
      /* 026 */
      /* 027 */   protected void processNext() throws java.io.IOException {
      /* 028 */     while (inputadapter_input.hasNext() && !stopEarly()) {
      /* 029 */       InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
      /* 030 */       java.lang.Long inputadapter_value = (java.lang.Long)inputadapter_row.get(0, null);
      /* 031 */
      /* 032 */       boolean serializefromobject_isNull = true;
      /* 033 */       long serializefromobject_value = -1L;
      /* 034 */       if (!false) {
      /* 035 */         serializefromobject_isNull = false;
      /* 036 */         if (!serializefromobject_isNull) {
      /* 037 */           serializefromobject_value = inputadapter_value.longValue();
      /* 038 */         }
      /* 039 */
      /* 040 */       }
      /* 041 */       serializefromobject_rowWriter.zeroOutNullBytes();
      /* 042 */
      /* 043 */       if (serializefromobject_isNull) {
      /* 044 */         serializefromobject_rowWriter.setNullAt(0);
      /* 045 */       } else {
      /* 046 */         serializefromobject_rowWriter.write(0, serializefromobject_value);
      /* 047 */       }
      /* 048 */       append(serializefromobject_result);
      /* 049 */       if (shouldStop()) return;
      /* 050 */     }
      /* 051 */   }
      /* 052 */ }
      ```
      
      Generated code with this PR
      
      ```java
      /* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
      /* 006 */   private Object[] references;
      /* 007 */   private scala.collection.Iterator[] inputs;
      /* 008 */   private scala.collection.Iterator inputadapter_input;
      /* 009 */   private UnsafeRow serializefromobject_result;
      /* 010 */   private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder serializefromobject_holder;
      /* 011 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter serializefromobject_rowWriter;
      /* 012 */
      /* 013 */   public GeneratedIterator(Object[] references) {
      /* 014 */     this.references = references;
      /* 015 */   }
      /* 016 */
      /* 017 */   public void init(int index, scala.collection.Iterator[] inputs) {
      /* 018 */     partitionIndex = index;
      /* 019 */     this.inputs = inputs;
      /* 020 */     inputadapter_input = inputs[0];
      /* 021 */     serializefromobject_result = new UnsafeRow(1);
      /* 022 */     this.serializefromobject_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(serializefromobject_result, 0);
      /* 023 */     this.serializefromobject_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(serializefromobject_holder, 1);
      /* 024 */
      /* 025 */   }
      /* 026 */
      /* 027 */   protected void processNext() throws java.io.IOException {
      /* 028 */     while (inputadapter_input.hasNext() && !stopEarly()) {
      /* 029 */       InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
      /* 030 */       boolean inputadapter_isNull = inputadapter_row.isNullAt(0);
      /* 031 */       java.lang.Long inputadapter_value = inputadapter_isNull ? null : ((java.lang.Long)inputadapter_row.get(0, null));
      /* 032 */
      /* 033 */       boolean serializefromobject_isNull = true;
      /* 034 */       long serializefromobject_value = -1L;
      /* 035 */       if (!inputadapter_isNull) {
      /* 036 */         serializefromobject_isNull = false;
      /* 037 */         if (!serializefromobject_isNull) {
      /* 038 */           serializefromobject_value = inputadapter_value.longValue();
      /* 039 */         }
      /* 040 */
      /* 041 */       }
      /* 042 */       serializefromobject_rowWriter.zeroOutNullBytes();
      /* 043 */
      /* 044 */       if (serializefromobject_isNull) {
      /* 045 */         serializefromobject_rowWriter.setNullAt(0);
      /* 046 */       } else {
      /* 047 */         serializefromobject_rowWriter.write(0, serializefromobject_value);
      /* 048 */       }
      /* 049 */       append(serializefromobject_result);
      /* 050 */       if (shouldStop()) return;
      /* 051 */     }
      /* 052 */   }
      /* 053 */ }
      ```
      
      ## How was this patch tested?
      
      Added new test suites in `DataFrameSuites`
      
      Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
      
      Closes #17302 from kiszk/SPARK-19959.
      
      (cherry picked from commit bb823ca4)
      Signed-off-by: default avatarWenchen Fan <wenchen@databricks.com>
      92f0b012
    • Dongjoon Hyun's avatar
      [SPARK-19970][SQL][BRANCH-2.1] Table owner should be USER instead of PRINCIPAL... · af960e86
      Dongjoon Hyun authored
      [SPARK-19970][SQL][BRANCH-2.1] Table owner should be USER instead of PRINCIPAL in kerberized clusters
      
      ## What changes were proposed in this pull request?
      
       In a kerberized Hadoop cluster, when Spark creates tables, the owner of the tables is filled with the PRINCIPAL string instead of the USER name. This is inconsistent with Hive and causes problems when using [ROLE](https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization) in Hive. We should fix this.
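       A minimal sketch of the underlying idea, using Hadoop's UserGroupInformation:
       ```scala
       import org.apache.hadoop.security.UserGroupInformation

       // getUserName returns the full Kerberos principal (e.g. "spark@EXAMPLE.COM") when
       // logged in via Kerberos, while getShortUserName returns just the user part
       // (e.g. "spark"), which is what the table owner should be.
       val ugi = UserGroupInformation.getCurrentUser
       val owner = ugi.getShortUserName
       ```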
      
      **BEFORE**
      ```scala
      scala> sql("create table t(a int)").show
      scala> sql("desc formatted t").show(false)
      ...
      |Owner:                      |sparkEXAMPLE.COM                                         |       |
      ```
      
      **AFTER**
      ```scala
      scala> sql("create table t(a int)").show
      scala> sql("desc formatted t").show(false)
      ...
      |Owner:                      |spark                                         |       |
      ```
      
      ## How was this patch tested?
      
      Manually do `create table` and `desc formatted` because this happens in Kerberized clusters.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #17363 from dongjoon-hyun/SPARK-19970-2.
      af960e86
  15. Mar 22, 2017
  16. Mar 21, 2017
    • Patrick Wendell's avatar
      c4d2b833
    • Patrick Wendell's avatar
      Preparing Spark release v2.1.1-rc1 · 30abb95c
      Patrick Wendell authored
      30abb95c
    • Takeshi Yamamuro's avatar
      [SPARK-19980][SQL][BACKPORT-2.1] Add NULL checks in Bean serializer · a04428fe
      Takeshi Yamamuro authored
      ## What changes were proposed in this pull request?
       A Bean serializer in `ExpressionEncoder` could change values when Beans have NULL fields. A concrete example is as follows;
      ```
      scala> :paste
      class Outer extends Serializable {
        private var cls: Inner = _
        def setCls(c: Inner): Unit = cls = c
        def getCls(): Inner = cls
      }
      
      class Inner extends Serializable {
        private var str: String = _
         def setStr(s: String): Unit = str = s
        def getStr(): String = str
      }
      
      scala> Seq("""{"cls":null}""", """{"cls": {"str":null}}""").toDF().write.text("data")
      scala> val encoder = Encoders.bean(classOf[Outer])
      scala> val schema = encoder.schema
      scala> val df = spark.read.schema(schema).json("data").as[Outer](encoder)
      scala> df.show
      +------+
      |   cls|
      +------+
      |[null]|
      |  null|
      +------+
      
      scala> df.map(x => x)(encoder).show()
      +------+
      |   cls|
      +------+
      |[null]|
      |[null]|     // <-- Value changed
      +------+
      ```
      
      This is because the Bean serializer does not have the NULL-check expressions that the serializer of Scala's product types has. Actually, this value change does not happen in Scala's product types;
      
      ```
      scala> :paste
      case class Outer(cls: Inner)
      case class Inner(str: String)
      
      scala> val encoder = Encoders.product[Outer]
      scala> val schema = encoder.schema
      scala> val df = spark.read.schema(schema).json("data").as[Outer](encoder)
      scala> df.show
      +------+
      |   cls|
      +------+
      |[null]|
      |  null|
      +------+
      
      scala> df.map(x => x)(encoder).show()
      +------+
      |   cls|
      +------+
      |[null]|
      |  null|
      +------+
      ```
      
       This PR adds the NULL-check expressions to the Bean serializer, matching the serializer for Scala's product types.
      
      ## How was this patch tested?
      Added tests in `JavaDatasetSuite`.
      
      Author: Takeshi Yamamuro <yamamuro@apache.org>
      
      Closes #17372 from maropu/SPARK-19980-BACKPORT2.1.
      a04428fe
    • Will Manning's avatar
      clarify array_contains function description · 9dfdd2ad
      Will Manning authored
      ## What changes were proposed in this pull request?
      
      The description in the comment for array_contains is vague/incomplete (i.e., doesn't mention that it returns `null` if the array is `null`); this PR fixes that.
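       A quick illustration of the behavior the comment now documents (assuming a spark-shell style SparkSession named `spark`):
       ```scala
       import org.apache.spark.sql.functions.array_contains
       import spark.implicits._

       val df = Seq(Some(Seq(1, 2)), None).toDF("arr")
       // Returns true/false for non-null arrays and null when the array itself is null.
       df.select(array_contains($"arr", 1)).show()
       ```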
      
      ## How was this patch tested?
      
      No testing, since it merely changes a comment.
      
      
      Author: Will Manning <lwwmanning@gmail.com>
      
      Closes #17380 from lwwmanning/patch-1.
      
      (cherry picked from commit a04dcde8)
      Signed-off-by: default avatarReynold Xin <rxin@databricks.com>
      9dfdd2ad
    • Felix Cheung's avatar
      [SPARK-19237][SPARKR][CORE] On Windows spark-submit should handle when java is not installed · 5c18b6c3
      Felix Cheung authored
      
      ## What changes were proposed in this pull request?
      
       When SparkR is installed as an R package, there might not be any Java runtime.
       If it is not there, SparkR's `sparkR.session()` will block waiting for the connection timeout, hanging the R IDE/shell, without any notification or message.
      
      ## How was this patch tested?
      
      manually
      
      - [x] need to test on Windows
      
      Author: Felix Cheung <felixcheung_m@hotmail.com>
      
      Closes #16596 from felixcheung/rcheckjava.
      
      (cherry picked from commit a8877bdb)
      Signed-off-by: default avatarShivaram Venkataraman <shivaram@cs.berkeley.edu>
      5c18b6c3
    • zhaorongsheng's avatar
      [SPARK-20017][SQL] change the nullability of function 'StringToMap' from 'false' to 'true' · a88c88aa
      zhaorongsheng authored
      
      ## What changes were proposed in this pull request?
      
      Change the nullability of function `StringToMap` from `false` to `true`.
      
      Author: zhaorongsheng <334362872@qq.com>
      
      Closes #17350 from zhaorongsheng/bug-fix_strToMap_NPE.
      
      (cherry picked from commit 7dbc162f)
      Signed-off-by: default avatarXiao Li <gatorsmile@gmail.com>
      a88c88aa
  17. Mar 20, 2017
    • Dongjoon Hyun's avatar
      [SPARK-19912][SQL] String literals should be escaped for Hive metastore partition pruning · c4c7b185
      Dongjoon Hyun authored
      
      ## What changes were proposed in this pull request?
      
       Since the current `HiveShim`'s `convertFilters` does not escape string literals, the following correctness issues exist. This PR aims to return the correct result and also to show a clearer exception message.
      
      **BEFORE**
      
      ```scala
      scala> Seq((1, "p1", "q1"), (2, "p1\" and q=\"q1", "q2")).toDF("a", "p", "q").write.partitionBy("p", "q").saveAsTable("t1")
      
      scala> spark.table("t1").filter($"p" === "p1\" and q=\"q1").select($"a").show
      +---+
      |  a|
      +---+
      +---+
      
      scala> spark.table("t1").filter($"p" === "'\"").select($"a").show
      java.lang.RuntimeException: Caught Hive MetaException attempting to get partition metadata by filter from ...
      ```
      
      **AFTER**
      
      ```scala
      scala> spark.table("t1").filter($"p" === "p1\" and q=\"q1").select($"a").show
      +---+
      |  a|
      +---+
      |  2|
      +---+
      
      scala> spark.table("t1").filter($"p" === "'\"").select($"a").show
      java.lang.UnsupportedOperationException: Partition filter cannot have both `"` and `'` characters
      ```
      
      ## How was this patch tested?
      
      Pass the Jenkins test with new test cases.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #17266 from dongjoon-hyun/SPARK-19912.
      
      (cherry picked from commit 21e366ae)
      Signed-off-by: default avatarWenchen Fan <wenchen@databricks.com>
      c4c7b185