  1. Jul 04, 2017
    • Dongjoon Hyun's avatar
      [SPARK-20256][SQL] SessionState should be created more lazily · 1b50e0e0
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      `SessionState` is designed to be created lazily. However, in reality, it is created immediately in `SparkSession.Builder.getOrCreate` ([here](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala#L943)).
      
      This PR aims to recover the lazy behavior by keeping the options in `initialSessionOptions`. The benefit is that users can now start `spark-shell` and use RDD operations without any problems.
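
      A hedged sketch of the general pattern (the `Session`/`buildSessionState` names below are illustrative stand-ins, not Spark's actual internals):

      ```scala
      // Minimal sketch of deferring expensive construction until first use.
      class Session(val initialSessionOptions: Map[String, String]) {
        // Not built in the constructor; created lazily on first access.
        lazy val sessionState: SessionState = buildSessionState(initialSessionOptions)

        private def buildSessionState(opts: Map[String, String]): SessionState = {
          val state = new SessionState
          opts.foreach { case (k, v) => state.conf(k) = v }
          state
        }
      }

      class SessionState {
        val conf = scala.collection.mutable.Map.empty[String, String]
      }
      ```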
      
      **BEFORE**
      ```scala
      $ bin/spark-shell
      java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder'
      ...
      Caused by: org.apache.spark.sql.AnalysisException:
          org.apache.hadoop.hive.ql.metadata.HiveException:
             MetaException(message:java.security.AccessControlException:
                Permission denied: user=spark, access=READ,
                   inode="/apps/hive/warehouse":hive:hdfs:drwx------
      ```
      As reported in SPARK-20256, this happens when the user is not allowed to access the warehouse directory.
      
      **AFTER**
      ```scala
      $ bin/spark-shell
      ...
      Welcome to
            ____              __
           / __/__  ___ _____/ /__
          _\ \/ _ \/ _ `/ __/  '_/
         /___/ .__/\_,_/_/ /_/\_\   version 2.3.0-SNAPSHOT
            /_/
      
      Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112)
      Type in expressions to have them evaluated.
      Type :help for more information.
      
      scala> sc.range(0, 10, 1).count()
      res0: Long = 10
      ```
      
      ## How was this patch tested?
      
      Manual.
      
      This closes #18512 .
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #18501 from dongjoon-hyun/SPARK-20256.
      1b50e0e0
    • YIHAODIAN\wangshuangshuang's avatar
      [SPARK-19726][SQL] Failed to insert null timestamp value to mysql using spark jdbc · a3c29fcb
      YIHAODIAN\wangshuangshuang authored
      ## What changes were proposed in this pull request?
      
      when creating a table like the following:
      > create table timestamp_test(id int(11), time_stamp timestamp not null default current_timestamp);
      
      The result of executing "insert into timestamp_test values (111, null)" differs between Spark (via JDBC) and a direct MySQL insert.
      ```
      mysql> select * from timestamp_test;
      +------+---------------------+
      | id   | time_stamp          |
      +------+---------------------+
      |  111 | 1970-01-01 00:00:00 | -> spark
      |  111 | 2017-06-27 19:32:38 | -> mysql
      +------+---------------------+
      2 rows in set (0.00 sec)
      ```
      Because in such a case `StructField.nullable` is false, the generated code of `InvokeLike` and `BoundReference` does not check whether the field is null. Instead, it directly uses `CodegenContext.INPUT_ROW.getLong(1)`; however, `UnsafeRow.setNullAt(1)` puts 0 in the underlying memory.
      
      This PR **always** sets `StructField.nullable` to true after obtaining metadata from the JDBC connection, since we can insert null into a NOT NULL timestamp column in MySQL. In this way, Spark propagates null to the underlying DB engine and lets the DB choose how to process the NULL.
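
      A hedged sketch of the idea (the helper below is illustrative, not the exact code in Spark's JDBC data source):

      ```scala
      import java.sql.ResultSetMetaData
      import org.apache.spark.sql.types._

      // Illustrative: build a Catalyst schema from JDBC metadata, forcing
      // nullable = true so that NULLs are propagated to the underlying database.
      def schemaFromMetadata(meta: ResultSetMetaData): StructType = {
        val fields = (1 to meta.getColumnCount).map { i =>
          val dataType = meta.getColumnType(i) match {
            case java.sql.Types.TIMESTAMP => TimestampType
            case java.sql.Types.INTEGER   => IntegerType
            case _                        => StringType   // simplified fallback
          }
          // Always nullable, regardless of what the driver reports.
          StructField(meta.getColumnName(i), dataType, nullable = true)
        }
        StructType(fields)
      }
      ```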
      
      ## How was this patch tested?
      
      Added tests.
      
      Author: YIHAODIAN\wangshuangshuang <wangshuangshuang@yihaodian.com>
      Author: Shuangshuang Wang <wsszone@gmail.com>
      
      Closes #18445 from shuangshuangwang/SPARK-19726.
      a3c29fcb
    • gatorsmile's avatar
      [SPARK-21256][SQL] Add withSQLConf to Catalyst Test · 29b1f6b0
      gatorsmile authored
      ### What changes were proposed in this pull request?
      SQLConf is moved to Catalyst. We are adding more and more test cases for verifying the conf-specific behaviors. It is nice to add a helper function to simplify the test cases.
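
      A hedged sketch of what such a helper typically looks like (the actual helper added to the Catalyst test utilities may differ):

      ```scala
      import org.apache.spark.sql.internal.SQLConf

      // Illustrative: set the given SQL confs, run the body, then restore the
      // previous values even if the body throws.
      def withSQLConf(conf: SQLConf)(pairs: (String, String)*)(body: => Unit): Unit = {
        val originals = pairs.map { case (k, _) =>
          k -> (if (conf.contains(k)) Some(conf.getConfString(k)) else None)
        }
        pairs.foreach { case (k, v) => conf.setConfString(k, v) }
        try body finally {
          originals.foreach {
            case (k, Some(v)) => conf.setConfString(k, v)
            case (k, None)    => conf.unsetConf(k)
          }
        }
      }
      ```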
      
      ### How was this patch tested?
      N/A
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #18469 from gatorsmile/withSQLConf.
      29b1f6b0
    • hyukjinkwon's avatar
      [SPARK-19507][SPARK-21296][PYTHON] Avoid per-record type dispatch in schema... · d492cc5a
      hyukjinkwon authored
      [SPARK-19507][SPARK-21296][PYTHON] Avoid per-record type dispatch in schema verification and improve exception message
      
      ## What changes were proposed in this pull request?
      **Context**
      
      While reviewing https://github.com/apache/spark/pull/17227, I realised that we type-dispatch per record there. The PR itself is fine in terms of performance as is, but it prints a prefix, `"obj"`, in the exception message as below:
      
      ```
      from pyspark.sql.types import *
      schema = StructType([StructField('s', IntegerType(), nullable=False)])
      spark.createDataFrame([["1"]], schema)
      ...
      TypeError: obj.s: IntegerType can not accept object '1' in type <type 'str'>
      ```
      
      I suggested getting rid of this, but while investigating it I realised my approach might introduce a performance regression, as this is a hot path.
      
      Fixing only SPARK-19507 on top of https://github.com/apache/spark/pull/17227 needs more changes to cleanly get rid of the prefix, so I decided to fix both issues together.
      
      **Proposal**
      
      This PR tried to
      
        - get rid of per-record type dispatch as we do in many code paths in Scala  so that it improves the performance (roughly ~25% improvement) - SPARK-21296
      
          This was tested with a simple code `spark.createDataFrame(range(1000000), "int")`. However, I am quite sure the actual improvement in practice is larger than this, in particular, when the schema is complicated.
      
         - improve error message in exception describing field information as prose - SPARK-19507
      
      ## How was this patch tested?
      
      Manually tested and unit tests were added in `python/pyspark/sql/tests.py`.
      
      Benchmark - codes: https://gist.github.com/HyukjinKwon/c3397469c56cb26c2d7dd521ed0bc5a3
      Error message - codes: https://gist.github.com/HyukjinKwon/b1b2c7f65865444c4a8836435100e398
      
      **Before**
      
      Benchmark:
        - Results: https://gist.github.com/HyukjinKwon/4a291dab45542106301a0c1abcdca924
      
      Error message
        - Results: https://gist.github.com/HyukjinKwon/57b1916395794ce924faa32b14a3fe19
      
      **After**
      
      Benchmark
        - Results: https://gist.github.com/HyukjinKwon/21496feecc4a920e50c4e455f836266e
      
      Error message
        - Results: https://gist.github.com/HyukjinKwon/7a494e4557fe32a652ce1236e504a395
      
      Closes #17227
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      Author: David Gingrich <david@textio.com>
      
      Closes #18521 from HyukjinKwon/python-type-dispatch.
      d492cc5a
    • hyukjinkwon's avatar
      [MINOR][SPARK SUBMIT] Print out R file usage in spark-submit · 2b1e94b9
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      Currently, running the shell command below:
      
      ```bash
      $ ./bin/spark-submit tmp.R a b c
      ```
      
      with an R file, `tmp.R`, as below:
      
      ```r
      #!/usr/bin/env Rscript
      
      library(SparkR)
      sparkRSQL.init(sparkR.init(master = "local"))
      collect(createDataFrame(list(list(1))))
      print(commandArgs(trailingOnly = TRUE))
      ```
      
      works fine, producing:
      
      ```bash
        _1
      1  1
      [1] "a" "b" "c"
      ```
      
      However, R files are not mentioned in the usage documentation shown below:
      
      ```bash
      $ ./bin/spark-submit
      ```
      
      ```
      Usage: spark-submit [options] <app jar | python file> [app arguments]
      ...
      ```
      
      For `./bin/sparkR`, the message looks fine, as below:
      
      ```bash
      $ ./bin/sparkR tmp.R
      ```
      
      ```
      Running R applications through 'sparkR' is not supported as of Spark 2.0.
      Use ./bin/spark-submit <R file>
      ```
      
      Running the command below:
      
      ```bash
      $ ./bin/spark-submit
      ```
      
      **Before**
      
      ```
      Usage: spark-submit [options] <app jar | python file> [app arguments]
      ...
      ```
      
      **After**
      
      ```
      Usage: spark-submit [options] <app jar | python file | R file> [app arguments]
      ...
      ```
      
      ## How was this patch tested?
      
      Manually tested.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18505 from HyukjinKwon/minor-doc-summit.
      2b1e94b9
    • Thomas Decaux's avatar
      [MINOR] Add french stop word "les" · 8ca4ebef
      Thomas Decaux authored
      ## What changes were proposed in this pull request?
      
      Added "les" as french stop word (plurial of le)
      
      Author: Thomas Decaux <ebuildy@gmail.com>
      
      Closes #18514 from ebuildy/patch-1.
      8ca4ebef
  2. Jul 03, 2017
    • hyukjinkwon's avatar
      [SPARK-21264][PYTHON] Call cross join path in join without 'on' and with 'how' · a848d552
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      Currently, PySpark's `join` throws an NPE when the join columns are missing but a join type is specified, as below:
      
      ```python
      spark.conf.set("spark.sql.crossJoin.enabled", "false")
      spark.range(1).join(spark.range(1), how="inner").show()
      ```
      
      ```
      Traceback (most recent call last):
      ...
      py4j.protocol.Py4JJavaError: An error occurred while calling o66.join.
      : java.lang.NullPointerException
      	at org.apache.spark.sql.Dataset.join(Dataset.scala:931)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      ...
      ```
      
      ```python
      spark.conf.set("spark.sql.crossJoin.enabled", "true")
      spark.range(1).join(spark.range(1), how="inner").show()
      ```
      
      ```
      ...
      py4j.protocol.Py4JJavaError: An error occurred while calling o84.join.
      : java.lang.NullPointerException
      	at org.apache.spark.sql.Dataset.join(Dataset.scala:931)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      ...
      ```
      
      This PR suggests following the Scala behaviour, shown below:
      
      ```scala
      scala> spark.conf.set("spark.sql.crossJoin.enabled", "false")
      
      scala> spark.range(1).join(spark.range(1), Seq.empty[String], "inner").show()
      ```
      
      ```
      org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans
      Range (0, 1, step=1, splits=Some(8))
      and
      Range (0, 1, step=1, splits=Some(8))
      Join condition is missing or trivial.
      Use the CROSS JOIN syntax to allow cartesian products between these relations.;
      ...
      ```
      
      ```scala
      scala> spark.conf.set("spark.sql.crossJoin.enabled", "true")
      
      scala> spark.range(1).join(spark.range(1), Seq.empty[String], "inner").show()
      ```
      ```
      +---+---+
      | id| id|
      +---+---+
      |  0|  0|
      +---+---+
      ```
      
      **After**
      
      ```python
      spark.conf.set("spark.sql.crossJoin.enabled", "false")
      spark.range(1).join(spark.range(1), how="inner").show()
      ```
      
      ```
      Traceback (most recent call last):
      ...
      pyspark.sql.utils.AnalysisException: u'Detected cartesian product for INNER join between logical plans\nRange (0, 1, step=1, splits=Some(8))\nand\nRange (0, 1, step=1, splits=Some(8))\nJoin condition is missing or trivial.\nUse the CROSS JOIN syntax to allow cartesian products between these relations.;'
      ```
      
      ```python
      spark.conf.set("spark.sql.crossJoin.enabled", "true")
      spark.range(1).join(spark.range(1), how="inner").show()
      ```
      ```
      +---+---+
      | id| id|
      +---+---+
      |  0|  0|
      +---+---+
      ```
      
      ## How was this patch tested?
      
      Added tests in `python/pyspark/sql/tests.py`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #18484 from HyukjinKwon/SPARK-21264.
      a848d552
    • liuxian's avatar
      [SPARK-21283][CORE] FileOutputStream should be created as append mode · 6657e00d
      liuxian authored
      ## What changes were proposed in this pull request?
      
      `FileAppender` is used to write the `stderr` and `stdout` files in `ExecutorRunner`. Before `ErrorStream` is written into the `stderr` file, the header information has already been written into it; if the `FileOutputStream` is not created in append mode, that header information is lost.
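
      A minimal sketch of the difference (the file path and strings are illustrative):

      ```scala
      import java.io.{File, FileOutputStream}

      val file = new File("stderr")                       // illustrative path
      val header = new FileOutputStream(file)
      header.write("executor header\n".getBytes)          // header written first
      header.close()

      // new FileOutputStream(file) would truncate the file and lose the header;
      // append mode (second argument = true) preserves it.
      val out = new FileOutputStream(file, true)
      out.write("redirected stderr output\n".getBytes)
      out.close()
      ```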
      
      ## How was this patch tested?
      unit test case
      
      Author: liuxian <liu.xian3@zte.com.cn>
      
      Closes #18507 from 10110346/wip-lx-0703.
      6657e00d
    • gatorsmile's avatar
      [TEST] Different behaviors of SparkContext Conf when building SparkSession · c79c10eb
      gatorsmile authored
      ## What changes were proposed in this pull request?
      If the created ACTIVE sparkContext is not EXPLICITLY passed in through the Builder's API `sparkContext()`, its conf will also contain the conf set through the API `config()`; otherwise, its conf will NOT contain the conf set through `config()`.
      
      ## How was this patch tested?
      N/A
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #18517 from gatorsmile/fixTestCase2.
      c79c10eb
    • Wenchen Fan's avatar
      [SPARK-21284][SQL] rename SessionCatalog.registerFunction parameter name · f953ca56
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      Looking at the code in `SessionCatalog.registerFunction`, the parameter `ignoreIfExists` is a misleading name. When `ignoreIfExists` is true, we will override the function if it already exists, so `overrideIfExists` would be a more accurate name.
      
      ## How was this patch tested?
      
      N/A
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #18510 from cloud-fan/minor.
      f953ca56
    • Takeshi Yamamuro's avatar
      [SPARK-20073][SQL] Prints an explicit warning message in case of NULL-safe equals · 363bfe30
      Takeshi Yamamuro authored
      ## What changes were proposed in this pull request?
      This PR adds code to print the same warning message as the `===` case when NULL-safe equals (`<=>`) is used.
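
      A hedged illustration of the kind of self-comparison that triggers the warning (assuming an active `spark-shell` session bound to `spark`):

      ```scala
      // Both comparisons below are trivially true self-comparisons; with this
      // change the NULL-safe operator logs the same warning as ===.
      val df = spark.range(5).toDF("id")
      df.filter(df("id") === df("id")).count()   // warned before this change
      df.filter(df("id") <=> df("id")).count()   // now also warned
      ```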
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Takeshi Yamamuro <yamamuro@apache.org>
      
      Closes #18436 from maropu/SPARK-20073.
      363bfe30
    • aokolnychyi's avatar
      [SPARK-21102][SQL] Refresh command is too aggressive in parsing · 17bdc36e
      aokolnychyi authored
      ### Idea
      
      This PR adds validation to REFRESH SQL statements. Currently, users can specify whatever they want as the resource path. For example, `spark.sql("REFRESH ! $ !")` is executed without any exceptions.
      
      ### Implementation
      
      I am not sure that my current implementation is optimal, so any feedback is appreciated. My first idea was to make the grammar as strict as possible. Unfortunately, there were some problems. I tried the approach below:
      
      SqlBase.g4
      ```
      ...
          | REFRESH TABLE tableIdentifier                                    #refreshTable
          | REFRESH resourcePath                                             #refreshResource
      ...
      
      resourcePath
          : STRING
          | (IDENTIFIER | number | nonReserved | '/' | '-')+ // other symbols can be added if needed
          ;
      ```
      It is not flexible enough and requires explicitly listing all possible symbols. Therefore, I came up with the current approach that is implemented in the code.
      
      Let me know your opinion on which one is better.
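
      For reference, a hedged sketch of what string-based validation of the resource path might look like (not the exact implementation in the parser):

      ```scala
      // Illustrative only: reject obviously malformed resource paths such as "! $ !".
      def validateResourcePath(path: String): Unit = {
        if (path.trim.isEmpty || path.exists(_.isWhitespace)) {
          throw new IllegalArgumentException(
            s"REFRESH statements cannot contain whitespace in the resource path: '$path'")
        }
      }
      ```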
      
      Author: aokolnychyi <anton.okolnychyi@sap.com>
      
      Closes #18368 from aokolnychyi/spark-21102.
      17bdc36e
    • Zhenhua Wang's avatar
      [TEST] Load test table based on case sensitivity · eb7a5a66
      Zhenhua Wang authored
      ## What changes were proposed in this pull request?
      
      It is strange that, when developers write tests with `TestHiveSingleton`, we get a "table not found" error if **the first SQL statement** uses upper-case table names, **even though analysis is case-insensitive**. This is because in `TestHiveQueryExecution`, test tables are loaded based on exact name matching instead of respecting the case-sensitivity setting.
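
      A minimal sketch of the intended lookup behaviour (the registry and helper below are hypothetical, not the actual `TestHiveQueryExecution` code):

      ```scala
      // Hypothetical test-table registry keyed by lower-case names.
      val testTables = Map("src" -> "CREATE TABLE src ...", "srcpart" -> "CREATE TABLE srcpart ...")

      def loadTestTable(name: String, caseSensitive: Boolean): Option[String] = {
        if (caseSensitive) testTables.get(name)
        else testTables.collectFirst { case (k, ddl) if k.equalsIgnoreCase(name) => ddl }
      }

      loadTestTable("SRC", caseSensitive = false)   // Some(...), instead of "table not found"
      ```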
      
      ## How was this patch tested?
      
      Added a new test case.
      
      Author: Zhenhua Wang <wzh_zju@163.com>
      
      Closes #18504 from wzhfy/testHive.
      eb7a5a66
    • Sean Owen's avatar
      [SPARK-21137][CORE] Spark reads many small files slowly · a9339db9
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Parallelize `FileInputFormat.listStatus` in the Hadoop API via `LIST_STATUS_NUM_THREADS` to speed up examination of file sizes for `wholeTextFiles` et al.
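
      A hedged sketch of how the same Hadoop setting can be exercised directly (assuming an active `SparkContext` named `sc`; the path is illustrative):

      ```scala
      import org.apache.hadoop.mapreduce.lib.input.FileInputFormat

      // Raise the number of threads used by FileInputFormat.listStatus before
      // reading a directory that contains many small files.
      sc.hadoopConfiguration.setInt(FileInputFormat.LIST_STATUS_NUM_THREADS, 8)
      val rdd = sc.wholeTextFiles("/data/many-small-files")
      ```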
      
      ## How was this patch tested?
      
      Existing tests, which will exercise the key path here: using a local file system.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #18441 from srowen/SPARK-21137.
      a9339db9
    • guoxiaolong's avatar
      [SPARK-21250][WEB-UI] Add a url in the table of 'Running Executors' in worker... · d913db16
      guoxiaolong authored
      [SPARK-21250][WEB-UI] Add a url in the table of 'Running Executors' in worker page to visit job page.
      
      ## What changes were proposed in this pull request?
      
      Add a url in the table of 'Running Executors' in worker page to visit job page.
      
      When I click the 'Name' URL, the current page jumps to the job page. Of course, this only applies to the table of 'Running Executors'.
      
      In the table of 'Finished Executors', the 'Name' entry has no URL, so clicking it does not jump to any page.
      
      fix before:
      ![1](https://user-images.githubusercontent.com/26266482/27679397-30ddc262-5ceb-11e7-839b-0889d1f42480.png)
      
      fix after:
      ![2](https://user-images.githubusercontent.com/26266482/27679405-3588ef12-5ceb-11e7-9756-0a93815cd698.png)
      
      ## How was this patch tested?
      manual tests
      
      Author: guoxiaolong <guo.xiaolong1@zte.com.cn>
      
      Closes #18464 from guoxiaolongzte/SPARK-21250.
      d913db16
  3. Jul 02, 2017
    • Rui Zha's avatar
      [SPARK-18004][SQL] Make sure the date or timestamp related predicate can be... · d4107196
      Rui Zha authored
      [SPARK-18004][SQL] Make sure the date or timestamp related predicate can be pushed down to Oracle correctly
      
      ## What changes were proposed in this pull request?
      
      Move the `compileValue` method from JDBCRDD to JdbcDialect, and override `compileValue` in OracleDialect to rewrite Oracle-specific timestamp and date literals in the WHERE clause.
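
      A hedged sketch of dialect-specific literal compilation (the real `OracleDialect` implementation may format the literals differently):

      ```scala
      import java.sql.{Date, Timestamp}

      // Illustrative: compile filter values into SQL literals the target
      // database understands, instead of relying on value.toString.
      def compileValue(value: Any): Any = value match {
        case s: String    => s"'${s.replace("'", "''")}'"
        case t: Timestamp => s"{ts '$t'}"   // JDBC escape syntax
        case d: Date      => s"{d '$d'}"
        case other        => other
      }
      ```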
      
      ## How was this patch tested?
      
      An integration test has been added.
      
      Author: Rui Zha <zrdt713@gmail.com>
      Author: Zharui <zrdt713@gmail.com>
      
      Closes #18451 from SharpRay/extend-compileValue-to-dialects.
      d4107196
    • Yanbo Liang's avatar
      [SPARK-19852][PYSPARK][ML] Python StringIndexer supports 'keep' to handle invalid data · c19680be
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      This PR is to maintain API parity with changes made in SPARK-17498 to support a new option
      'keep' in StringIndexer to handle unseen labels or NULL values with PySpark.
      
      Note: This is an updated version of #17237; the primary author of this PR is VinceShieh.
      ## How was this patch tested?
      Unit tests.
      
      Author: VinceShieh <vincent.xie@intel.com>
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #18453 from yanboliang/spark-19852.
      c19680be
    • Xingbo Jiang's avatar
      [SPARK-21260][SQL][MINOR] Remove the unused OutputFakerExec · c605fee0
      Xingbo Jiang authored
      ## What changes were proposed in this pull request?
      
      OutputFakerExec was added long ago and is not used anywhere now so we should remove it.
      
      ## How was this patch tested?
      N/A
      
      Author: Xingbo Jiang <xingbo.jiang@databricks.com>
      
      Closes #18473 from jiangxb1987/OutputFakerExec.
      c605fee0
  4. Jul 01, 2017
    • Devaraj K's avatar
      [SPARK-21170][CORE] Utils.tryWithSafeFinallyAndFailureCallbacks throws... · 6beca9ce
      Devaraj K authored
      [SPARK-21170][CORE] Utils.tryWithSafeFinallyAndFailureCallbacks throws IllegalArgumentException: Self-suppression not permitted
      
      ## What changes were proposed in this pull request?
      
      Do not add the exception to the suppressed list if it is the same instance as `originalThrowable`.
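
      A minimal sketch of the guarded pattern (illustrative, not the exact `Utils` code):

      ```scala
      // Run a block with a finally action; only add the finally-block failure as a
      // suppressed exception when it is a DIFFERENT instance, otherwise the JVM
      // throws IllegalArgumentException("Self-suppression not permitted").
      def tryWithSafeFinally[T](block: => T)(finallyBlock: => Unit): T = {
        var originalThrowable: Throwable = null
        try {
          block
        } catch {
          case t: Throwable =>
            originalThrowable = t
            throw t
        } finally {
          try {
            finallyBlock
          } catch {
            case t: Throwable if originalThrowable != null && (originalThrowable ne t) =>
              originalThrowable.addSuppressed(t)
              throw originalThrowable
          }
        }
      }
      ```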
      
      ## How was this patch tested?
      
      Added new tests to verify this; these tests fail without the source code changes and pass with the change.
      
      Author: Devaraj K <devaraj@apache.org>
      
      Closes #18384 from devaraj-kavali/SPARK-21170.
      6beca9ce
    • Ruifeng Zheng's avatar
      [SPARK-18518][ML] HasSolver supports override · e0b047ea
      Ruifeng Zheng authored
      ## What changes were proposed in this pull request?
      1. Make params support being non-final via a `finalFields` option
      2. Generate `HasSolver` with `finalFields = false`
      3. Override `solver` in LiR and GLR, and make MLPC inherit `HasSolver`
      
      ## How was this patch tested?
      existing tests
      
      Author: Ruifeng Zheng <ruifengz@foxmail.com>
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #16028 from zhengruifeng/param_non_final.
      e0b047ea
    • actuaryzhang's avatar
      [SPARK-21275][ML] Update GLM test to use supportedFamilyNames · 37ef32e5
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      Update GLM test to use supportedFamilyNames as suggested here:
      https://github.com/apache/spark/pull/16699#discussion-diff-100574976R855
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #18495 from actuaryzhang/mlGlmTest2.
      37ef32e5
  5. Jun 30, 2017
    • Reynold Xin's avatar
      [SPARK-21273][SQL] Propagate logical plan stats using visitor pattern and mixin · b1d719e7
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      We currently implement statistics propagation directly in the logical plan. Given that we already have two different implementations, it'd make sense to decouple the two and add stats propagation using a mixin. This would reduce the coupling between the logical plan and statistics handling.
      
      This can also be a powerful pattern in the future to add additional properties (e.g. constraints).
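
      A hedged sketch of the visitor-plus-mixin idea with toy plan nodes (illustrative only; these are not the actual Catalyst classes):

      ```scala
      sealed trait Plan { def children: Seq[Plan] }
      case class Scan(rows: BigInt) extends Plan { def children = Nil }
      case class Filter(child: Plan) extends Plan { def children = Seq(child) }

      // The visitor keeps the estimation logic out of the plan nodes themselves.
      trait PlanVisitor[T] {
        def visit(p: Plan): T = p match {
          case s: Scan   => visitScan(s)
          case f: Filter => visitFilter(f)
        }
        def visitScan(s: Scan): T
        def visitFilter(f: Filter): T
      }

      // One mixin/object per estimation strategy (e.g. size-only vs. CBO).
      object BasicStats extends PlanVisitor[BigInt] {
        def visitScan(s: Scan): BigInt = s.rows
        def visitFilter(f: Filter): BigInt = visit(f.child) / 2   // toy selectivity
      }
      ```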
      
      ## How was this patch tested?
      Should be covered by existing test cases.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #18479 from rxin/stats-trait.
      b1d719e7
    • wangzhenhua's avatar
      [SPARK-21127][SQL] Update statistics after data changing commands · 61b5df56
      wangzhenhua authored
      ## What changes were proposed in this pull request?
      
      Update stats after the following data changing commands:
      
      - InsertIntoHadoopFsRelationCommand
      - InsertIntoHiveTable
      - LoadDataCommand
      - TruncateTableCommand
      - AlterTableSetLocationCommand
      - AlterTableDropPartitionCommand
      
      ## How was this patch tested?
      Added new test cases.
      
      Author: wangzhenhua <wangzhenhua@huawei.com>
      Author: Zhenhua Wang <wzh_zju@163.com>
      
      Closes #18334 from wzhfy/changeStatsForOperation.
      61b5df56
    • Wenchen Fan's avatar
      [SPARK-17528][SQL] data should be copied properly before saving into InternalRow · 4eb41879
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      For performance reasons, `UnsafeRow.getString`, `getStruct`, etc. return a "pointer" that points to a memory region of this unsafe row. This makes the unsafe projection a little dangerous, because all of its output rows share one instance.
      
      When we implement SQL operators, we should be careful not to cache the input rows, because they may be produced by an unsafe projection from the child operator and thus their content may change over time.
      
      However, when we update values of an InternalRow (e.g. in mutable projection and safe projection), we only copy UTF8String; we should also copy InternalRow, ArrayData and MapData. This PR fixes this, and also fixes the copy methods of various InternalRow, ArrayData and MapData implementations.
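
      A minimal sketch of the hazard being fixed (assuming the rows come from an unsafe projection that reuses a single output row):

      ```scala
      import org.apache.spark.sql.catalyst.InternalRow
      import scala.collection.mutable.ArrayBuffer

      // An unsafe projection returns the SAME row instance on every call, so
      // buffering its output without copy() silently corrupts earlier entries.
      val buffer = ArrayBuffer.empty[InternalRow]
      def consume(row: InternalRow): Unit = {
        buffer += row.copy()   // `buffer += row` would alias the reused instance
      }
      ```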
      
      ## How was this patch tested?
      
      new regression tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #18483 from cloud-fan/fix-copy.
      4eb41879
    • Liang-Chi Hsieh's avatar
      [SPARK-21052][SQL][FOLLOW-UP] Add hash map metrics to join · fd132552
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      Remove `numHashCollisions` in `BytesToBytesMap`, and change `getAverageProbesPerLookup()` to `getAverageProbesPerLookup` as suggested.
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #18480 from viirya/SPARK-21052-followup.
      fd132552
    • Xiao Li's avatar
      [SPARK-21129][SQL] Arguments of SQL function call should not be named expressions · eed9c4ef
      Xiao Li authored
      ### What changes were proposed in this pull request?
      
      Function arguments should not be named expressions. This could cause two issues:
      - Misleading error message
      - Unexpected query results when the column name is `distinct`, which is not a reserved word in our parser.
      
      ```
      spark-sql> select count(distinct c1, distinct c2) from t1;
      Error in query: cannot resolve '`distinct`' given input columns: [c1, c2]; line 1 pos 26;
      'Project [unresolvedalias('count(c1#30, 'distinct), None)]
      +- SubqueryAlias t1
         +- CatalogRelation `default`.`t1`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [c1#30, c2#31]
      ```
      
      After the fix, the error message becomes
      ```
      spark-sql> select count(distinct c1, distinct c2) from t1;
      Error in query:
      extraneous input 'c2' expecting {')', ',', '.', '[', 'OR', 'AND', 'IN', NOT, 'BETWEEN', 'LIKE', RLIKE, 'IS', EQ, '<=>', '<>', '!=', '<', LTE, '>', GTE, '+', '-', '*', '/', '%', 'DIV', '&', '|', '||', '^'}(line 1, pos 35)
      
      == SQL ==
      select count(distinct c1, distinct c2) from t1
      -----------------------------------^^^
      ```
      
      ### How was this patch tested?
      Added a test case to parser suite.
      
      Author: Xiao Li <gatorsmile@gmail.com>
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #18338 from gatorsmile/parserDistinctAggFunc.
      eed9c4ef
    • 曾林西's avatar
      [SPARK-21223] Change fileToAppInfo in FsHistoryProvider to fix concurrent issue. · 1fe08d62
      曾林西 authored
      # What issue does this PR address?
      Jira: https://issues.apache.org/jira/browse/SPARK-21223
      Fix the thread-safety issue in FsHistoryProvider.
      Currently, Spark HistoryServer uses a HashMap named fileToAppInfo in class FsHistoryProvider to store the mapping from event log path to attemptInfo.
      When a thread pool is used to replay the log files in the list and merge the list of old applications with new ones, multiple threads may update fileToAppInfo at the same time, which can cause thread-safety issues such as falling into an infinite loop when the hash table's resize function is called.
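
      A hedged sketch of the direction of the fix (the key and value types are simplified here):

      ```scala
      import java.util.concurrent.ConcurrentHashMap

      // Replace the unsynchronized mutable.HashMap with a ConcurrentHashMap so
      // concurrent replay threads can update it safely.
      val fileToAppInfo = new ConcurrentHashMap[String, String]()
      fileToAppInfo.put("hdfs://logs/app-20170630-0001", "attempt-1")   // illustrative values
      ```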
      
      Author: 曾林西 <zenglinxi@meituan.com>
      
      Closes #18430 from zenglinxi0615/master.
      1fe08d62
    • Yanbo Liang's avatar
      [ML] Fix scala-2.10 build failure of GeneralizedLinearRegressionSuite. · 528c9281
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Fix scala-2.10 build failure of ```GeneralizedLinearRegressionSuite```.
      
      ## How was this patch tested?
      Build with scala-2.10.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #18489 from yanboliang/glr.
      528c9281
    • Xingbo Jiang's avatar
      [SPARK-18294][CORE] Implement commit protocol to support `mapred` package's committer · 3c2fc19d
      Xingbo Jiang authored
      ## What changes were proposed in this pull request?
      
      This PR makes the following changes:
      
      - Implement a new commit protocol `HadoopMapRedCommitProtocol` which support the old `mapred` package's committer;
      - Refactor `SparkHadoopWriter` and `SparkHadoopMapReduceWriter`: they are now combined, so we can support writes through both the mapred and mapreduce APIs via the new `SparkHadoopWriter`, and a lot of duplicated code is removed.
      
      After this change, it should be pretty easy for us to support committers from both the new and the old Hadoop APIs at a high level.
      
      ## How was this patch tested?
      No major behavior change, passed the existing test cases.
      
      Author: Xingbo Jiang <xingbo.jiang@databricks.com>
      
      Closes #18438 from jiangxb1987/SparkHadoopWriter.
      3c2fc19d
    • actuaryzhang's avatar
      [SPARK-18710][ML] Add offset in GLM · 49d767d8
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      Add support for offset in GLM. This is useful for at least two reasons:
      
      1. Account for exposure: e.g., when modeling the number of accidents, we may need to use miles driven as an offset to assess the effect of other factors on accident frequency.
      2. Test incremental effects of new variables: we can use predictions from the existing model as an offset and run a much smaller model on only the new variables. This avoids re-estimating the large model with all variables (old + new) and can be very important for efficient large-scale analysis.
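
      A hedged usage sketch of the Scala API this adds (the column names are hypothetical):

      ```scala
      import org.apache.spark.ml.regression.GeneralizedLinearRegression

      // Model claim counts with log(exposure) supplied as an offset column.
      val glr = new GeneralizedLinearRegression()
        .setFamily("poisson")
        .setLink("log")
        .setLabelCol("claimCount")
        .setOffsetCol("logExposure")
      // val model = glr.fit(trainingData)   // trainingData is assumed to exist
      ```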
      
      ## How was this patch tested?
      New test.
      
      yanboliang srowen felixcheung sethah
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #16699 from actuaryzhang/offset.
      49d767d8
    • actuaryzhang's avatar
      [SPARK-20889][SPARKR] Grouped documentation for COLLECTION column methods · 52981715
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      
      Grouped documentation for column collection methods.
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      Author: Wayne Zhang <actuaryzhang10@gmail.com>
      
      Closes #18458 from actuaryzhang/sparkRDocCollection.
      52981715
  6. Jun 29, 2017
    • actuaryzhang's avatar
      [SPARK-20889][SPARKR] Grouped documentation for MISC column methods · fddb63f4
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      Grouped documentation for column misc methods.
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      Author: Wayne Zhang <actuaryzhang10@gmail.com>
      
      Closes #18448 from actuaryzhang/sparkRDocMisc.
      fddb63f4
    • Herman van Hovell's avatar
      [SPARK-21258][SQL] Fix WindowExec complex object aggregation with spilling · e2f32ee4
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      `WindowExec` currently improperly stores complex objects (UnsafeRow, UnsafeArrayData, UnsafeMapData, UTF8String) during aggregation by keeping a reference in the buffer used by `GeneratedMutableProjections` to the actual input data. Things go wrong when the input object (or its backing bytes) is reused for other things. This can happen in window functions when they start spilling to disk. When reading back the spill files, the `UnsafeSorterSpillReader` reuses the buffer to which the `UnsafeRow` points, leading to weird corruption scenarios. Note that this only happens for aggregate functions that preserve (parts of) their input, for example `FIRST`, `LAST`, `MIN` & `MAX`.
      
      This was not seen before, because the spilling logic was not doing actual spills as much and actually used an in-memory page. This page was not cleaned up during window processing and made sure unsafe objects point to their own dedicated memory location. This was changed by https://github.com/apache/spark/pull/16909, after this PR Spark spills more eagerly.
      
      This PR provides a surgical fix because we are close to releasing Spark 2.2. This change just makes sure that there cannot be any object reuse, at the expense of a little bit of performance. We will follow up with a more subtle solution at a later point.
      
      ## How was this patch tested?
      Added a regression test to `DataFrameWindowFunctionsSuite`.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #18470 from hvanhovell/SPARK-21258.
      e2f32ee4
    • Shixiong Zhu's avatar
      [SPARK-21253][CORE][HOTFIX] Fix Scala 2.10 build · cfc696f4
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      A follow up PR to fix Scala 2.10 build for #18472
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #18478 from zsxwing/SPARK-21253-2.
      cfc696f4
    • IngoSchuster's avatar
      [SPARK-21176][WEB UI] Limit number of selector threads for admin ui proxy servlets to 8 · 88a536ba
      IngoSchuster authored
      ## What changes were proposed in this pull request?
      Please see also https://issues.apache.org/jira/browse/SPARK-21176
      
      This change limits the number of selector threads that Jetty creates to a maximum of 8 per proxy servlet (the Jetty default is the number of processors / 2).
      The `newHttpClient` method of Jetty's ProxyServlet class is overridden to avoid the Jetty defaults (which are designed for high-performance HTTP servers).
      Once https://github.com/eclipse/jetty.project/issues/1643 is available, the code could be cleaned up to avoid the method override.
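
      A hedged sketch of the override (the selector count and wiring are illustrative; the actual change lives in Spark's Jetty proxy setup):

      ```scala
      import org.eclipse.jetty.client.HttpClient
      import org.eclipse.jetty.client.http.HttpClientTransportOverHTTP
      import org.eclipse.jetty.proxy.ProxyServlet

      // Cap the proxy's client selector threads instead of taking Jetty's default
      // (number of processors / 2).
      val proxy = new ProxyServlet {
        override def newHttpClient(): HttpClient =
          new HttpClient(new HttpClientTransportOverHTTP(8), null)   // at most 8 selectors
      }
      ```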
      
      I really need this on v2.1.1 - what is the best way to backport it (automatic merge works fine)? Shall I create another PR?
      
      ## How was this patch tested?
      The patch was tested manually on a Spark cluster with a head node that has 88 processors using JMX to verify that the number of selector threads is now limited to 8 per proxy.
      
      gurvindersingh zsxwing can you please review the change?
      
      Author: IngoSchuster <ingo.schuster@de.ibm.com>
      Author: Ingo Schuster <ingo.schuster@de.ibm.com>
      
      Closes #18437 from IngoSchuster/master.
      88a536ba
    • Shixiong Zhu's avatar
      [SPARK-21253][CORE] Disable spark.reducer.maxReqSizeShuffleToMem · 80f7ac3a
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      Disable spark.reducer.maxReqSizeShuffleToMem because it breaks the old shuffle service.
      
      Credits to wangyum
      
      Closes #18466
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      Author: Yuming Wang <wgyumg@gmail.com>
      
      Closes #18467 from zsxwing/SPARK-21253.
      80f7ac3a
    • Shixiong Zhu's avatar
      [SPARK-21253][CORE] Fix a bug that StreamCallback may not be notified if network errors happen · 4996c539
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      If a network error happens before processing StreamResponse/StreamFailure events, StreamCallback.onFailure won't be called.
      
      This PR fixes `failOutstandingRequests` to also notify outstanding StreamCallbacks.
      
      ## How was this patch tested?
      
      The new unit tests.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #18472 from zsxwing/fix-stream-2.
      4996c539
    • Feng Liu's avatar
      [SPARK-21188][CORE] releaseAllLocksForTask should synchronize the whole method · f9151beb
      Feng Liu authored
      ## What changes were proposed in this pull request?
      
      Since the objects `readLocksByTask`, `writeLocksByTask` and `infos` are coupled and may be modified by other threads concurrently, all reads and writes of them in the method `releaseAllLocksForTask` should be protected by a single synchronized block, like other similar methods.
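
      A toy sketch of the locking discipline (these are not `BlockInfoManager`'s real fields):

      ```scala
      import scala.collection.mutable

      class LockRegistry {
        private val readLocksByTask = mutable.Map.empty[Long, mutable.Set[String]]
        private val writeLocksByTask = mutable.Map.empty[Long, mutable.Set[String]]

        // The whole method holds the monitor, so other threads can never observe
        // or mutate a half-updated state between the individual reads and writes.
        def releaseAllLocksForTask(taskId: Long): Seq[String] = synchronized {
          (readLocksByTask.remove(taskId).getOrElse(mutable.Set.empty[String]) ++
            writeLocksByTask.remove(taskId).getOrElse(mutable.Set.empty[String])).toSeq
        }
      }
      ```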
      
      ## How was this patch tested?
      
      existing tests
      
      Author: Feng Liu <fengliu@databricks.com>
      
      Closes #18400 from liufengdb/synchronize.
      f9151beb
    • Liang-Chi Hsieh's avatar
      [SPARK-21052][SQL] Add hash map metrics to join · 18066f2e
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      This adds the average hash map probe metric to join operators such as `BroadcastHashJoin` and `ShuffledHashJoin`.
      
      This PR adds the API to `HashedRelation` to get average hash map probe.
      
      ## How was this patch tested?
      
      Related test cases are added.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #18301 from viirya/SPARK-21052.
      18066f2e
    • 杨治国10192065's avatar
      [SPARK-21225][CORE] Considering CPUS_PER_TASK when allocating task slots for each WorkerOffer · 29bd251d
      杨治国10192065 authored
      JIRA Issue: https://issues.apache.org/jira/browse/SPARK-21225
      In the function `resourceOffers`, a variable `tasks` is declared to store the tasks that have been allocated an executor. It is declared like this:
      `val tasks = shuffledOffers.map(o => new ArrayBuffer[TaskDescription](o.cores))`
      However, this code only considers the situation of one task per core. If the user sets `spark.task.cpus` to 2 or 3, it really does not need that much memory. It can be modified as follows:
      `val tasks = shuffledOffers.map(o => new ArrayBuffer[TaskDescription](o.cores / CPUS_PER_TASK))`
      Another benefit of this change is that it makes it easier to understand how tasks are allocated to offers.
      
      Author: 杨治国10192065 <yang.zhiguo@zte.com.cn>
      
      Closes #18435 from JackYangzg/motifyTaskCoreDisp.
      29bd251d