  1. May 20, 2016
    • Sameer Agarwal's avatar
      [SPARK-15078] [SQL] Add all TPCDS 1.4 benchmark queries for SparkSQL · a78d6ce3
      Sameer Agarwal authored
      ## What changes were proposed in this pull request?
      
      Now that SparkSQL supports all TPC-DS queries, this patch adds all 99 benchmark queries inside SparkSQL.
      
      ## How was this patch tested?
      
      Benchmark only
      
      Author: Sameer Agarwal <sameer@databricks.com>
      
      Closes #13188 from sameeragarwal/tpcds-all.
      a78d6ce3
    • Reynold Xin's avatar
      [SPARK-15454][SQL] Filter out files starting with _ · dcac8e6f
      Reynold Xin authored
      ## What changes were proposed in this pull request?
Many other systems (e.g. Impala) use `_xxx` files as staging output, and Spark should not be reading those files.
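In practice the change amounts to treating underscore-prefixed files as non-data files when listing input. A minimal sketch of such a filename predicate, using a hypothetical helper name rather than Spark's actual API:

```scala
// Hypothetical sketch: decide whether a listed file should be read as data.
def isDataFile(fileName: String): Boolean = {
  // Skip staging/metadata output such as _SUCCESS or _temporary, and hidden files.
  !fileName.startsWith("_") && !fileName.startsWith(".")
}
```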
      
      ## How was this patch tested?
      Added a unit test case.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #13227 from rxin/SPARK-15454.
      dcac8e6f
    • Davies Liu's avatar
      [SPARK-15438][SQL] improve explain of whole stage codegen · 0e70fd61
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Currently, the explain of a query with whole-stage codegen looks like this
      ```
      >>> df = sqlCtx.range(1000);df2 = sqlCtx.range(1000);df.join(pyspark.sql.functions.broadcast(df2), 'id').explain()
      == Physical Plan ==
      WholeStageCodegen
      :  +- Project [id#1L]
      :     +- BroadcastHashJoin [id#1L], [id#4L], Inner, BuildRight, None
      :        :- Range 0, 1, 4, 1000, [id#1L]
      :        +- INPUT
      +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, bigint]))
         +- WholeStageCodegen
            :  +- Range 0, 1, 4, 1000, [id#4L]
      ```
      
The problem is that the plan looks very different from the logical plan, making it hard to understand (especially when the logical plan is not shown alongside it).
      
      This PR will change it to:
      
      ```
      >>> df = sqlCtx.range(1000);df2 = sqlCtx.range(1000);df.join(pyspark.sql.functions.broadcast(df2), 'id').explain()
      == Physical Plan ==
      *Project [id#0L]
      +- *BroadcastHashJoin [id#0L], [id#3L], Inner, BuildRight, None
         :- *Range 0, 1, 4, 1000, [id#0L]
         +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, bigint, false]))
            +- *Range 0, 1, 4, 1000, [id#3L]
      ```
      
The `*` before an operator means that it is part of whole-stage codegen, which is easy to understand.
      
      ## How was this patch tested?
      
Manually ran some queries and checked the explain output.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #13204 from davies/explain_codegen.
      0e70fd61
    • Michael Armbrust's avatar
      [SPARK-10216][SQL] Revert "[] Avoid creating empty files during overwrit… · 2ba3ff04
      Michael Armbrust authored
      This reverts commit 8d05a7a9 from #12855, which seems to have caused regressions when working with empty DataFrames.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #13181 from marmbrus/revert12855.
      2ba3ff04
    • Shixiong Zhu's avatar
      [SPARK-15190][SQL] Support using SQLUserDefinedType for case classes · dfa61f7b
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
Right now, inferring the schema for case classes happens before looking for the SQLUserDefinedType annotation, so the SQLUserDefinedType annotation on case classes doesn't work.
      
This PR simply changes the inference order to resolve it. I also re-enabled the java.math.BigDecimal test and added two tests for `List`.
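For illustration, annotating a case class with `SQLUserDefinedType` looks roughly like the sketch below. The UDT class and its members are assumptions for this example rather than code from the PR, and since the UDT API is not public in 2.0, this pattern is what Spark's own test suites use:

```scala
import org.apache.spark.sql.types._
import org.apache.spark.unsafe.types.UTF8String

// Hypothetical UDT backing the annotated case class below; with this change the
// annotation is honored instead of falling back to the inferred case-class schema.
class UDTCaseClassUDT extends UserDefinedType[UDTCaseClass] {
  override def sqlType: DataType = StringType
  override def serialize(obj: UDTCaseClass): Any = UTF8String.fromString(obj.uri.toString)
  override def deserialize(datum: Any): UDTCaseClass = UDTCaseClass(new java.net.URI(datum.toString))
  override def userClass: Class[UDTCaseClass] = classOf[UDTCaseClass]
}

@SQLUserDefinedType(udt = classOf[UDTCaseClassUDT])
case class UDTCaseClass(uri: java.net.URI)
```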
      
      ## How was this patch tested?
      
      `encodeDecodeTest(UDTCaseClass(new java.net.URI("http://spark.apache.org/")), "udt with case class")`
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #12965 from zsxwing/SPARK-15190.
      dfa61f7b
    • Kousuke Saruta's avatar
      [SPARK-15165] [SPARK-15205] [SQL] Introduce place holder for comments in generated code · 22947cd0
      Kousuke Saruta authored
      ## What changes were proposed in this pull request?
      
This PR introduces placeholders for comments in generated code; the purpose is the same as #12939, but this approach is much safer.

The generated code that gets compiled doesn't include the actual comments, only placeholders.

Placeholders in the generated code are replaced with the actual comments only at logging time.
      
      Also, this PR can resolve SPARK-15205.
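A rough sketch of the placeholder idea, with hypothetical names rather than the PR's actual classes:

```scala
import scala.collection.mutable

// Hypothetical sketch: generated code carries short placeholder tokens; the full
// comment text is substituted in only when the code is pretty-printed for logging,
// so the source handed to the compiler never contains the comments themselves.
class CommentHolder {
  private val comments = mutable.Map.empty[String, String]
  private var counter = 0

  /** Returns a placeholder token to embed in generated code instead of the comment. */
  def register(comment: String): String = {
    counter += 1
    val placeholder = s"/*generated_comment_$counter*/"
    comments(placeholder) = s"/* $comment */"
    placeholder
  }

  /** Expands placeholders back into readable comments; used only at logging time. */
  def expandForLogging(code: String): String =
    comments.foldLeft(code) { case (c, (token, full)) => c.replace(token, full) }
}
```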
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #12979 from sarutak/SPARK-15205.
      22947cd0
    • Davies Liu's avatar
      [HOTFIX] disable stress test · 5a25cd4f
      Davies Liu authored
      5a25cd4f
    • wm624@hotmail.com's avatar
      [SPARK-15360][SPARK-SUBMIT] Should print spark-submit usage when no arguments is specified · fe2fcb48
      wm624@hotmail.com authored
In 2.0, `./bin/spark-submit` doesn't print usage; instead it raises an exception.
In this PR, exception handling is added in Main.java where the exception is thrown. In the handling code, if there are no additional arguments, it prints the usage message.
      
Manually tested:
      ./bin/spark-submit
      Usage: spark-submit [options] <app jar | python file> [app arguments]
      Usage: spark-submit --kill [submission ID] --master [spark://...]
      Usage: spark-submit --status [submission ID] --master [spark://...]
      Usage: spark-submit run-example [options] example-class [example args]
      
      Options:
        --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
        --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                                    on one of the worker machines inside the cluster ("cluster")
                                    (Default: client).
        --class CLASS_NAME          Your application's main class (for Java / Scala apps).
        --name NAME                 A name of your application.
        --jars JARS                 Comma-separated list of local jars to include on the driver
                                    and executor classpaths.
        --packages                  Comma-separated list of maven coordinates of jars to include
                                    on the driver and executor classpaths. Will search the local
                                    maven repo, then maven central and any additional remote
                                    repositories given by --repositories. The format for the
                                    coordinates should be groupId:artifactId:version.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #13163 from wangmiao1981/submit.
      fe2fcb48
    • Takuya UESHIN's avatar
      [SPARK-15400][SQL] CreateNamedStruct and CreateNamedStructUnsafe should... · 2cbe96e6
      Takuya UESHIN authored
      [SPARK-15400][SQL] CreateNamedStruct and CreateNamedStructUnsafe should preserve metadata of value expressions if it is NamedExpression.
      
      ## What changes were proposed in this pull request?
      
`CreateNamedStruct` and `CreateNamedStructUnsafe` should preserve the metadata of value expressions if they are `NamedExpression`s, as `CreateStruct` and `CreateStructUnsafe` do.
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #13193 from ueshin/issues/SPARK-15400.
      2cbe96e6
    • Reynold Xin's avatar
      [SPARK-15435][SQL] Append Command to all commands · e8adc552
      Reynold Xin authored
      ## What changes were proposed in this pull request?
We started a convention of appending the Command suffix to all SQL commands. However, not all commands follow it. This patch adds the Command suffix to all RunnableCommands.
      
      ## How was this patch tested?
      Updated test cases to reflect the renames.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #13215 from rxin/SPARK-15435.
      e8adc552
    • Takuya UESHIN's avatar
      [SPARK-15308][SQL] RowEncoder should preserve nested column name. · d2e1aa97
      Takuya UESHIN authored
      ## What changes were proposed in this pull request?
      
      The following code generates wrong schema:
      
      ```
      val schema = new StructType().add(
        "struct",
        new StructType()
          .add("i", IntegerType, nullable = false)
          .add(
            "s",
            new StructType().add("int", IntegerType, nullable = false),
            nullable = false),
        nullable = false)
      val ds = sqlContext.range(10).map(l => Row(l, Row(l)))(RowEncoder(schema))
      ds.printSchema()
      ```
      
      This should print as follows:
      
      ```
      root
       |-- struct: struct (nullable = false)
       |    |-- i: integer (nullable = false)
       |    |-- s: struct (nullable = false)
       |    |    |-- int: integer (nullable = false)
      ```
      
      but the result is:
      
      ```
      root
       |-- struct: struct (nullable = false)
       |    |-- col1: integer (nullable = false)
       |    |-- col2: struct (nullable = false)
       |    |    |-- col1: integer (nullable = false)
      ```
      
      This PR fixes `RowEncoder` to preserve nested column name.
      
      ## How was this patch tested?
      
      Existing tests and I added a test to check if `RowEncoder` preserves nested column name.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #13090 from ueshin/issues/SPARK-15308.
      d2e1aa97
    • Yanbo Liang's avatar
      [SPARK-15222][SPARKR][ML] SparkR ML examples update in 2.0 · 9a9c6f5c
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Update example code in examples/src/main/r/ml.R to reflect the new algorithms.
      * spark.glm and glm
      * spark.survreg
      * spark.naiveBayes
      * spark.kmeans
      
      ## How was this patch tested?
      Offline test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #13000 from yanboliang/spark-15222.
      9a9c6f5c
    • WeichenXu's avatar
      [SPARK-15203][DEPLOY] The spark daemon shell script error, daemon process... · a3ceb875
      WeichenXu authored
      [SPARK-15203][DEPLOY] The spark daemon shell script error, daemon process start successfully but script output fail message
      
      ## What changes were proposed in this pull request?
      
Fix the bug where the spark daemon shell script reports an error: the daemon process starts successfully, but the script outputs a failure message.
      
      ## How was this patch tested?
      
Existing tests.
      
      Author: WeichenXu <WeichenXu123@outlook.com>
      
      Closes #13172 from WeichenXu123/fix-spark-15203.
      a3ceb875
    • Liang-Chi Hsieh's avatar
      [SPARK-15444][PYSPARK][ML][HOTFIX] Default value mismatch of param... · 4e739331
      Liang-Chi Hsieh authored
      [SPARK-15444][PYSPARK][ML][HOTFIX] Default value mismatch of param linkPredictionCol for GeneralizedLinearRegression
      
      ## What changes were proposed in this pull request?
      
The default value of the param linkPredictionCol for GeneralizedLinearRegression differs between PySpark and Scala, because the defaults introduced in #13106 and #13129 conflict. This causes ml.tests to fail.
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Liang-Chi Hsieh <simonh@tw.ibm.com>
      
      Closes #13220 from viirya/hotfix-regresstion.
      4e739331
    • Andrew Or's avatar
      [SPARK-15417][SQL][PYTHON] PySpark shell always uses in-memory catalog · c32b1b16
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      There is no way to use the Hive catalog in `pyspark-shell`. This is because we used to create a `SparkContext` before calling `SparkSession.enableHiveSupport().getOrCreate()`, which just gets the existing `SparkContext` instead of creating a new one. As a result, `spark.sql.catalogImplementation` was never propagated.
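For reference, the ordering that avoids the problem is to let the session builder create the context; a sketch, not the shell's actual startup code:

```scala
import org.apache.spark.sql.SparkSession

// Build the SparkSession first so enableHiveSupport() can set
// spark.sql.catalogImplementation before any SparkContext exists,
// then derive the context from the session.
val spark = SparkSession.builder()
  .appName("pyspark-shell")
  .enableHiveSupport()
  .getOrCreate()
val sc = spark.sparkContext
```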
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #13203 from andrewor14/fix-pyspark-shell.
      c32b1b16
    • Andrew Or's avatar
      [SPARK-15421][SQL] Validate DDL property values · 25737501
      Andrew Or authored
      ## What changes were proposed in this pull request?
      
      When we parse DDLs involving table or database properties, we need to validate the values.
      E.g. if we alter a database's property without providing a value:
      ```
      ALTER DATABASE my_db SET DBPROPERTIES('some_key')
      ```
      Then we'll ignore it with Hive, but override the property with the in-memory catalog. Inconsistencies like these arise because we don't validate the property values.
      
      In such cases, we should throw exceptions instead.
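For example (a usage sketch, assuming an existing Hive-enabled session; `my_db` and the property names are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()
// The explicit key/value form remains valid...
spark.sql("ALTER DATABASE my_db SET DBPROPERTIES ('some_key' = 'some_value')")
// ...while a property without a value should now throw instead of behaving inconsistently:
// spark.sql("ALTER DATABASE my_db SET DBPROPERTIES ('some_key')")
```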
      
      ## How was this patch tested?
      
      `DDLCommandSuite`
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #13205 from andrewor14/ddl-prop-values.
      25737501
    • gatorsmile's avatar
      [SPARK-15367][SQL] Add refreshTable back · 39fd4690
      gatorsmile authored
      #### What changes were proposed in this pull request?
      `refreshTable` was a method in `HiveContext`. It was deleted accidentally while we were migrating the APIs. This PR is to add it back to `HiveContext`.
      
      In addition, in `SparkSession`, we put it under the catalog namespace (`SparkSession.catalog.refreshTable`).
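A usage sketch based on the description above ("my_table" is a placeholder table name):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()
spark.catalog.refreshTable("my_table")   // exposed under the SparkSession catalog namespace
// The restored HiveContext.refreshTable offers the same behavior for the compatibility API.
```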
      
      #### How was this patch tested?
Changed the existing test cases to use the function `refreshTable`. Also added a test case for refreshTable in `hivecontext-compatibility`.
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #13156 from gatorsmile/refreshTable.
      39fd4690
    • Yanbo Liang's avatar
      [SPARK-15339][ML] ML 2.0 QA: Scala APIs and code audit for regression · c94b34eb
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      * ```GeneralizedLinearRegression``` API docs enhancement.
* The default value of ```GeneralizedLinearRegression``` ```linkPredictionCol``` is now not set, rather than an empty string. This is consistent with other similar params such as ```weightCol```.
      * Make some methods more private.
      * Fix a minor bug of LinearRegression.
      * Fix some other issues.
      
      ## How was this patch tested?
      Existing tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #13129 from yanboliang/spark-15339.
      c94b34eb
    • sethah's avatar
      [SPARK-15394][ML][DOCS] User guide typos and grammar audit · 5e203505
      sethah authored
      ## What changes were proposed in this pull request?
      
      Correct some typos and incorrectly worded sentences.
      
      ## How was this patch tested?
      
      Doc changes only.
      
Note that many of these changes were identified by whomfire01.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #13180 from sethah/ml_guide_audit.
      5e203505
    • Zheng RuiFeng's avatar
      [SPARK-15398][ML] Update the warning message to recommend ML usage · 47a2940d
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
MLlib is not recommended for use, and some of its methods are even deprecated.
Update the warning message to recommend using ML instead.
      ```
        def showWarning() {
          System.err.println(
            """WARN: This is a naive implementation of Logistic Regression and is given as an example!
              |Please use either org.apache.spark.mllib.classification.LogisticRegressionWithSGD or
              |org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
              |for more conventional use.
            """.stripMargin)
        }
      ```
      To
      ```
        def showWarning() {
          System.err.println(
            """WARN: This is a naive implementation of Logistic Regression and is given as an example!
              |Please use org.apache.spark.ml.classification.LogisticRegression
              |for more conventional use.
            """.stripMargin)
        }
      ```
      
      ## How was this patch tested?
      local build
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #13190 from zhengruifeng/update_recd.
      47a2940d
    • wm624@hotmail.com's avatar
      [SPARK-15363][ML][EXAMPLE] Example code shouldn't use VectorImplicits._, asML/fromML · 4c7a6b38
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?
      
In this DataFrame example, we were using VectorImplicits._, which is a private API.

Since the Vectors object has a public API, we use Vectors.fromML instead of the implicits.
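A sketch of the substitution described above:

```scala
import org.apache.spark.ml.{linalg => mlLinalg}
import org.apache.spark.mllib.linalg.Vectors

// Convert an ml vector to an mllib vector through the public Vectors.fromML,
// rather than through the private VectorImplicits conversions.
val mlVector: mlLinalg.Vector = mlLinalg.Vectors.dense(1.0, 2.0, 3.0)
val mllibVector = Vectors.fromML(mlVector)
```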
      
      ## How was this patch tested?
      
Manually ran the example.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #13213 from wangmiao1981/ml.
      4c7a6b38
    • Lianhui Wang's avatar
      [SPARK-15335][SQL] Implement TRUNCATE TABLE Command · 09a00510
      Lianhui Wang authored
      ## What changes were proposed in this pull request?
      
Like Hive, Spark SQL should support the TRUNCATE TABLE command. See the Hive language manual: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
Below is the related Hive JIRA: https://issues.apache.org/jira/browse/HIVE-446
This PR implements such a command for truncating tables, excluding column truncation (HIVE-4005).
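A usage sketch (assuming an existing session; "my_table" is a placeholder name):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()
spark.sql("TRUNCATE TABLE my_table")
// Hive's syntax also allows a partition spec, e.g.
//   TRUNCATE TABLE my_table PARTITION (ds = '2016-05-20')
// whether the partition form is covered here is not stated in this description.
```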
      
      ## How was this patch tested?
      Added a test case.
      
      Author: Lianhui Wang <lianhuiwang09@gmail.com>
      
      Closes #13170 from lianhuiwang/truncate.
      09a00510
    • Takuya UESHIN's avatar
      [SPARK-15313][SQL] EmbedSerializerInFilter rule should keep exprIds of output... · d5e1c5ac
      Takuya UESHIN authored
      [SPARK-15313][SQL] EmbedSerializerInFilter rule should keep exprIds of output of surrounded SerializeFromObject.
      
      ## What changes were proposed in this pull request?
      
      The following code:
      
      ```
      val ds = Seq(("a", 1), ("b", 2), ("c", 3)).toDS()
      ds.filter(_._1 == "b").select(expr("_1").as[String]).foreach(println(_))
      ```
      
      throws an Exception:
      
      ```
      org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: _1#420
       at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:50)
       at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88)
       at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87)
      
      ...
       Cause: java.lang.RuntimeException: Couldn't find _1#420 in [_1#416,_2#417]
       at scala.sys.package$.error(package.scala:27)
       at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:94)
       at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:88)
       at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:49)
       at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88)
       at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87)
      ...
      ```
      
      This is because `EmbedSerializerInFilter` rule drops the `exprId`s of output of surrounded `SerializeFromObject`.
      
      The analyzed and optimized plans of the above example are as follows:
      
      ```
      == Analyzed Logical Plan ==
      _1: string
      Project [_1#420]
      +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, scala.Tuple2]._1, true) AS _1#420,input[0, scala.Tuple2]._2 AS _2#421]
         +- Filter <function1>.apply
            +- DeserializeToObject newInstance(class scala.Tuple2), obj#419: scala.Tuple2
               +- LocalRelation [_1#416,_2#417], [[0,1800000001,1,61],[0,1800000001,2,62],[0,1800000001,3,63]]
      
      == Optimized Logical Plan ==
      !Project [_1#420]
      +- Filter <function1>.apply
         +- LocalRelation [_1#416,_2#417], [[0,1800000001,1,61],[0,1800000001,2,62],[0,1800000001,3,63]]
      ```
      
      This PR fixes `EmbedSerializerInFilter` rule to keep `exprId`s of output of surrounded `SerializeFromObject`.
      
      The plans after this patch are as follows:
      
      ```
      == Analyzed Logical Plan ==
      _1: string
      Project [_1#420]
      +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, input[0, scala.Tuple2]._1, true) AS _1#420,input[0, scala.Tuple2]._2 AS _2#421]
         +- Filter <function1>.apply
            +- DeserializeToObject newInstance(class scala.Tuple2), obj#419: scala.Tuple2
               +- LocalRelation [_1#416,_2#417], [[0,1800000001,1,61],[0,1800000001,2,62],[0,1800000001,3,63]]
      
      == Optimized Logical Plan ==
      Project [_1#416]
      +- Filter <function1>.apply
         +- LocalRelation [_1#416,_2#417], [[0,1800000001,1,61],[0,1800000001,2,62],[0,1800000001,3,63]]
      ```
      
      ## How was this patch tested?
      
      Existing tests and I added a test to check if `filter and then select` works.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #13096 from ueshin/issues/SPARK-15313.
      d5e1c5ac
    • Oleg Danilov's avatar
      [SPARK-14261][SQL] Memory leak in Spark Thrift Server · e384c7fb
      Oleg Danilov authored
      Fixed memory leak (HiveConf in the CommandProcessorFactory)
      
      Author: Oleg Danilov <oleg.danilov@wandisco.com>
      
      Closes #12932 from dosoft/SPARK-14261.
      e384c7fb
    • Reynold Xin's avatar
      [SPARK-14990][SQL] Fix checkForSameTypeInputExpr (ignore nullability) · 3ba34d43
      Reynold Xin authored
      ## What changes were proposed in this pull request?
This patch fixes a bug in TypeUtils.checkForSameTypeInputExpr. Previously the code tested for strict equality, which does not take nullability into account.

This is based on https://github.com/apache/spark/pull/12768. This patch fixes a bug there (with empty expressions) and adds a test case.
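A sketch of the comparison the title implies, using a hypothetical helper rather than Spark's TypeUtils:

```scala
import org.apache.spark.sql.types._

// Structural equality of DataTypes that ignores nullability flags.
def sameTypeIgnoringNullability(a: DataType, b: DataType): Boolean = (a, b) match {
  case (ArrayType(ae, _), ArrayType(be, _)) =>
    sameTypeIgnoringNullability(ae, be)
  case (MapType(ak, av, _), MapType(bk, bv, _)) =>
    sameTypeIgnoringNullability(ak, bk) && sameTypeIgnoringNullability(av, bv)
  case (StructType(aFields), StructType(bFields)) =>
    aFields.length == bFields.length && aFields.zip(bFields).forall { case (af, bf) =>
      af.name == bf.name && sameTypeIgnoringNullability(af.dataType, bf.dataType)
    }
  case _ => a == b
}
```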
      
      ## How was this patch tested?
      Added a new test suite and test case.
      
      Closes #12768.
      
      Author: Reynold Xin <rxin@databricks.com>
      Author: Oleg Danilov <oleg.danilov@wandisco.com>
      
      Closes #13208 from rxin/SPARK-14990.
      3ba34d43
  2. May 19, 2016
    • Reynold Xin's avatar
      [SPARK-15075][SPARK-15345][SQL] Clean up SparkSession builder and propagate... · f2ee0ed4
      Reynold Xin authored
      [SPARK-15075][SPARK-15345][SQL] Clean up SparkSession builder and propagate config options to existing sessions if specified
      
      ## What changes were proposed in this pull request?
Currently SparkSession.Builder uses SQLContext.getOrCreate. It should probably be the other way around, i.e. all the core logic goes into SparkSession, and SQLContext just calls into that. This patch does that.
      
      This patch also makes sure config options specified in the builder are propagated to the existing (and of course the new) SparkSession.
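An illustration of the propagation behavior described above (a sketch, assuming no other session is active):

```scala
import org.apache.spark.sql.SparkSession

val first = SparkSession.builder().master("local[2]").appName("demo").getOrCreate()
val second = SparkSession.builder().config("spark.some.option", "value").getOrCreate()
// getOrCreate() returns the existing session, and the builder's config option is
// applied to it rather than silently dropped.
assert(second eq first)
assert(first.conf.get("spark.some.option") == "value")
```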
      
      ## How was this patch tested?
      Updated tests to reflect the change, and also introduced a new SparkSessionBuilderSuite that should cover all the branches.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #13200 from rxin/SPARK-15075.
      f2ee0ed4
    • Kevin Yu's avatar
      [SPARK-11827][SQL] Adding java.math.BigInteger support in Java type inference... · 17591d90
      Kevin Yu authored
      [SPARK-11827][SQL] Adding java.math.BigInteger support in Java type inference for POJOs and Java collections
      
Hello: Can you help check this PR? I am adding support for java.math.BigInteger in the Java bean code path. I saw that internally Spark converts BigInteger to BigDecimal in ColumnType.scala and CatalystRowConverter.scala. I use a similar approach and convert BigInteger to BigDecimal.
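A sketch of the conversion being described (not the exact code path):

```scala
import org.apache.spark.sql.types.Decimal

// A java.math.BigInteger value is widened to a BigDecimal before being stored as a
// Catalyst Decimal, mirroring what ColumnType/CatalystRowConverter do internally.
val big = new java.math.BigInteger("12345678901234567890")
val asDecimal = Decimal(new java.math.BigDecimal(big))
```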
      
      Author: Kevin Yu <qyu@us.ibm.com>
      
      Closes #10125 from kevinyu98/working_on_spark-11827.
      17591d90
    • Sumedh Mungee's avatar
      [SPARK-15321] Fix bug where Array[Timestamp] cannot be encoded/decoded correctly · d5c47f8f
      Sumedh Mungee authored
      ## What changes were proposed in this pull request?
      
      Fix `MapObjects.itemAccessorMethod` to handle `TimestampType`. Without this fix, `Array[Timestamp]` cannot be properly encoded or decoded. To reproduce this, in `ExpressionEncoderSuite`, if you add the following test case:
      
      `encodeDecodeTest(Array(Timestamp.valueOf("2016-01-29 10:00:00")), "array of timestamp")
      `
      ... you will see that (without this fix) it fails with the following output:
      
      ```
      - encode/decode for array of timestamp: [Ljava.sql.Timestamp;fd9ebde *** FAILED ***
        Exception thrown while decoding
        Converted: [0,1000000010,800000001,52a7ccdc36800]
        Schema: value#61615
        root
        -- value: array (nullable = true)
            |-- element: timestamp (containsNull = true)
        Encoder:
        class[value[0]: array<timestamp>] (ExpressionEncoderSuite.scala:312)
      ```
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Sumedh Mungee <smungee@gmail.com>
      
      Closes #13108 from smungee/fix-itemAccessorMethod.
      d5c47f8f
    • Xiangrui Meng's avatar
      Closes #11915 · 66ec2494
      Xiangrui Meng authored
      Closes #8648
      Closes #13089
      66ec2494
    • Sandeep Singh's avatar
      [SPARK-15296][MLLIB] Refactor All Java Tests that use SparkSession · 01cf649c
      Sandeep Singh authored
      ## What changes were proposed in this pull request?
Refactor all Java tests that use SparkSession to extend SharedSparkSession.
      
      ## How was this patch tested?
      Existing Tests
      
      Author: Sandeep Singh <sandeep@techaddict.me>
      
      Closes #13101 from techaddict/SPARK-15296.
      01cf649c
    • Shixiong Zhu's avatar
      [SPARK-15416][SQL] Display a better message for not finding classes removed in Spark 2.0 · 16ba71ab
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
If a `NoClassDefFoundError` or `ClassNotFoundException` is encountered, check whether the missing class is one that was removed in Spark 2.0. If so, the user must be using an incompatible library and we can provide a better message.
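A hypothetical sketch of the idea (names are assumptions, not Spark's internals):

```scala
// Keep a list of class names removed in 2.0; when class loading fails, rethrow
// with a hint pointing at library incompatibility.
val removedClassesInSpark2: Set[String] = Set(
  "org.apache.spark.sql.sources.HadoopFsRelationProvider")

def withBetterMessage(e: ClassNotFoundException): ClassNotFoundException = {
  if (removedClassesInSpark2.exists(c => Option(e.getMessage).exists(_.contains(c)))) {
    new ClassNotFoundException(
      s"${e.getMessage} is removed in Spark 2.0. " +
        "Please check if your library is compatible with Spark 2.0", e)
  } else {
    e
  }
}
```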
      
      ## How was this patch tested?
      
      1. Run `bin/pyspark --packages com.databricks:spark-avro_2.10:2.0.1`
      2. type `sqlContext.read.format("com.databricks.spark.avro").load("src/test/resources/episodes.avro")`.
      
      It will show `java.lang.ClassNotFoundException: org.apache.spark.sql.sources.HadoopFsRelationProvider is removed in Spark 2.0. Please check if your library is compatible with Spark 2.0`
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #13201 from zsxwing/better-message.
      16ba71ab
    • Yanbo Liang's avatar
      [MINOR][ML][PYSPARK] ml.evaluation Scala and Python API sync · 66436778
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      ```ml.evaluation``` Scala and Python API sync.
      
      ## How was this patch tested?
      Only API docs change, no new tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #13195 from yanboliang/evaluation-doc.
      66436778
    • Yanbo Liang's avatar
      [SPARK-15341][DOC][ML] Add documentation for "model.write" to clarify "summary" was not saved · f8107c78
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
Currently ```model.write``` does not save the ```summary``` (if applicable). We should add documentation to clarify this.
We also fixed the incorrect link ```[[MLWriter]]```, changing it to ```[[org.apache.spark.ml.util.MLWriter]]```.
      
      ## How was this patch tested?
      Documentation update, no unit test.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #13131 from yanboliang/spark-15341.
      f8107c78
    • jerryshao's avatar
      [SPARK-15375][SQL][STREAMING] Add ConsoleSink to structure streaming · dcf407de
      jerryshao authored
      ## What changes were proposed in this pull request?
      
Add ConsoleSink to Structured Streaming so users can display DataFrames on the console (useful for debugging and demonstration), similar to the functionality of `DStream#print`. To use it:
      
      ```
          val query = result.write
            .format("console")
            .trigger(ProcessingTime("2 seconds"))
            .startStream()
      ```
      
      ## How was this patch tested?
      
Verified locally.

Not sure whether it is suitable to add to Structured Streaming; please review and help comment, thanks a lot.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #13162 from jerryshao/SPARK-15375.
      dcf407de
    • Sandeep Singh's avatar
      [SPARK-15414][MLLIB] Make the mllib,ml linalg type conversion APIs public · ef43a5fe
      Sandeep Singh authored
      ## What changes were proposed in this pull request?
Open up the APIs for converting between the new and old linear algebra types (in spark.mllib.linalg):
`.asML` and `.fromML` on the `Sparse`/`Dense` `Vector`/`Matrix` types.
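A round-trip sketch using the conversions made public here:

```scala
import org.apache.spark.mllib.linalg.{Vectors => OldVectors}

val oldVec = OldVectors.dense(1.0, 2.0, 3.0)
val newVec = oldVec.asML                    // spark.mllib -> spark.ml
val backAgain = OldVectors.fromML(newVec)   // spark.ml -> spark.mllib
```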
      
      ## How was this patch tested?
      Existing Tests
      
      Author: Sandeep Singh <sandeep@techaddict.me>
      
      Closes #13202 from techaddict/SPARK-15414.
      ef43a5fe
    • Yanbo Liang's avatar
      [SPARK-15361][ML] ML 2.0 QA: Scala APIs audit for ml.clustering · 59e6c556
      Yanbo Liang authored
      ## What changes were proposed in this pull request?
      Audit Scala API for ml.clustering.
Fix some incorrect API documentation and update outdated entries.
      
      ## How was this patch tested?
      Existing unit tests.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #13148 from yanboliang/spark-15361.
      59e6c556
    • DB Tsai's avatar
      [SPARK-15411][ML] Add @since to ml.stat.MultivariateOnlineSummarizer.scala · 5255e55c
      DB Tsai authored
      ## What changes were proposed in this pull request?
      
      Add since to ml.stat.MultivariateOnlineSummarizer.scala
      
      ## How was this patch tested?
      
      unit tests
      
      Author: DB Tsai <dbt@netflix.com>
      
      Closes #13197 from dbtsai/cleanup.
      5255e55c
    • Davies Liu's avatar
      [SPARK-15392][SQL] fix default value of size estimation of logical plan · 5ccecc07
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
We use autoBroadcastJoinThreshold + 1L as the default value of the size estimation. That is not good in 2.0, because we now calculate the size based on the size of the schema, so the estimate could end up below autoBroadcastJoinThreshold if you have a SELECT on top of a DataFrame created from an RDD.

This PR changes the default value to Long.MaxValue.
      
      ## How was this patch tested?
      
      Added regression tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #13183 from davies/fix_default_size.
      5ccecc07
    • Shixiong Zhu's avatar
      [SPARK-15317][CORE] Don't store accumulators for every task in listeners · 4e3cb7a5
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      In general, the Web UI doesn't need to store the Accumulator/AccumulableInfo for every task. It only needs the Accumulator values.
      
In this PR, it creates new UIData classes to store only the necessary fields and makes `JobProgressListener` store only these new classes, so that `JobProgressListener` no longer stores Accumulator/AccumulableInfo and its retained size becomes pretty small. I also eliminate `AccumulableInfo` from `SQLListener` so that we don't keep references to those unused `AccumulableInfo`s.
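A hypothetical shape of the slimmed-down per-task data (not the PR's actual class names):

```scala
// Keep only the fields the Web UI displays instead of retaining full
// Accumulator/AccumulableInfo objects for every task.
case class UIAccumulableSummary(id: Long, name: Option[String], value: String)
```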
      
      ## How was this patch tested?
      
      I ran two tests reported in JIRA locally:
      
      The first one is:
      ```
      val data = spark.range(0, 10000, 1, 10000)
      data.cache().count()
      ```
      The retained size of JobProgressListener decreases from 60.7M to 6.9M.
      
      The second one is:
      ```
      import org.apache.spark.ml.CC
      import org.apache.spark.sql.SQLContext
      val sqlContext = SQLContext.getOrCreate(sc)
      CC.runTest(sqlContext)
      ```
      
      This test won't cause OOM after applying this patch.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #13153 from zsxwing/memory.
      4e3cb7a5