  1. Feb 07, 2017
    • [SPARK-19260] Spaces or "%20" in path parameter are not correctly handled with… · 8fd178d2
      zuotingbing authored
      JIRA Issue: https://issues.apache.org/jira/browse/SPARK-19260
      
      ## What changes were proposed in this pull request?
      
      1. “spark.history.fs.logDirectory” now supports paths containing spaces and “%20” characters.
      2. As usual, if the runtime classpath includes hdfs-site.xml and core-site.xml, a supplied path without a scheme, e.g. "/test", should be treated as an HDFS path rather than a local path, since the path parameter is a Hadoop directory (see the sketch below).
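
      A minimal sketch of that resolution (an illustrative assumption only, not the actual patch):

      ```scala
      import java.net.URLDecoder
      import org.apache.hadoop.conf.Configuration
      import org.apache.hadoop.fs.Path

      // Hypothetical log-directory value; "%20" is decoded back to a space first.
      val logDir = "/a b/a bc%20c"
      val decoded = URLDecoder.decode(logDir, "UTF-8")

      // Resolving the filesystem from the Hadoop Path means a scheme-less value
      // falls back to fs.defaultFS (HDFS when hdfs-site.xml/core-site.xml are on
      // the classpath) instead of always being treated as a local path.
      val fs = new Path(decoded).getFileSystem(new Configuration())
      ```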
      
      ## How was this patch tested?
      Updated the unit tests and ran some manual tests:
      
      local:
      .sbin/start-history-server.sh "file:/a b"
      .sbin/start-history-server.sh "/abc%20c" (without hdfs-site.xml,core-site.xml)
      .sbin/start-history-server.sh "/a b" (without hdfs-site.xml,core-site.xml)
      .sbin/start-history-server.sh "/a b/a bc%20c" (without hdfs-site.xml,core-site.xml)
      
      hdfs:
      .sbin/start-history-server.sh "hdfs:/namenode:9000/a b"
      .sbin/start-history-server.sh "/a b" (with hdfs-site.xml,core-site.xml)
      .sbin/start-history-server.sh "/a b/a bc%20c" (with hdfs-site.xml,core-site.xml)
      
      Author: zuotingbing <zuo.tingbing9@zte.com.cn>
      
      Closes #16614 from zuotingbing/SPARK-19260.
      8fd178d2
    • [SPARK-19444][ML][DOCUMENTATION] Fix imports not being present in documentation · aee2bd2c
      Aseem Bansal authored
      ## What changes were proposed in this pull request?
      
      Fixes SPARK-19444: import statements were missing from the documentation examples.
      
      ## How was this patch tested?
      
      Manual
      
      ## Disclaimer
      
      Contribution is original work and I license the work to the project under the project’s open source license
      
      Author: Aseem Bansal <anshbansal@users.noreply.github.com>
      
      Closes #16789 from anshbansal/patch-1.
      aee2bd2c
    • [SPARK-18601][SQL] Simplify Create/Get complex expression pairs in optimizer · a97edc2c
      Eyal Farago authored
      ## What changes were proposed in this pull request?
      It often happens that a complex object (struct/map/array) is created only to have elements extracted from it in a subsequent expression. We can add an optimizer rule for this.
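
      For illustration, a minimal query exhibiting this pattern (assumes a running `SparkSession` named `spark`; not code from this PR):

      ```scala
      import org.apache.spark.sql.functions._

      // A struct is built only to have one field read back out in the next
      // projection; the new rule lets the optimizer rewrite the field access
      // to read "id" directly, so the struct never needs to be materialized.
      val df = spark.range(5)
        .select(struct(col("id").as("a"), (col("id") * 2).as("b")).as("s"))
        .select(col("s").getField("a").as("a"))

      df.explain(true)   // compare the analyzed and optimized plans
      ```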
      
      ## How was this patch tested?
      unit-tests
      
      Author: Eyal Farago <eyal@nrgene.com>
      Author: eyal farago <eyal.farago@gmail.com>
      
      Closes #16043 from eyalfa/SPARK-18601.
      a97edc2c
    • [SPARK-18967][SCHEDULER] compute locality levels even if delay = 0 · d9043092
      Imran Rashid authored
      ## What changes were proposed in this pull request?
      
      Before this change, with delay scheduling off, spark would effectively
      ignore locality preferences for bulk scheduling.  With this change,
      locality preferences are used when multiple offers are made
      simultaneously.
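
      For context, delay scheduling is disabled by setting the locality wait to zero; a minimal sketch (the config key is standard, the rest is illustrative):

      ```scala
      import org.apache.spark.SparkConf

      // With a zero wait, tasks are not delayed in hope of better locality; after
      // this change, locality preferences are still used to rank offers that
      // arrive in the same scheduling round.
      val conf = new SparkConf()
        .setAppName("locality-without-delay")
        .set("spark.locality.wait", "0")
      ```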
      
      ## How was this patch tested?
      
      Test case added which fails without this change.  All unit tests run via jenkins.
      
      Author: Imran Rashid <irashid@cloudera.com>
      
      Closes #16376 from squito/locality_without_delay.
      d9043092
  2. Feb 06, 2017
    • [SPARK-19407][SS] defaultFS is used FileSystem.get instead of getting it from uri scheme · 7a0a630e
      uncleGen authored
      ## What changes were proposed in this pull request?
      
      ```
      Caused by: java.lang.IllegalArgumentException: Wrong FS: s3a://**************/checkpoint/7b2231a3-d845-4740-bfa3-681850e5987f/metadata, expected: file:///
      	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:649)
      	at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:82)
      	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
      	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
      	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
      	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
      	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
      	at org.apache.spark.sql.execution.streaming.StreamMetadata$.read(StreamMetadata.scala:51)
      	at org.apache.spark.sql.execution.streaming.StreamExecution.<init>(StreamExecution.scala:100)
      	at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:232)
      	at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:269)
      	at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:262)
      ```
      
      This is easy to replicate on a Spark standalone cluster by providing a checkpoint location URI with any scheme other than "file://" and not overriding the default filesystem in the config.

      Workaround: pass `--conf spark.hadoop.fs.defaultFS=s3a://somebucket`, or set it in SparkConf or spark-defaults.conf.
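
      A sketch of the distinction the title describes (an assumption for illustration, not the patched code):

      ```scala
      import org.apache.hadoop.conf.Configuration
      import org.apache.hadoop.fs.{FileSystem, Path}

      val hadoopConf = new Configuration()
      val checkpoint = new Path("s3a://somebucket/checkpoint")

      // Problematic pattern: FileSystem.get(conf) always returns the default
      // filesystem (file:/// here), which then rejects the s3a path with "Wrong FS".
      val defaultFs = FileSystem.get(hadoopConf)

      // Intended pattern: resolve the filesystem from the path's own URI scheme.
      val checkpointFs = checkpoint.getFileSystem(hadoopConf)
      ```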
      
      ## How was this patch tested?
      
      Existing unit tests.
      
      Author: uncleGen <hustyugm@gmail.com>
      
      Closes #16815 from uncleGen/SPARK-19407.
      7a0a630e
    • [SPARK-19467][ML][PYTHON] Remove cyclic imports from pyspark.ml.pipeline · fab0d62a
      zero323 authored
      ## What changes were proposed in this pull request?
      
      Remove cyclic imports between `pyspark.ml.pipeline` and `pyspark.ml`.
      
      ## How was this patch tested?
      
      Existing unit tests.
      
      Author: zero323 <zero323@users.noreply.github.com>
      
      Closes #16814 from zero323/SPARK-19467.
      fab0d62a
    • [SPARK-19441][SQL] Remove IN type coercion from PromoteStrings · d6dc603e
      gatorsmile authored
      ### What changes were proposed in this pull request?
      The removed code for `IN` is not reachable, because the previous rule `InConversion` already resolves the type coercion issues.
      
      ### How was this patch tested?
      N/A
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #16783 from gatorsmile/typeCoercionIn.
      d6dc603e
    • [SPARK-19472][SQL] Parser should not mistake CASE WHEN(...) for a function call · cb2677b8
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      The SQL parser can mistake a `WHEN (...)` used in `CASE` for a function call. This happens in cases like the following:
      ```sql
      select case when (1) + case when 1 > 0 then 1 else 0 end = 2 then 1 else 0 end
      from tb
      ```
      This PR fixes this by re-organizing the case related parsing rules.
      
      ## How was this patch tested?
      Added a regression test to the `ExpressionParserSuite`.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #16821 from hvanhovell/SPARK-19472.
      cb2677b8
    • [SPARK-19398] Change one misleading log in TaskSetManager. · d33021b3
      Jin Xing authored
      ## What changes were proposed in this pull request?
      
      Log below is misleading:
      
      ```
      if (successful(index)) {
        logInfo(
          s"Task ${info.id} in stage ${taskSet.id} (TID $tid) failed, " +
          "but another instance of the task has already succeeded, " +
          "so not re-queuing the task to be re-executed.")
      }
      ```
      
      When a fetch fails, the task is marked as successful in `TaskSetManager::handleFailedTask`, and the log above is then printed. Here `successful` just means the task will not be scheduled any longer, not that it actually succeeded.
      
      ## How was this patch tested?
      Existing unit tests can cover this.
      
      Author: jinxing <jinxing@meituan.com>
      
      Closes #16738 from jinxing64/SPARK-19398.
      d33021b3
    • [SPARK-19080][SQL] simplify data source analysis · aff53021
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      The current way of resolving `InsertIntoTable` and `CreateTable` is convoluted: sometimes we replace them with concrete implementation commands during analysis, sometimes during the planning phase.
      
      And the error checking logic is also a mess: we may put it in extended analyzer rules, or extended checking rules, or `CheckAnalysis`.
      
      This PR simplifies the data source analysis:
      
      1.  `InsertIntoTable` and `CreateTable` are always unresolved and need to be replaced by concrete implementation commands during analysis.
      2. The error checking logic is mainly in 2 rules: `PreprocessTableCreation` and `PreprocessTableInsertion`.
      
      ## How was this patch tested?
      
      existing test.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #16269 from cloud-fan/ddl.
      aff53021
    • [SPARK-17213][SQL][FOLLOWUP] Re-enable Parquet filter tests for binary and string · 0f16ff5b
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to enable the tests for Parquet filter pushdown with binary and string.
      
      This was disabled in https://github.com/apache/spark/pull/16106 due to a Parquet issue, but it is now revived in https://github.com/apache/spark/pull/16791 after upgrading Parquet to 1.8.2.
      
      ## How was this patch tested?
      
      Manually tested `ParquetFilterSuite` via IDE.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #16817 from HyukjinKwon/SPARK-17213.
      0f16ff5b
    • [SPARK-17663][CORE] SchedulableBuilder should handle invalid data access via... · 7beb227c
      erenavsarogullari authored
      [SPARK-17663][CORE] SchedulableBuilder should handle invalid data access via scheduler.allocation.file
      
      ## What changes were proposed in this pull request?
      
      If `spark.scheduler.allocation.file` has invalid `minShare` and/or `weight` values, this causes:
      - a `NumberFormatException` from the `toInt` call,
      - `SparkContext` cannot be initialized,
      - no meaningful error message is shown to the user.
      
      In a nutshell, this functionality can be made more robust by choosing one of the following approaches:

      **1-** Currently, if `schedulingMode` has an invalid value, a warning is logged and the default value `FIFO` is used. The same pattern can be applied to `minShare` (default: 0) and `weight` (default: 1) as well.
      **2-** A meaningful error message can be shown to the user for all invalid cases.

      This PR offers:
      - `schedulingMode` currently handles only empty values; **whitespace**, **non-uppercase** (fair, FaIr, etc.) and `SchedulingMode.NONE` cases also need to fall back to the default (`FIFO`).
      - `minShare` and `weight` currently handle only empty values; **non-integer** values also need to fall back to the defaults.
      - Some refactoring of `PoolSuite`.
      
      **Code to Reproduce :**
      
      ```
      val conf = new SparkConf().setAppName("spark-fairscheduler").setMaster("local")
      conf.set("spark.scheduler.mode", "FAIR")
      conf.set("spark.scheduler.allocation.file", "src/main/resources/fairscheduler-invalid-data.xml")
      val sc = new SparkContext(conf)
      ```
      
      **fairscheduler-invalid-data.xml :**
      
      ```
      <allocations>
          <pool name="production">
              <schedulingMode>FIFO</schedulingMode>
              <weight>invalid_weight</weight>
              <minShare>2</minShare>
          </pool>
      </allocations>
      ```
      
      **Stacktrace :**
      
      ```
      Exception in thread "main" java.lang.NumberFormatException: For input string: "invalid_weight"
          at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
          at java.lang.Integer.parseInt(Integer.java:580)
          at java.lang.Integer.parseInt(Integer.java:615)
          at scala.collection.immutable.StringLike$class.toInt(StringLike.scala:272)
          at scala.collection.immutable.StringOps.toInt(StringOps.scala:29)
          at org.apache.spark.scheduler.FairSchedulableBuilder$$anonfun$org$apache$spark$scheduler$FairSchedulableBuilder$$buildFairSchedulerPool$1.apply(SchedulableBuilder.scala:127)
          at org.apache.spark.scheduler.FairSchedulableBuilder$$anonfun$org$apache$spark$scheduler$FairSchedulableBuilder$$buildFairSchedulerPool$1.apply(SchedulableBuilder.scala:102)
      ```
      ## How was this patch tested?
      
      Added Unit Test Case.
      
      Author: erenavsarogullari <erenavsarogullari@gmail.com>
      
      Closes #15237 from erenavsarogullari/SPARK-17663.
      7beb227c
    • [SPARK-19409][SPARK-17213] Cleanup Parquet workarounds/hacks due to bugs of old Parquet versions · 7730426c
      Cheng Lian authored
      ## What changes were proposed in this pull request?
      
      We've already upgraded parquet-mr to 1.8.2. This PR does some further cleanup by removing a workaround of PARQUET-686 and a hack due to PARQUET-363 and PARQUET-278. All three Parquet issues are fixed in parquet-mr 1.8.2.
      
      ## How was this patch tested?
      
      Existing unit tests.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #16791 from liancheng/parquet-1.8.2-cleanup.
      7730426c
  3. Feb 05, 2017
    • [SPARK-19279][SQL] Infer Schema for Hive Serde Tables and Block Creating a... · 65b10ffb
      gatorsmile authored
      [SPARK-19279][SQL] Infer Schema for Hive Serde Tables and Block Creating a Hive Table With an Empty Schema
      
      ### What changes were proposed in this pull request?
      So far, we allow users to create a table with an empty schema: `CREATE TABLE tab1`. This could break many code paths, so we should follow Hive and block it.
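
      For illustration only (assumes a Hive-enabled `SparkSession` named `spark`):

      ```scala
      // After this change, creating a table without any columns is expected to be
      // rejected, matching Hive; a table with an explicit schema still works.
      spark.sql("CREATE TABLE tab1")            // blocked: empty schema
      spark.sql("CREATE TABLE tab2 (id INT)")   // fine: schema is specified
      ```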
      
      For Hive serde tables, some serde libraries require a user-specified schema and record it in the metastore. To get the list, we need to check `hive.serdes.using.metastore.for.schema`, which contains the serdes that require a user-specified schema. The default values are:
      
      - org.apache.hadoop.hive.ql.io.orc.OrcSerde
      - org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
      - org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe
      - org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe
      - org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe
      - org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe
      - org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
      - org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
      
      ### How was this patch tested?
      Added test cases for both Hive and data source tables
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #16636 from gatorsmile/fixEmptyTableSchema.
      65b10ffb
    • [SPARK-19421][ML][PYSPARK] Remove numClasses and numFeatures methods in LinearSVC · 317fa750
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      Methods `numClasses` and `numFeatures` in LinearSVCModel are already available by inheriting `JavaClassificationModel`, so we should not add them explicitly.
      
      ## How was this patch tested?
      existing tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #16727 from zhengruifeng/nits_in_linearSVC.
      317fa750
    • [SPARK-19247][ML] Save large word2vec models · b3e89802
      Asher Krim authored
      ## What changes were proposed in this pull request?
      
      * save word2vec models as distributed files rather than as one large datum. Backwards compatibility with the previous save format is maintained by checking for the "wordIndex" column
      * migrate the fix for loading large models (SPARK-11994) to ml word2vec
      
      ## How was this patch tested?
      
      Tested loading the new and old formats locally
      
      srowen yanboliang MLnick
      
      Author: Asher Krim <akrim@hubspot.com>
      
      Closes #16607 from Krimit/saveLargeModels.
      b3e89802
    • [SPARK-19452][SPARKR] Fix bug in the name assignment method · b94f4b6f
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      The `names` method fails to check the validity of the assignment values. This can be fixed by calling `colnames` within `names`.
      
      ## How was this patch tested?
      new tests.
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #16794 from actuaryzhang/sparkRNames.
      b94f4b6f
  4. Feb 04, 2017
    • [SPARK-19425][SQL] Make ExtractEquiJoinKeys support UDT columns · 0674e7eb
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      DataFrame.except doesn't work for UDT columns. This is because `ExtractEquiJoinKeys` will run `Literal.default` against the UDT. However, we don't handle UDTs in `Literal.default`, so an exception is thrown like:
      
          java.lang.RuntimeException: no default for type
          org.apache.spark.ml.linalg.VectorUDT3bfc3ba7
            at org.apache.spark.sql.catalyst.expressions.Literal$.default(literals.scala:179)
            at org.apache.spark.sql.catalyst.planning.ExtractEquiJoinKeys$$anonfun$4.apply(patterns.scala:117)
            at org.apache.spark.sql.catalyst.planning.ExtractEquiJoinKeys$$anonfun$4.apply(patterns.scala:110)
      
      A simpler fix is to just let `Literal.default` handle UDTs by their SQL type, so we can use a more efficient join type on UDTs.

      Besides `except`, this also fixes other similar scenarios; in summary this fixes (see the sketch after this list):
      
      * `except` on two Datasets with UDT
      * `intersect` on two Datasets with UDT
      * `Join` with the join conditions using `<=>` on UDT columns
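
      A hedged sketch of those scenarios (hypothetical data; assumes `spark.implicits._` is imported):

      ```scala
      import org.apache.spark.ml.linalg.Vectors

      val df1 = Seq((1, Vectors.dense(1.0, 2.0)), (2, Vectors.dense(3.0, 4.0))).toDF("id", "vec")
      val df2 = Seq((1, Vectors.dense(1.0, 2.0))).toDF("id", "vec")

      // Each of these exercises ExtractEquiJoinKeys on a UDT (VectorUDT) column
      // and previously failed with "no default for type ... VectorUDT".
      df1.except(df2).show()
      df1.intersect(df2).show()
      df1.join(df2, df1("vec") <=> df2("vec")).show()
      ```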
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #16765 from viirya/df-except-for-udt.
      0674e7eb
    • [SPARK-19446][SQL] Remove unused findTightestCommonType in TypeCoercion · 2f3c20bb
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to
      
      - remove unused `findTightestCommonType` in `TypeCoercion` as suggested in https://github.com/apache/spark/pull/16777#discussion_r99283834
      - rename `findTightestCommonTypeOfTwo` to `findTightestCommonType`.
      - fix comments accordingly.

      The usage was removed while refactoring/fixing in several JIRAs such as SPARK-16714, SPARK-16735 and SPARK-16646.
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #16786 from HyukjinKwon/SPARK-19446.
      2f3c20bb
  5. Feb 03, 2017
    • [SPARK-10063] Follow-up: remove dead code related to an old output committer. · 22d4aae8
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      DirectParquetOutputCommitter was removed from Spark as it was deemed unsafe to use. However, we still had some code that generated a warning about it. This patch removes that code as well.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #16796 from rxin/remove-direct.
      22d4aae8
    • [SPARK-19386][SPARKR][FOLLOWUP] fix error in vignettes · 050c20cc
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      
      The current version has an error in the vignettes:
      ```
      model <- spark.bisectingKmeans(df, Sepal_Length ~ Sepal_Width, k = 4)
      summary(kmeansModel)
      ```
      
      `kmeansModel` does not exist...
      
      felixcheung wangmiao1981
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #16799 from actuaryzhang/sparkRVignettes.
      050c20cc
    • [SPARK-19386][SPARKR][DOC] Bisecting k-means in SparkR documentation · 48aafeda
      krishnakalyan3 authored
      ## What changes were proposed in this pull request?
      Update programming guide, example and vignette with Bisecting k-means.
      
      Author: krishnakalyan3 <krishnakalyan3@gmail.com>
      
      Closes #16767 from krishnakalyan3/bisecting-kmeans.
      48aafeda
    • [SPARK-19244][CORE] Sort MemoryConsumers according to their memory usage when spilling · 2f523fa0
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      In `TaskMemoryManager`, when we acquire memory by calling `acquireExecutionMemory` and cannot get the required amount, we try to spill other memory consumers.

      Currently, we simply iterate over the memory consumers in a hash set, so they are normally visited in the same order each time.

      The first issue is that we might spill more consumers than necessary. For example, if consumer 1 uses 10MB and consumer 2 uses 50MB, and consumer 3 then asks for 100MB but only 60MB is available, spilling is needed. We might spill both consumer 1 and consumer 2, but spilling consumer 2 alone would already free enough memory to satisfy the 100MB request.

      The second issue is repeated spilling of the same consumer. Suppose we spill consumer 1 the first time spilling happens. A while later, consumer 1 holds only 5MB when consumer 4 requests memory and spilling is needed again. Because we iterate over the consumers in the same order, we will spill consumer 1 again, producing many small spill files for it.

      This patch changes how the memory consumers are iterated: they are sorted by their memory usage, so the consumer using the most memory is spilled first. Once it has been spilled, even if it acquires a little memory again, it will not be chosen to spill the next time spilling happens as long as other consumers are using more memory.
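
      A minimal sketch of the idea; the trait below is a hypothetical stand-in for Spark's `MemoryConsumer`, not the actual API:

      ```scala
      // Hypothetical consumer interface, for illustration only.
      trait ConsumerLike {
        def used: Long      // bytes currently held
        def spill(): Long   // spills and returns the number of bytes released
      }

      // Spill the largest consumers first and stop once enough memory is freed,
      // so small consumers are not repeatedly forced to write tiny spill files.
      def spillToFree(required: Long, consumers: Seq[ConsumerLike]): Long = {
        var freed = 0L
        val it = consumers.sortBy(c => -c.used).iterator
        while (freed < required && it.hasNext) {
          freed += it.next().spill()
        }
        freed
      }
      ```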
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #16603 from viirya/sort-memoryconsumer-when-spill.
      2f523fa0
    • [SPARK-18909][SQL] The error messages in `ExpressionEncoder.toRow/fromRow` are too verbose · 52d4f619
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      In `ExpressionEncoder.toRow` and `fromRow`, we catch the exception and output the `treeString` of the serializer/deserializer expressions in the error message. However, encoders can be very complex, and the serializer/deserializer expressions can be very large trees that blow up the log files (e.g. over 500MB of logs for this single error message). As a first attempt, this PR tries to use `simpleString` instead.
      
      **BEFORE**
      
      ```scala
      scala> :paste
      // Entering paste mode (ctrl-D to finish)
      
      case class TestCaseClass(value: Int)
      import spark.implicits._
      Seq(TestCaseClass(1)).toDS().collect()
      
      // Exiting paste mode, now interpreting.
      
      java.lang.RuntimeException: Error while decoding: java.lang.NullPointerException
      newInstance(class TestCaseClass)
      +- assertnotnull(input[0, int, false], - field (class: "scala.Int", name: "value"), - root class: "TestCaseClass")
         +- input[0, int, false]
      
        at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.fromRow(ExpressionEncoder.scala:303)
      ...
      ```
      
      **AFTER**
      
      ```scala
      ...
      // Exiting paste mode, now interpreting.
      
      java.lang.RuntimeException: Error while decoding: java.lang.NullPointerException
      newInstance(class TestCaseClass)
        at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.fromRow(ExpressionEncoder.scala:303)
      ...
      ```
      
      ## How was this patch tested?
      
      Manual.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #16701 from dongjoon-hyun/SPARK-18909-EXPR-ERROR.
      52d4f619
    • [BUILD] Close stale PRs · 20b4ca14
      Sean Owen authored
      Closes #15736
      Closes #16309
      Closes #16485
      Closes #16502
      Closes #16196
      Closes #16498
      Closes #12380
      Closes #16764
      
      Closes #14394
      Closes #14204
      Closes #14027
      Closes #13690
      Closes #16279
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #16778 from srowen/CloseStalePRs.
      20b4ca14
    • [SPARK-19411][SQL] Remove the metadata used to mark optional columns in merged... · bf493686
      Liang-Chi Hsieh authored
      [SPARK-19411][SQL] Remove the metadata used to mark optional columns in merged Parquet schema for filter predicate pushdown
      
      ## What changes were proposed in this pull request?
      
      A metadata flag was introduced earlier to mark the optional columns in a merged Parquet schema for filter predicate pushdown. Now that we have upgraded to Parquet 1.8.2, which includes the fix for pushdown on optional columns, this metadata is no longer needed.
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #16756 from viirya/remove-optional-metadata.
      bf493686
    • [SPARK-19437] Rectify spark executor id in HeartbeatReceiverSuite. · c86a57f4
      jinxing authored
      ## What changes were proposed in this pull request?
      
      In the current code in `HeartbeatReceiverSuite`, the executorId is set as below:
      ```
        private val executorId1 = "executor-1"
        private val executorId2 = "executor-2"
      ```
      
      The executorId is sent to the driver at registration, as below:
      
      ```
      test("expire dead hosts should kill executors with replacement (SPARK-8119)")  {
        ...
        fakeSchedulerBackend.driverEndpoint.askSync[Boolean](
            RegisterExecutor(executorId1, dummyExecutorEndpointRef1, "1.2.3.4", 0, Map.empty))
        ...
      }
      ```
      
      When `RegisterExecutor` is received in `CoarseGrainedSchedulerBackend`, the executorId is compared with `currentExecutorIdCounter` as below:
      ```
      case RegisterExecutor(executorId, executorRef, hostname, cores, logUrls)  =>
        if (executorDataMap.contains(executorId)) {
          executorRef.send(RegisterExecutorFailed("Duplicate executor ID: " + executorId))
          context.reply(true)
        } else {
        ...
        executorDataMap.put(executorId, data)
        if (currentExecutorIdCounter < executorId.toInt) {
          currentExecutorIdCounter = executorId.toInt
        }
        ...
      ```
      
      `executorId.toInt` will throw a `NumberFormatException`.

      This unit test currently passes only because of `askWithRetry`: when the exception is caught, the RPC call is retried, so it takes the `if` branch and returns true.
      
      **To fix**
      Rectify executorId and replace `askWithRetry` with `askSync`, refer to https://github.com/apache/spark/pull/16690
      ## How was this patch tested?
      This fix is for a unit test, so there is no need to add another one.
      
      Author: jinxing <jinxing@meituan.com>
      
      Closes #16779 from jinxing64/SPARK-19437.
      c86a57f4
  6. Feb 02, 2017
  7. Feb 01, 2017
    • [SPARK-19432][CORE] Fix an unexpected failure when connecting timeout · 8303e20c
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      When a connection times out, `ask` may fail with a confusing message:
      
      ```
      17/02/01 23:15:19 INFO Worker: Connecting to master ...
      java.lang.IllegalArgumentException: requirement failed: TransportClient has not yet been set.
              at scala.Predef$.require(Predef.scala:224)
              at org.apache.spark.rpc.netty.RpcOutboxMessage.onTimeout(Outbox.scala:70)
              at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$ask$1.applyOrElse(NettyRpcEnv.scala:232)
              at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$ask$1.applyOrElse(NettyRpcEnv.scala:231)
              at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:138)
              at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:136)
              at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
      ```
      
      It's better to provide a meaningful message.
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #16773 from zsxwing/connect-timeout.
      8303e20c
    • [SPARK-14352][SQL] approxQuantile should support multi columns · b0985764
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      
      1. Add multi-column support based on the current private API (see the sketch below).
      2. Add multi-column support to PySpark.
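
      A usage sketch of the multi-column form (hypothetical data; assumes `spark.implicits._` is imported):

      ```scala
      val df = Seq((1.0, 10.0), (2.0, 20.0), (3.0, 30.0), (4.0, 40.0)).toDF("a", "b")

      // One call covers both columns; the result is one array of quantiles per column.
      val quantiles: Array[Array[Double]] = df.stat.approxQuantile(
        Array("a", "b"),          // columns
        Array(0.25, 0.5, 0.75),   // probabilities
        0.0)                      // relative error (0.0 computes exact quantiles)
      ```
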
      ## How was this patch tested?
      
      unit tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      Author: Ruifeng Zheng <ruifengz@foxmail.com>
      
      Closes #12135 from zhengruifeng/quantile4multicols.
      b0985764
    • [SPARK-19347] ReceiverSupervisorImpl can add block to ReceiverTracker multiple... · c5fcb7f6
      jinxing authored
      [SPARK-19347] ReceiverSupervisorImpl can add block to ReceiverTracker multiple times because of askWithRetry.
      
      ## What changes were proposed in this pull request?
      
      `ReceiverSupervisorImpl` on the executor side reports a block's metadata back to `ReceiverTracker` on the driver side. In the current code, `askWithRetry` is used. However, `ReceiverTracker` is not idempotent for `AddBlock`, which may result in messages being processed multiple times.

      **To reproduce**:

      1. Check whether it is the first time `ReceiverTracker` receives `AddBlock`; if so, sleep long enough (say 200 seconds) so that the first RPC call times out in `askWithRetry` and `AddBlock` is resent.
      2. Rebuild Spark and run following job:
      ```
        def streamProcessing(): Unit = {
          val conf = new SparkConf()
            .setAppName("StreamingTest")
            .setMaster(masterUrl)
          val ssc = new StreamingContext(conf, Seconds(200))
          val stream = ssc.socketTextStream("localhost", 1234)
          stream.print()
          ssc.start()
          ssc.awaitTermination()
        }
      ```
      **To fix**:
      
      It makes sense to provide a blocking version of `ask` in `RpcEndpointRef`, as mentioned in SPARK-18113 (https://github.com/apache/spark/pull/16503#event-927953218), because the Netty RPC layer will not drop messages. `askWithRetry` is a leftover from the Akka days; it imposes restrictions on the caller (e.g. idempotency) and other things that people generally don't pay much attention to when using it.
      
      ## How was this patch tested?
      Test manually. The scenario described above doesn't happen with this patch.
      
      Author: jinxing <jinxing@meituan.com>
      
      Closes #16690 from jinxing64/SPARK-19347.
      c5fcb7f6
    • [SPARK-19377][WEBUI][CORE] Killed tasks should have the status as KILLED · df4a27cc
      Devaraj K authored
      ## What changes were proposed in this pull request?
      
      Copying of the killed status was missing when building the newTaskInfo object, which drops unnecessary details to reduce memory usage. This patch copies the killed status into the newTaskInfo object, which corrects the status shown in the Web UI to KILLED.
      
      ## How was this patch tested?
      
      Current behaviour of the task display on the stage UI page:
      
      | Index | ID | Attempt | Status | Locality Level | Executor ID / Host | Launch Time | Duration | GC Time | Input Size / Records | Write Time | Shuffle Write Size / Records | Errors |
      | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
      |143	|10	|0	|SUCCESS	|NODE_LOCAL	|6 / x.xx.x.x stdout stderr|2017/01/25 07:49:27	|0 ms |		|0.0 B / 0		| |0.0 B / 0	|TaskKilled (killed intentionally)|
      |156	|11	|0	|SUCCESS	|NODE_LOCAL	|5 / x.xx.x.x stdout stderr|2017/01/25 07:49:27	|0 ms |		|0.0 B / 0		| |0.0 B / 0	|TaskKilled (killed intentionally)|
      
      Web UI display after applying the patch:
      
      | Index | ID | Attempt | Status | Locality Level | Executor ID / Host | Launch Time | Duration | GC Time | Input Size / Records | Write Time | Shuffle Write Size / Records | Errors |
      | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
      |143	|10	|0	|KILLED	|NODE_LOCAL	|6 / x.xx.x.x stdout stderr|2017/01/25 07:49:27	|0 ms |		|0.0 B / 0		|  | 0.0 B / 0	| TaskKilled (killed intentionally)|
      |156	|11	|0	|KILLED	|NODE_LOCAL	|5 / x.xx.x.x stdout stderr|2017/01/25 07:49:27	|0 ms |		|0.0 B / 0		|  |0.0 B / 0	| TaskKilled (killed intentionally)|
      
      Author: Devaraj K <devaraj@apache.org>
      
      Closes #16725 from devaraj-kavali/SPARK-19377.
      df4a27cc
    • [SPARK-19296][SQL] Deduplicate url and table in JdbcUtils · 5ed397ba
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR deduplicates arguments, `url` and `table` in `JdbcUtils` with `JDBCOptions`.
      
      It avoids using duplicated arguments, for example, as below:
      
      from
      
      ```scala
      val jdbcOptions = new JDBCOptions(url, table, map)
      JdbcUtils.saveTable(ds, url, table, jdbcOptions)
      ```
      
      to
      
      ```scala
      val jdbcOptions = new JDBCOptions(url, table, map)
      JdbcUtils.saveTable(ds, jdbcOptions)
      ```
      
      ## How was this patch tested?
      
      Running unit test in `JdbcSuite`/`JDBCWriteSuite`
      
      Building with Scala 2.10 as below:
      
      ```
      ./dev/change-scala-version.sh 2.10
      ./build/mvn -Pyarn -Phadoop-2.4 -Dscala-2.10 -DskipTests clean package
      ```
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #16753 from HyukjinKwon/SPARK-19296.
      5ed397ba
    • [SPARK-19410][DOC] Fix brokens links in ml-pipeline and ml-tuning · 04ee8cf6
      Zheng RuiFeng authored
      ## What changes were proposed in this pull request?
      Fix broken links in ml-pipeline and ml-tuning:
      `<div data-lang="scala">`  ->   `<div data-lang="scala" markdown="1">`
      
      ## How was this patch tested?
      manual tests
      
      Author: Zheng RuiFeng <ruifengz@foxmail.com>
      
      Closes #16754 from zhengruifeng/doc_api_fix.
      04ee8cf6
    • [SPARK-19402][DOCS] Support LaTex inline formula correctly and fix warnings in... · f1a1f260
      hyukjinkwon authored
      [SPARK-19402][DOCS] Support LaTex inline formula correctly and fix warnings in Scala/Java APIs generation
      
      ## What changes were proposed in this pull request?
      
      This PR proposes three things as below:
      
      - Support LaTex inline-formula, `\( ... \)` in Scala API documentation
        It seems currently,
      
        ```
        \( ... \)
        ```
      
        are rendered as they are, for example,
      
        <img width="345" alt="2017-01-30 10 01 13" src="https://cloud.githubusercontent.com/assets/6477701/22423960/ab37d54a-e737-11e6-9196-4f6229c0189c.png">
      
        It seems extra backslashes were mistakenly added.
      
      - Fix warnings in Scaladoc/Javadoc generation
        This PR fixes two types of warnings, as below:
      
        ```
        [warn] .../spark/sql/catalyst/src/main/scala/org/apache/spark/sql/Row.scala:335: Could not find any member to link for "UnsupportedOperationException".
        [warn]   /**
        [warn]   ^
        ```
      
        ```
        [warn] .../spark/sql/core/src/main/scala/org/apache/spark/sql/internal/VariableSubstitution.scala:24: Variable var undefined in comment for class VariableSubstitution in class VariableSubstitution
        [warn]  * `${var}`, `${system:var}` and `${env:var}`.
        [warn]      ^
        ```
      
      - Fix Javadoc8 break
        ```
        [error] .../spark/mllib/target/java/org/apache/spark/ml/PredictionModel.java:7: error: reference not found
        [error]  *                       E.g., {link VectorUDT} for vector features.
        [error]                                       ^
        [error] .../spark/mllib/target/java/org/apache/spark/ml/PredictorParams.java:12: error: reference not found
        [error]    *                          E.g., {link VectorUDT} for vector features.
        [error]                                            ^
        [error] .../spark/mllib/target/java/org/apache/spark/ml/Predictor.java:10: error: reference not found
        [error]  *                       E.g., {link VectorUDT} for vector features.
        [error]                                       ^
        [error] .../spark/sql/hive/target/java/org/apache/spark/sql/hive/HiveAnalysis.java:5: error: reference not found
        [error]  * Note that, this rule must be run after {link PreprocessTableInsertion}.
        [error]                                                  ^
        ```
      
      ## How was this patch tested?
      
      Manually via `sbt unidoc` and `jekyll build`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #16741 from HyukjinKwon/warn-and-break.
      f1a1f260
  8. Jan 31, 2017
    • [SPARK-19319][SPARKR] SparkR Kmeans summary returns error when the cluster size doesn't equal to k · 9ac05225
      wm624@hotmail.com authored
      ## What changes were proposed in this pull request?

      When KMeans uses initMode = "random" and certain random seeds, it is possible that the actual number of clusters does not equal the configured `k`.

      In this case, summary(model) returns an error because the number of columns of the coefficient matrix does not equal k.
      
      Example:

      ```
      > col1 <- c(1, 2, 3, 4, 0, 1, 2, 3, 4, 0)
      > col2 <- c(1, 2, 3, 4, 0, 1, 2, 3, 4, 0)
      > col3 <- c(1, 2, 3, 4, 0, 1, 2, 3, 4, 0)
      > cols <- as.data.frame(cbind(col1, col2, col3))
      > df <- createDataFrame(cols)
      > model2 <- spark.kmeans(data = df, ~ ., k = 5, maxIter = 10, initMode = "random", seed = 22222, tol = 1E-5)
      > summary(model2)
      Error in `colnames<-`(`*tmp*`, value = c("col1", "col2", "col3")) :
        length of 'dimnames' [2] not equal to array extent
      In addition: Warning message:
      In matrix(coefficients, ncol = k) :
        data length [9] is not a sub-multiple or multiple of the number of rows [2]
      ```
      
      Fix: Get the actual cluster size in the summary and use it to build the coefficient matrix.
      ## How was this patch tested?
      
      Add unit tests.
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #16666 from wangmiao1981/kmeans.
      9ac05225
    • [SPARK-19163][PYTHON][SQL] Delay _judf initialization to the __call__ · 90638358
      zero323 authored
      ## What changes were proposed in this pull request?
      
      Defer `UserDefinedFunction._judf` initialization to the first call. This prevents unintended `SparkSession` initialization.  This allows users to define and import UDF without creating a context / session as a side effect.
      
      [SPARK-19163](https://issues.apache.org/jira/browse/SPARK-19163)
      
      ## How was this patch tested?
      
      Unit tests.
      
      Author: zero323 <zero323@users.noreply.github.com>
      
      Closes #16536 from zero323/SPARK-19163.
      90638358
    • [SPARK-19378][SS] Ensure continuity of stateOperator and eventTime metrics... · 081b7add
      Burak Yavuz authored
      [SPARK-19378][SS] Ensure continuity of stateOperator and eventTime metrics even if there is no new data in trigger
      
      ## What changes were proposed in this pull request?
      
      In Structured Streaming, if a new trigger is skipped because no new data arrived, we suddenly report nothing for the `stateOperator` metrics. We could, however, easily report the metrics from `lastExecution` to ensure continuity of metrics.
      
      ## How was this patch tested?
      
      Regression test in `StreamingQueryStatusAndProgressSuite`
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #16716 from brkyvz/state-agg.
      081b7add
    • [SPARK-17161][PYSPARK][ML] Add PySpark-ML JavaWrapper convenience function to... · 57d70d26
      Bryan Cutler authored
      [SPARK-17161][PYSPARK][ML] Add PySpark-ML JavaWrapper convenience function to create Py4J JavaArrays
      
      ## What changes were proposed in this pull request?
      
      Adding convenience function to Python `JavaWrapper` so that it is easy to create a Py4J JavaArray that is compatible with current class constructors that have a Scala `Array` as input so that it is not necessary to have a Java/Python friendly constructor.  The function takes a Java class as input that is used by Py4J to create the Java array of the given class.  As an example, `OneVsRest` has been updated to use this and the alternate constructor is removed.
      
      ## How was this patch tested?
      
      Added unit tests for the new convenience function and updated `OneVsRest` doctests which use this to persist the model.
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #14725 from BryanCutler/pyspark-new_java_array-CountVectorizer-SPARK-17161.
      57d70d26
    • [SPARK-19395][SPARKR] Convert coefficients in summary to matrix · ce112cec
      actuaryzhang authored
      ## What changes were proposed in this pull request?
      The `coefficients` component in the model summary should be a 'matrix' but the underlying structure is in fact a list. This affects several models, except for 'AFTSurvivalRegressionModel', which has the correct implementation. The fix is to first `unlist` the coefficients returned from `callJMethod` before converting them to a matrix. An example illustrates the issue:
      
      ```
      data(iris)
      df <- createDataFrame(iris)
      model <- spark.glm(df, Sepal_Length ~ Sepal_Width, family = "gaussian")
      s <- summary(model)
      
      > str(s$coefficients)
      List of 8
       $ : num 6.53
       $ : num -0.223
       $ : num 0.479
       $ : num 0.155
       $ : num 13.6
       $ : num -1.44
       $ : num 0
       $ : num 0.152
       - attr(*, "dim")= int [1:2] 2 4
       - attr(*, "dimnames")=List of 2
        ..$ : chr [1:2] "(Intercept)" "Sepal_Width"
        ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
      > s$coefficients[, 2]
      $`(Intercept)`
      [1] 0.4788963
      
      $Sepal_Width
      [1] 0.1550809
      ```
      
      This  shows that the underlying structure of coefficients is still `list`.
      
      felixcheung wangmiao1981
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #16730 from actuaryzhang/sparkRCoef.
      ce112cec