  1. Mar 02, 2017
    • Yun Ni's avatar
      [MINOR][ML] Fix comments in LSH Examples and Python API · 3bd8ddf7
      Yun Ni authored
      ## What changes were proposed in this pull request?
      Remove the `org.apache.spark.examples.` prefix from the example run commands in the comments.
      Add a missing slash in one of the Python docs.
      
      ## How was this patch tested?
      Run examples using the commands in the comments.
      
      Author: Yun Ni <yunn@uber.com>
      
      Closes #17104 from Yunni/yunn_minor.
      3bd8ddf7
    • windpiger's avatar
      [SPARK-19583][SQL] CTAS for data source table with a created location should succeed · de2b53df
      windpiger authored
      ## What changes were proposed in this pull request?
      
      ```
        spark.sql(
                s"""
                   |CREATE TABLE t
                   |USING parquet
                   |PARTITIONED BY(a, b)
                   |LOCATION '$dir'
                   |AS SELECT 3 as a, 4 as b, 1 as c, 2 as d
                 """.stripMargin)
      ```
      
      Failed with the error message:
      ```
      path file:/private/var/folders/6r/15tqm8hn3ldb3rmbfqm1gf4c0000gn/T/spark-195cd513-428a-4df9-b196-87db0c73e772 already exists.;
      org.apache.spark.sql.AnalysisException: path file:/private/var/folders/6r/15tqm8hn3ldb3rmbfqm1gf4c0000gn/T/spark-195cd513-428a-4df9-b196-87db0c73e772 already exists.;
      	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:102)
      ```
      while the same DDL succeeds for a Hive table, so we should fix it for data source tables.
      
      The reason is that the SaveMode check is placed in `InsertIntoHadoopFsRelationCommand` and is applied against the `path`. This is fine when we use `DataFrameWriter.save()`, because in that situation the SaveMode is meant to act on the path.
      
      But when we go through `CreateDataSourceAsSelectCommand`, the SaveMode acts on the table, and the table-level SaveMode check has already been done in `CreateDataSourceAsSelectCommand`. So we should not repeat the SaveMode check on the path in `InsertIntoHadoopFsRelationCommand`; for `CreateDataSourceAsSelectCommand` that check is redundant and wrong.
      
      After this PR, the following DDL succeeds; when the location already exists, we append to it or overwrite it.
      ```
      CREATE TABLE ... (PARTITIONED BY ...) LOCATION path AS SELECT ...
      ```
      
      ## How was this patch tested?
      unit test added
      
      Author: windpiger <songjun@outlook.com>
      
      Closes #16938 from windpiger/CTASDataSourceWitLocation.
      de2b53df
  2. Mar 01, 2017
    • GavinGavinNo1's avatar
      [SPARK-13931] Stage can hang if an executor fails while speculated tasks are running · 89990a01
      GavinGavinNo1 authored
      ## What changes were proposed in this pull request?
      When `executorLost` is invoked on a `TaskSetManager`, it is important to check whether the `isZombie` flag is set to true (see the sketch after the list below).
      
      This pull request fixes the following hang:
      
      1. Enable speculation in the application.
      2. Run the app and suppose the last task of shuffleMapStage 1 finishes. From the DAGScheduler's point of view the stage is complete, and in the TaskSetManager `isZombie` is set to true, but `runningTasksSet` is not empty because speculative copies are still running.
      3. Now an executor (say executor 3) is lost. On receiving this signal, the TaskScheduler invokes `executorLost` on every TaskSetManager in the rootPool, and the DAGScheduler removes all of that executor's output locations.
      4. The TaskSetManager adds all of that executor's tasks back to pendingTasks and tells the DAGScheduler they will be resubmitted (note: possibly not immediately).
      5. The DAGScheduler starts to submit a waiting stage, say shuffleMapStage 2, and finds that shuffleMapStage 1 is a missing parent because some of its output locations were removed when the executor was lost. So it submits shuffleMapStage 1 again.
      6. The DAGScheduler keeps receiving 'Resubmitted' task events from the old TaskSetManager and increments the pending-task count of shuffleMapStage 1 each time. However, the old TaskSetManager never launches a new task because its `isZombie` flag is set to true.
      7. As a result, shuffleMapStage 1 never finishes in the DAGScheduler, and neither does any stage that depends on it.
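
      A minimal sketch of the intended guard (the `executorLost` signature is from `TaskSetManager`; `handleLostExecutorTasks` is a hypothetical stand-in for the existing re-enqueue/notification logic), shown only to illustrate the idea:

      ```scala
      // Sketch only: a zombie TaskSetManager must not re-enqueue tasks for a lost executor,
      // otherwise the DAGScheduler keeps receiving Resubmitted events for a stage it already
      // considers finished.
      override def executorLost(execId: String, host: String, reason: ExecutorLossReason): Unit = {
        if (!isZombie) {
          handleLostExecutorTasks(execId, host, reason) // hypothetical helper
        }
        recomputeLocality()
      }
      ```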
      
      ## How was this patch tested?
      
      It's quite difficult to construct test cases.
      
      Author: GavinGavinNo1 <gavingavinno1@gmail.com>
      Author: 16092929 <16092929@cnsuning.com>
      
      Closes #16855 from GavinGavinNo1/resolve-stage-blocked2.
      89990a01
    • jinxing's avatar
      [SPARK-19777] Scan runningTasksSet when check speculatable tasks in TaskSetManager. · 51be6336
      jinxing authored
      ## What changes were proposed in this pull request?
      
      When checking for speculatable tasks in `TaskSetManager`, scan only `runningTasksSet` instead of scanning all of `taskInfos`.
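
      Roughly, the loop changes from iterating over every entry of `taskInfos` to iterating over the (much smaller) set of still-running task ids. A simplified sketch (the actual speculation condition is more involved):

      ```scala
      // Speculation is only relevant for tasks that are still running, so walk
      // runningTasksSet and look each TaskInfo up instead of scanning all of taskInfos.
      for (tid <- runningTasksSet) {
        val info = taskInfos(tid)
        val runtimeMs = clock.getTimeMillis() - info.launchTime
        if (!speculatableTasks.contains(info.index) && runtimeMs > threshold) {
          speculatableTasks += info.index
        }
      }
      ```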
      
      ## How was this patch tested?
      Existing tests.
      
      Author: jinxing <jinxing6042@126.com>
      
      Closes #17111 from jinxing64/SPARK-19777.
      51be6336
    • Dongjoon Hyun's avatar
      [SPARK-19775][SQL] Remove an obsolete `partitionBy().insertInto()` test case · db0ddce5
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This issue removes [a test case](https://github.com/apache/spark/blame/master/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala#L287-L298) which was introduced by [SPARK-14459](https://github.com/apache/spark/commit/652bbb1bf62722b08a062c7a2bf72019f85e179e) and was superseded by [SPARK-16033](https://github.com/apache/spark/blame/master/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala#L365-L371). Basically, we cannot use `partitionBy` and `insertInto` together.
      
      ```scala
        test("Reject partitioning that does not match table") {
          withSQLConf(("hive.exec.dynamic.partition.mode", "nonstrict")) {
            sql("CREATE TABLE partitioned (id bigint, data string) PARTITIONED BY (part string)")
            val data = (1 to 10).map(i => (i, s"data-$i", if ((i % 2) == 0) "even" else "odd"))
                .toDF("id", "data", "part")
      
            intercept[AnalysisException] {
              // cannot partition by 2 fields when there is only one in the table definition
              data.write.partitionBy("part", "data").insertInto("partitioned")
            }
          }
        }
      ```
      
      ## How was this patch tested?
      
      This only removes a test case. Passes the existing Jenkins tests.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #17106 from dongjoon-hyun/SPARK-19775.
      db0ddce5
    • actuaryzhang's avatar
      [DOC][MINOR][SPARKR] Update SparkR doc for names, columns and colnames · 2ff1467d
      actuaryzhang authored
      Update R doc:
      1. `columns`, `names` and `colnames` return a character vector, not a **list** as stated in the current doc.
      2. `colnames<-` does allow subset assignment, so the length of `value` can be less than the number of columns, e.g., `colnames(df)[1] <- "a"`.
      
      felixcheung
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #17115 from actuaryzhang/sparkRMinorDoc.
      2ff1467d
    • Vasilis Vryniotis's avatar
      [SPARK-19787][ML] Changing the default parameter of regParam. · 417140e4
      Vasilis Vryniotis authored
      ## What changes were proposed in this pull request?
      
      In the ALS method the default values of regParam do not match within the same file (lines [224](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala#L224) and [714](https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala#L714)). In one place we set it to 1.0 and in the other to 0.1.
      
      I changed the default in the `train()` method to 0.1 so that it now matches the default value that is visible to Spark users. The method is marked DeveloperApi, so the change should not affect users. Wherever we use this particular method we provide all parameters, so the default does not matter; the only exception is the unit tests in ALSSuite, and the change does not break them.
      
      Note: This PR should get the award of the laziest commit in Spark history. Originally I wanted to correct this on another PR but MLnick [suggested](https://github.com/apache/spark/pull/17059#issuecomment-283333572) to create a separate PR & ticket. If you think this change is too insignificant/minor, you are probably right, so feel free to reject and close this. :)
      
      ## How was this patch tested?
      
      Unit-tests
      
      Author: Vasilis Vryniotis <vvryniotis@hotels.com>
      
      Closes #17121 from datumbox/als_regparam.
      417140e4
    • windpiger's avatar
      [SPARK-19761][SQL] create InMemoryFileIndex with an empty rootPaths when set... · 8aa560b7
      windpiger authored
      [SPARK-19761][SQL] create InMemoryFileIndex with an empty rootPaths when set PARALLEL_PARTITION_DISCOVERY_THRESHOLD to zero failed
      
      ## What changes were proposed in this pull request?
      
      If we create an `InMemoryFileIndex` with empty `rootPaths` while PARALLEL_PARTITION_DISCOVERY_THRESHOLD is set to zero, it throws an exception:
      
      ```
      Positive number of slices required
      java.lang.IllegalArgumentException: Positive number of slices required
              at org.apache.spark.rdd.ParallelCollectionRDD$.slice(ParallelCollectionRDD.scala:119)
              at org.apache.spark.rdd.ParallelCollectionRDD.getPartitions(ParallelCollectionRDD.scala:97)
              at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
              at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
              at scala.Option.getOrElse(Option.scala:121)
              at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
              at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
              at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
              at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
              at scala.Option.getOrElse(Option.scala:121)
              at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
              at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
              at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
              at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
              at scala.Option.getOrElse(Option.scala:121)
              at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
              at org.apache.spark.SparkContext.runJob(SparkContext.scala:2084)
              at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
              at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
              at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
              at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
              at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
              at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex$.org$apache$spark$sql$execution$datasources$PartitioningAwareFileIndex$$bulkListLeafFiles(PartitioningAwareFileIndex.scala:357)
              at org.apache.spark.sql.execution.datasources.PartitioningAwareFileIndex.listLeafFiles(PartitioningAwareFileIndex.scala:256)
              at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:74)
              at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.<init>(InMemoryFileIndex.scala:50)
              at org.apache.spark.sql.execution.datasources.FileIndexSuite$$anonfun$9$$anonfun$apply$mcV$sp$2.apply$mcV$sp(FileIndexSuite.scala:186)
              at org.apache.spark.sql.test.SQLTestUtils$class.withSQLConf(SQLTestUtils.scala:105)
              at org.apache.spark.sql.execution.datasources.FileIndexSuite.withSQLConf(FileIndexSuite.scala:33)
              at org.apache.spark.sql.execution.datasources.FileIndexSuite$$anonfun$9.apply$mcV$sp(FileIndexSuite.scala:185)
              at org.apache.spark.sql.execution.datasources.FileIndexSuite$$anonfun$9.apply(FileIndexSuite.scala:185)
              at org.apache.spark.sql.execution.datasources.FileIndexSuite$$anonfun$9.apply(FileIndexSuite.scala:185)
              at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
              at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
      ```
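
      The root cause and the assumed shape of the fix, illustrated as a standalone snippet (`sc` is a SparkContext; the names are hypothetical):

      ```scala
      // With the parallel-discovery threshold at 0, even an empty set of root paths is routed
      // to the driver-side RDD listing, and parallelize(paths, numSlices = 0) throws
      // "Positive number of slices required". Short-circuiting the empty case avoids this.
      val paths: Seq[String] = Seq.empty
      val listed: Seq[String] =
        if (paths.isEmpty) Seq.empty
        else sc.parallelize(paths, paths.size).collect().toSeq
      ```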
      
      ## How was this patch tested?
      unit test added
      
      Author: windpiger <songjun@outlook.com>
      
      Closes #17093 from windpiger/fixEmptiPathInBulkListFiles.
      8aa560b7
    • Stan Zhai's avatar
      [SPARK-19766][SQL] Constant alias columns in INNER JOIN should not be folded... · 5502a9cf
      Stan Zhai authored
      [SPARK-19766][SQL] Constant alias columns in INNER JOIN should not be folded by FoldablePropagation rule
      
      ## What changes were proposed in this pull request?
      This PR fixes the Optimizer rule `FoldablePropagation` so that constant alias columns of an `INNER JOIN` query are no longer incorrectly folded.

      For the following query:
      
      ```
      val sqlA =
        """
          |create temporary view ta as
          |select a, 'a' as tag from t1 union all
          |select a, 'b' as tag from t2
        """.stripMargin
      
      val sqlB =
        """
          |create temporary view tb as
          |select a, 'a' as tag from t3 union all
          |select a, 'b' as tag from t4
        """.stripMargin
      
      val sql =
        """
          |select tb.* from ta inner join tb on
          |ta.a = tb.a and
          |ta.tag = tb.tag
        """.stripMargin
      ```
      
      The `tag` column is a constant alias column; it is folded by `FoldablePropagation` like this:
      
      ```
      TRACE SparkOptimizer:
      === Applying Rule org.apache.spark.sql.catalyst.optimizer.FoldablePropagation ===
       Project [a#4, tag#14]                              Project [a#4, tag#14]
      !+- Join Inner, ((a#0 = a#4) && (tag#8 = tag#14))   +- Join Inner, ((a#0 = a#4) && (a = a))
          :- Union                                           :- Union
          :  :- Project [a#0, a AS tag#8]                    :  :- Project [a#0, a AS tag#8]
          :  :  +- LocalRelation [a#0]                       :  :  +- LocalRelation [a#0]
          :  +- Project [a#2, b AS tag#9]                    :  +- Project [a#2, b AS tag#9]
          :     +- LocalRelation [a#2]                       :     +- LocalRelation [a#2]
          +- Union                                           +- Union
             :- Project [a#4, a AS tag#14]                      :- Project [a#4, a AS tag#14]
             :  +- LocalRelation [a#4]                          :  +- LocalRelation [a#4]
             +- Project [a#6, b AS tag#15]                      +- Project [a#6, b AS tag#15]
                +- LocalRelation [a#6]                             +- LocalRelation [a#6]
      ```
      
      The final result of the Operator Optimizations batch is:
      
      ```
      Project [a#4, tag#14]                              Project [a#4, tag#14]
      !+- Join Inner, ((a#0 = a#4) && (tag#8 = tag#14))   +- Join Inner, (a#0 = a#4)
      !   :- SubqueryAlias ta, `ta`                          :- Union
      !   :  +- Union                                        :  :- LocalRelation [a#0]
      !   :     :- Project [a#0, a AS tag#8]                 :  +- LocalRelation [a#2]
      !   :     :  +- SubqueryAlias t1, `t1`                 +- Union
      !   :     :     +- Project [a#0]                          :- LocalRelation [a#4, tag#14]
      !   :     :        +- SubqueryAlias grouping              +- LocalRelation [a#6, tag#15]
      !   :     :           +- LocalRelation [a#0]
      !   :     +- Project [a#2, b AS tag#9]
      !   :        +- SubqueryAlias t2, `t2`
      !   :           +- Project [a#2]
      !   :              +- SubqueryAlias grouping
      !   :                 +- LocalRelation [a#2]
      !   +- SubqueryAlias tb, `tb`
      !      +- Union
      !         :- Project [a#4, a AS tag#14]
      !         :  +- SubqueryAlias t3, `t3`
      !         :     +- Project [a#4]
      !         :        +- SubqueryAlias grouping
      !         :           +- LocalRelation [a#4]
      !         +- Project [a#6, b AS tag#15]
      !            +- SubqueryAlias t4, `t4`
      !               +- Project [a#6]
      !                  +- SubqueryAlias grouping
      !                     +- LocalRelation [a#6]
      ```
      
      The INNER JOIN condition `tag#8 = tag#14` has been removed, which makes the join produce wrong results.
      
      After fix:
      
      ```
      === Result of Batch LocalRelation ===
       GlobalLimit 21                                           GlobalLimit 21
       +- LocalLimit 21                                         +- LocalLimit 21
          +- Project [a#4, tag#11]                                 +- Project [a#4, tag#11]
             +- Join Inner, ((a#0 = a#4) && (tag#8 = tag#11))         +- Join Inner, ((a#0 = a#4) && (tag#8 = tag#11))
      !         :- SubqueryAlias ta                                      :- Union
      !         :  +- Union                                              :  :- LocalRelation [a#0, tag#8]
      !         :     :- Project [a#0, a AS tag#8]                       :  +- LocalRelation [a#2, tag#9]
      !         :     :  +- SubqueryAlias t1                             +- Union
      !         :     :     +- Project [a#0]                                :- LocalRelation [a#4, tag#11]
      !         :     :        +- SubqueryAlias grouping                    +- LocalRelation [a#6, tag#12]
      !         :     :           +- LocalRelation [a#0]
      !         :     +- Project [a#2, b AS tag#9]
      !         :        +- SubqueryAlias t2
      !         :           +- Project [a#2]
      !         :              +- SubqueryAlias grouping
      !         :                 +- LocalRelation [a#2]
      !         +- SubqueryAlias tb
      !            +- Union
      !               :- Project [a#4, a AS tag#11]
      !               :  +- SubqueryAlias t3
      !               :     +- Project [a#4]
      !               :        +- SubqueryAlias grouping
      !               :           +- LocalRelation [a#4]
      !               +- Project [a#6, b AS tag#12]
      !                  +- SubqueryAlias t4
      !                     +- Project [a#6]
      !                        +- SubqueryAlias grouping
      !                           +- LocalRelation [a#6]
      ```
      
      ## How was this patch tested?
      
      Added sql-tests/inputs/inner-join.sql.
      All tests passed.
      
      Author: Stan Zhai <zhaishidan@haizhi.com>
      
      Closes #17099 from stanzhai/fix-inner-join.
      5502a9cf
    • Liang-Chi Hsieh's avatar
      [SPARK-19736][SQL] refreshByPath should clear all cached plans with the specified path · 38e78353
      Liang-Chi Hsieh authored
      ## What changes were proposed in this pull request?
      
      `Catalog.refreshByPath` can refresh the cache entry and the associated metadata for all dataframes (if any), that contain the given data source path.
      
      However, `CacheManager.invalidateCachedPath` doesn't clear all cached plans with the specified path. It causes some strange behaviors reported in SPARK-15678.
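
      An illustration of the expected behavior (the paths and data are hypothetical):

      ```scala
      val df = spark.read.parquet("/data/events")
      df.cache()
      df.count()                                   // materializes the cache

      // ... the files under /data/events are rewritten by an external job ...

      spark.catalog.refreshByPath("/data/events")  // should invalidate every cached plan that uses this path
      df.count()                                   // re-reads and re-caches the fresh data
      ```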
      
      ## How was this patch tested?
      
      Jenkins tests.
      
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #17064 from viirya/fix-refreshByPath.
      38e78353
    • Liwei Lin's avatar
      [SPARK-19633][SS] FileSource read from FileSink · 4913c92c
      Liwei Lin authored
      ## What changes were proposed in this pull request?
      
      Right now the file source always uses `InMemoryFileIndex` to scan files from a given path.
      
      But when reading the outputs from another streaming query, the file source should use `MetadataFileIndex` to list files from the sink log. This patch adds this support.
      
      ## `MetadataFileIndex` or `InMemoryFileIndex`
      ```scala
      spark
        .readStream
        .format(...)
        .load("/some/path") // for a non-glob path:
                            //   - use `MetadataFileIndex` when `/some/path/_spark_meta` exists
                            //   - fall back to `InMemoryFileIndex` otherwise
      ```
      ```scala
      spark
        .readStream
        .format(...)
        .load("/some/path/*/*") // for a glob path: always use `InMemoryFileIndex`
      ```
      
      ## How was this patch tested?
      
      two newly added tests
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #16987 from lw-lin/source-read-from-sink.
      4913c92c
    • wm624@hotmail.com's avatar
      [SPARK-19460][SPARKR] Update dataset used in R documentation, examples to... · 89cd3845
      wm624@hotmail.com authored
      [SPARK-19460][SPARKR] Update dataset used in R documentation, examples to reduce warning noise and confusions
      
      ## What changes were proposed in this pull request?
      
      Replace the `iris` dataset with `Titanic` or other datasets in the examples and documentation.
      
      ## How was this patch tested?
      
      Manual and existing test
      
      Author: wm624@hotmail.com <wm624@hotmail.com>
      
      Closes #17032 from wangmiao1981/example.
      89cd3845
    • Jeff Zhang's avatar
      [SPARK-19572][SPARKR] Allow to disable hive in sparkR shell · 73158805
      Jeff Zhang authored
      ## What changes were proposed in this pull request?
      SPARK-15236 did this for the Scala shell; this ticket does the same for the sparkR shell. This is not only for sparkR itself, but also benefits downstream projects like Livy, which uses shell.R for its interactive sessions. For now, Livy has no way to control whether Hive is enabled or not.
      
      ## How was this patch tested?
      
      Tested manually: ran `bin/sparkR --master local --conf spark.sql.catalogImplementation=in-memory` and verified that Hive is not enabled.
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #16907 from zjffdu/SPARK-19572.
      73158805
  3. Feb 28, 2017
    • Yuhao's avatar
      [SPARK-14503][ML] spark.ml API for FPGrowth · 0fe8020f
      Yuhao authored
      ## What changes were proposed in this pull request?
      
      jira: https://issues.apache.org/jira/browse/SPARK-14503
      Function parity: Add FPGrowth and AssociationRules to ML.
      
      design doc: https://docs.google.com/document/d/1bVhABn5DiEj8bw0upqGMJT2L4nvO_0_cXdwu4uMT6uU/pub
      
      Currently I make FPGrowthModel a transformer. For each association rule,  it will just examine the input items against antecedents and summarize the consequents.
      
      Update:
      Thinking again, FPGrowth is only the algorithm to find the frequent itemsets, and can be replaced by other algorithms. The frequent itemsets are used by AssociationRules to generate the association rules. Then we can use the association rules to predict with other records.
      
      ![drawing1](https://cloud.githubusercontent.com/assets/7981698/22489294/76b9302c-e7cb-11e6-8d2d-3fc53f407b2f.png)
      
      **For reviewers**: let's first decide whether the current `transform` function meets your expectations.
      
      Current options:
      
      1. Current implementation: Use Estimator and Transformer pattern in ML, the `transform` function will examine the input items against all the association rules and summarize the consequents. Users can also access frequent items and association rules via other model members.
      
      2. Keep the Estimator and Transformer pattern. But AssociationRulesModel and FPGrowthModel will have empty `transform` function, meaning DataFrame has no change after transform. But users can access frequent items and association rules via other model members.
      
      3. (mentioned by zhengruifeng) Keep the Estimator and Transformer pattern. But `FPGrowthModel` and `AssociationRulesModel` will just return frequent itemsets and association rules DataFrame in the `transform` function. Meaning the resulting DataFrame after `transform` will not be related to the input DataFrame.
      
      4. Discard the Estimator and Transformer pattern. Both FPGrowth and FPGrowthModel will directly extend from PipelineStage, thus we don't need to have a `transform` function.
      
       I'd like to hear more concrete suggestions. I would prefer option 1 or 2.
      
      update 2:
      
      As discussed  in the jira, we will not expose AssociationRules as a public API for now.
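
      For reference, a minimal usage sketch of the proposed spark.ml API (the data and thresholds are illustrative, not taken from the patch):

      ```scala
      import org.apache.spark.ml.fpm.FPGrowth
      import spark.implicits._

      val dataset = spark.createDataset(Seq(
        "1 2 5",
        "1 2 3 5",
        "1 2")
      ).map(_.split(" ")).toDF("items")

      val model = new FPGrowth()
        .setItemsCol("items")
        .setMinSupport(0.5)
        .setMinConfidence(0.6)
        .fit(dataset)

      model.freqItemsets.show()        // frequent itemsets
      model.associationRules.show()    // rules generated from the frequent itemsets
      model.transform(dataset).show()  // per-row prediction of consequents (option 1 above)
      ```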
      
      ## How was this patch tested?
      
      new unit test suites
      
      Author: Yuhao <yuhao.yang@intel.com>
      Author: Yuhao Yang <yuhao.yang@intel.com>
      Author: Yuhao Yang <hhbyyh@gmail.com>
      
      Closes #15415 from hhbyyh/mlfpm.
      0fe8020f
    • Michael Gummelt's avatar
      [SPARK-19373][MESOS] Base spark.scheduler.minRegisteredResourceRatio on... · ca3864d6
      Michael Gummelt authored
      [SPARK-19373][MESOS] Base spark.scheduler.minRegisteredResourceRatio on registered cores rather than accepted cores
      
      ## What changes were proposed in this pull request?
      
      See JIRA
      
      ## How was this patch tested?
      
      Unit tests, Mesos/Spark integration tests
      
      cc skonto susanxhuynh
      
      Author: Michael Gummelt <mgummelt@mesosphere.io>
      
      Closes #17045 from mgummelt/SPARK-19373-registered-resources.
      ca3864d6
    • Michael McCune's avatar
      [SPARK-19769][DOCS] Update quickstart instructions · bf5987cb
      Michael McCune authored
      ## What changes were proposed in this pull request?
      
      This change addresses the renaming of the `simple.sbt` build file to
      `build.sbt`. Newer versions of the sbt tool no longer find the older
      file name and look for `build.sbt` instead. The quickstart
      instructions for self-contained applications are updated
      accordingly.
      
      ## How was this patch tested?
      
      As this is a relatively minor change of a few words, the markdown was checked for syntax and spelling. Site was built with `SKIP_API=1 jekyll serve` for testing purposes.
      
      Author: Michael McCune <msm@redhat.com>
      
      Closes #17101 from elmiko/spark-19769.
      bf5987cb
    • actuaryzhang's avatar
      [MINOR][DOC] Update GLM doc to include tweedie distribution · d743ea4c
      actuaryzhang authored
      Update GLM documentation to include the Tweedie distribution. #16344
      
      jkbradley yanboliang
      
      Author: actuaryzhang <actuaryzhang10@gmail.com>
      
      Closes #17103 from actuaryzhang/doc.
      d743ea4c
    • hyukjinkwon's avatar
      [SPARK-19610][SQL] Support parsing multiline CSV files · 7e5359be
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes support for multi-line records in CSV, mirroring the multi-line support in the JSON datasource (which, for JSON, is per file).

      To that end, this PR introduces a `wholeFile` option which makes the format non-splittable and reads each file as a whole. Since the Univocity parser can produce each row from a stream, it should be capable of parsing very large documents as long as the internal rows fit in memory.
      
      ## How was this patch tested?
      
      Unit tests in `CSVSuite` and `tests.py`
      
      Manual tests with a single 9GB CSV file in local file system, for example,
      
      ```scala
      spark.read.option("wholeFile", true).option("inferSchema", true).csv("tmp.csv").count()
      ```
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #16976 from HyukjinKwon/SPARK-19610.
      7e5359be
    • windpiger's avatar
      [SPARK-19463][SQL] refresh cache after the InsertIntoHadoopFsRelationCommand · ce233f18
      windpiger authored
      ## What changes were proposed in this pull request?
      
      If we first cache a data source table and then insert some data into it, the data in the cache should be refreshed after the insert command.
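
      An illustration of the expected behavior (the table name and values are hypothetical):

      ```scala
      spark.sql("CREATE TABLE t (i INT) USING parquet")
      spark.sql("INSERT INTO t VALUES (1)")
      spark.table("t").cache()
      spark.table("t").count()               // 1; the cache is now populated

      spark.sql("INSERT INTO t VALUES (2)")  // runs InsertIntoHadoopFsRelationCommand
      spark.table("t").count()               // should be 2 after this patch, not a stale cached 1
      ```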
      
      ## How was this patch tested?
      unit test added
      
      Author: windpiger <songjun@outlook.com>
      
      Closes #16809 from windpiger/refreshCacheAfterInsert.
      ce233f18
    • Roberto Agostino Vitillo's avatar
      [SPARK-19677][SS] Committing a delta file atop an existing one should not fail on HDFS · 9734a928
      Roberto Agostino Vitillo authored
      ## What changes were proposed in this pull request?
      
      HDFSBackedStateStoreProvider fails to rename files on HDFS but not on the local filesystem. According to the [implementation notes](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/filesystem/filesystem.html) of `rename()`, the behavior of the local filesystem and HDFS varies:
      
      > Destination exists and is a file
      > Renaming a file atop an existing file is specified as failing, raising an exception.
      >    - Local FileSystem : the rename succeeds; the destination file is replaced by the source file.
      >    - HDFS : The rename fails, no exception is raised. Instead the method call simply returns false.
      
      This patch ensures that `rename()` isn't called if the destination file already exists. It's still semantically correct because Structured Streaming requires that rerunning a batch should generate the same output.
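
      A sketch of the guard using the Hadoop `FileSystem` API (`fs`, `tempDeltaFile` and `finalDeltaFile` are hypothetical names):

      ```scala
      // Skip the rename when the destination delta file already exists: re-running a batch must
      // produce the same output, so a file left by a previous attempt can be kept as-is. On HDFS,
      // renaming onto an existing file just returns false, which previously made the commit fail.
      if (!fs.exists(finalDeltaFile)) {
        if (!fs.rename(tempDeltaFile, finalDeltaFile)) {
          throw new java.io.IOException(s"Failed to rename $tempDeltaFile to $finalDeltaFile")
        }
      }
      ```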
      
      ## How was this patch tested?
      
      This patch was tested by running `StateStoreSuite`.
      
      Author: Roberto Agostino Vitillo <ra.vitillo@gmail.com>
      
      Closes #17012 from vitillo/fix_rename.
      9734a928
    • Wenchen Fan's avatar
      [SPARK-19678][SQL] remove MetastoreRelation · 7c7fc30b
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      `MetastoreRelation` is used to represent the table relation for Hive tables and provides some Hive-related information. We resolve `SimpleCatalogRelation` to `MetastoreRelation` for Hive tables, which is unnecessary since the two are essentially the same. This PR merges `SimpleCatalogRelation` and `MetastoreRelation`.
      
      ## How was this patch tested?
      
      existing tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17015 from cloud-fan/table-relation.
      7c7fc30b
    • Nick Pentreath's avatar
      [SPARK-14489][ML][PYSPARK] ALS unknown user/item prediction strategy · b4054665
      Nick Pentreath authored
      This PR adds a param to `ALS`/`ALSModel` to set the strategy used when encountering unknown users or items at prediction time in `transform`. This can occur in 2 scenarios: (a) production scoring, and (b) cross-validation & evaluation.
      
      The current behavior returns `NaN` if a user/item is unknown. In scenario (b), this can easily occur when using `CrossValidator` or `TrainValidationSplit` since some users/items may only occur in the test set and not in the training set. In this case, the evaluator returns `NaN` for all metrics, making model selection impossible.
      
      The new param, `coldStartStrategy`, defaults to `nan` (the current behavior). The other option supported initially is `drop`, which drops all rows with `NaN` predictions. This flag allows users to use `ALS` in cross-validation settings. It is made an `expertParam`. The param is made a string so that the set of strategies can be extended in future (some options are discussed in [SPARK-14489](https://issues.apache.org/jira/browse/SPARK-14489)).
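
      A usage sketch (the column names and the `training`/`test` DataFrames are illustrative):

      ```scala
      import org.apache.spark.ml.recommendation.ALS

      val als = new ALS()
        .setUserCol("userId")
        .setItemCol("movieId")
        .setRatingCol("rating")
        .setColdStartStrategy("drop")  // default is "nan", which keeps NaN predictions

      val model = als.fit(training)
      // rows for users/items unseen during fitting are dropped instead of predicted as NaN,
      // so downstream evaluators no longer return NaN metrics
      val predictions = model.transform(test)
      ```
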
      ## How was this patch tested?
      
      New unit tests, and manual "before and after" tests for Scala & Python using MovieLens `ml-latest-small` as example data. Here, using `CrossValidator` or `TrainValidationSplit` with the default param setting results in metrics that are all `NaN`, while setting `coldStartStrategy` to `drop` results in valid metrics.
      
      Author: Nick Pentreath <nickp@za.ibm.com>
      
      Closes #12896 from MLnick/SPARK-14489-als-nan.
      b4054665
    • Yuming Wang's avatar
      [SPARK-19660][CORE][SQL] Replace the configuration property names that are... · 9b8eca65
      Yuming Wang authored
      [SPARK-19660][CORE][SQL] Replace the configuration property names that are deprecated in the version of Hadoop 2.6
      
      ## What changes were proposed in this pull request?
      
      Replace all the Hadoop deprecated configuration property names according to [DeprecatedProperties](https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html).
      
      except:
      https://github.com/apache/spark/blob/v2.1.0/python/pyspark/sql/tests.py#L1533
      https://github.com/apache/spark/blob/v2.1.0/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala#L987
      https://github.com/apache/spark/blob/v2.1.0/sql/core/src/main/scala/org/apache/spark/sql/execution/command/SetCommand.scala#L45
      https://github.com/apache/spark/blob/v2.1.0/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L614
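
      A couple of illustrative renames from that table (examples only, not necessarily the exact set this PR touches):

      ```scala
      import org.apache.hadoop.conf.Configuration

      val hadoopConf = new Configuration()
      hadoopConf.set("mapreduce.job.maps", "10")                             // was: mapred.map.tasks
      hadoopConf.set("mapreduce.output.fileoutputformat.compress", "true")   // was: mapred.output.compress
      ```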
      
      ## How was this patch tested?
      
      Existing tests
      
      Author: Yuming Wang <wgyumg@gmail.com>
      
      Closes #16990 from wangyum/HadoopDeprecatedProperties.
      9b8eca65
    • windpiger's avatar
      [SPARK-19748][SQL] refresh function has a wrong order to do cache invalidate... · a350bc16
      windpiger authored
      [SPARK-19748][SQL] refresh function has a wrong order to do cache invalidate and regenerate the inmemory var for InMemoryFileIndex with FileStatusCache
      
      ## What changes were proposed in this pull request?
      
      If we refresh an `InMemoryFileIndex` that has a `FileStatusCache`, it first uses the FileStatusCache to regenerate `cachedLeafFiles` etc., and only then calls `FileStatusCache.invalidateAll`.

      The order of these two actions is wrong, so the refresh does not take effect.
      
      ```
        override def refresh(): Unit = {
          refresh0()
          fileStatusCache.invalidateAll()
        }
      
        private def refresh0(): Unit = {
          val files = listLeafFiles(rootPaths)
          cachedLeafFiles =
            new mutable.LinkedHashMap[Path, FileStatus]() ++= files.map(f => f.getPath -> f)
          cachedLeafDirToChildrenFiles = files.toArray.groupBy(_.getPath.getParent)
          cachedPartitionSpec = null
        }
      ```
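
      The assumed shape of the fix is simply to invalidate the `FileStatusCache` before rebuilding the in-memory state, so the fresh listing cannot be served from stale cache entries:

      ```scala
      override def refresh(): Unit = {
        fileStatusCache.invalidateAll()
        refresh0()
      }
      ```
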
      ## How was this patch tested?
      unit test added
      
      Author: windpiger <songjun@outlook.com>
      
      Closes #17079 from windpiger/fixInMemoryFileIndexRefresh.
      a350bc16
  4. Feb 27, 2017
    • uncleGen's avatar
      [SPARK-19749][SS] Name socket source with a meaningful name · 73530383
      uncleGen authored
      ## What changes were proposed in this pull request?
      
      Name socket source with a meaningful name
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: uncleGen <hustyugm@gmail.com>
      
      Closes #17082 from uncleGen/SPARK-19749.
      73530383
    • sethah's avatar
      [SPARK-19746][ML] Faster indexing for logistic aggregator · 16d8472f
      sethah authored
      ## What changes were proposed in this pull request?
      
      JIRA: [SPARK-19746](https://issues.apache.org/jira/browse/SPARK-19746)
      
      The following code is inefficient:
      
      ````scala
          val localCoefficients: Vector = bcCoefficients.value
      
          features.foreachActive { (index, value) =>
            val stdValue = value / localFeaturesStd(index)
            var j = 0
            while (j < numClasses) {
              margins(j) += localCoefficients(index * numClasses + j) * stdValue
              j += 1
            }
          }
      ````
      
      `localCoefficients(index * numClasses + j)` calls `Vector.apply` which creates a new Breeze vector and indexes that. Even if it is not that slow to create the object, we will generate a lot of extra garbage that may result in longer GC pauses. This is a hot inner loop, so we should optimize wherever possible.
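
      A sketch of the optimization (using the same field names as above, simplified): grab the underlying values once outside the loop so the hot path indexes a plain `Array[Double]` instead of going through `Vector.apply`:

      ```scala
      val localCoefficients: Array[Double] = bcCoefficients.value.toArray

      features.foreachActive { (index, value) =>
        val stdValue = value / localFeaturesStd(index)
        var j = 0
        while (j < numClasses) {
          // plain array indexing: no temporary Breeze vector is created per access
          margins(j) += localCoefficients(index * numClasses + j) * stdValue
          j += 1
        }
      }
      ```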
      
      ## How was this patch tested?
      
      I don't think there's a great way to test this patch. It's purely performance related, so unit tests should guarantee that we haven't made any unwanted changes. Empirically I observed between 10-40% speedups just running short local tests. I suspect the big differences will be seen when large data/coefficient sizes have to pause for GC more often. I welcome other ideas for testing.
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #17078 from sethah/logistic_agg_indexing.
      16d8472f
    • hyukjinkwon's avatar
      [SPARK-15615][SQL][BUILD][FOLLOW-UP] Replace deprecated usage of json(RDD[String]) API · 8a5a5850
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to replace the deprecated `json(RDD[String])` usage with `json(Dataset[String])`.

      The deprecated usage currently produces many warnings.
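
      A sketch of the migration (the data is hypothetical):

      ```scala
      import spark.implicits._

      val jsonStrings = Seq("""{"a": 1}""", """{"a": 2}""")

      // deprecated: spark.read.json(spark.sparkContext.parallelize(jsonStrings))
      val df = spark.read.json(jsonStrings.toDS())
      ```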
      
      ## How was this patch tested?
      
      Fixed tests.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17071 from HyukjinKwon/SPARK-15615-followup.
      8a5a5850
    • hyukjinkwon's avatar
      [MINOR][BUILD] Fix lint-java breaks in Java · 4ba9c6c4
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR proposes to fix the lint-breaks as below:
      
      ```
      [ERROR] src/test/java/org/apache/spark/network/TransportResponseHandlerSuite.java:[29,8] (imports) UnusedImports: Unused import - org.apache.spark.network.buffer.ManagedBuffer.
      [ERROR] src/main/java/org/apache/spark/unsafe/types/UTF8String.java:[156,10] (modifier) ModifierOrder: 'Nonnull' annotation modifier does not precede non-annotation modifiers.
      [ERROR] src/main/java/org/apache/spark/SparkFirehoseListener.java:[122] (sizes) LineLength: Line is longer than 100 characters (found 105).
      [ERROR] src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java:[164,78] (coding) OneStatementPerLine: Only one statement per line allowed.
      [ERROR] src/test/java/test/org/apache/spark/JavaAPISuite.java:[1157] (sizes) LineLength: Line is longer than 100 characters (found 121).
      [ERROR] src/test/java/org/apache/spark/streaming/JavaMapWithStateSuite.java:[149] (sizes) LineLength: Line is longer than 100 characters (found 113).
      [ERROR] src/test/java/test/org/apache/spark/streaming/Java8APISuite.java:[146] (sizes) LineLength: Line is longer than 100 characters (found 122).
      [ERROR] src/test/java/test/org/apache/spark/streaming/JavaAPISuite.java:[32,8] (imports) UnusedImports: Unused import - org.apache.spark.streaming.Time.
      [ERROR] src/test/java/test/org/apache/spark/streaming/JavaAPISuite.java:[611] (sizes) LineLength: Line is longer than 100 characters (found 101).
      [ERROR] src/test/java/test/org/apache/spark/streaming/JavaAPISuite.java:[1317] (sizes) LineLength: Line is longer than 100 characters (found 102).
      [ERROR] src/test/java/test/org/apache/spark/sql/JavaDatasetAggregatorSuite.java:[91] (sizes) LineLength: Line is longer than 100 characters (found 102).
      [ERROR] src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:[113] (sizes) LineLength: Line is longer than 100 characters (found 101).
      [ERROR] src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:[164] (sizes) LineLength: Line is longer than 100 characters (found 110).
      [ERROR] src/test/java/test/org/apache/spark/sql/JavaDatasetSuite.java:[212] (sizes) LineLength: Line is longer than 100 characters (found 114).
      [ERROR] src/test/java/org/apache/spark/mllib/tree/JavaDecisionTreeSuite.java:[36] (sizes) LineLength: Line is longer than 100 characters (found 101).
      [ERROR] src/main/java/org/apache/spark/examples/streaming/JavaKinesisWordCountASL.java:[26,8] (imports) UnusedImports: Unused import - com.amazonaws.regions.RegionUtils.
      [ERROR] src/test/java/org/apache/spark/streaming/kinesis/JavaKinesisStreamSuite.java:[20,8] (imports) UnusedImports: Unused import - com.amazonaws.regions.RegionUtils.
      [ERROR] src/test/java/org/apache/spark/streaming/kinesis/JavaKinesisStreamSuite.java:[94] (sizes) LineLength: Line is longer than 100 characters (found 103).
      [ERROR] src/main/java/org/apache/spark/examples/ml/JavaTokenizerExample.java:[30,8] (imports) UnusedImports: Unused import - org.apache.spark.sql.api.java.UDF1.
      [ERROR] src/main/java/org/apache/spark/examples/ml/JavaTokenizerExample.java:[72] (sizes) LineLength: Line is longer than 100 characters (found 104).
      [ERROR] src/main/java/org/apache/spark/examples/mllib/JavaRankingMetricsExample.java:[121] (sizes) LineLength: Line is longer than 100 characters (found 101).
      [ERROR] src/main/java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java:[28,8] (imports) UnusedImports: Unused import - org.apache.spark.api.java.JavaRDD.
      [ERROR] src/main/java/org/apache/spark/examples/sql/JavaSQLDataSourceExample.java:[29,8] (imports) UnusedImports: Unused import - org.apache.spark.api.java.JavaSparkContext.
      ```
      
      ## How was this patch tested?
      
      Manually via
      
      ```bash
      ./dev/lint-java
      ```
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #17072 from HyukjinKwon/java-lint.
      4ba9c6c4
  5. Feb 26, 2017
    • Eyal Zituny's avatar
      [SPARK-19594][STRUCTURED STREAMING] StreamingQueryListener fails to handle... · 9f8e3921
      Eyal Zituny authored
      [SPARK-19594][STRUCTURED STREAMING] StreamingQueryListener fails to handle QueryTerminatedEvent if more then one listeners exists
      
      ## What changes were proposed in this pull request?
      
      Currently, if multiple streaming query listeners exist, when a QueryTerminatedEvent is triggered only one of the listeners is invoked while the rest ignore the event.
      This happens because the streaming query listener bus holds a set of running query ids, and when a termination event is triggered, the terminated query id is removed from the set as soon as the first listener has handled the event.
      In this PR, the query id is removed from the set only after all the listeners have handled the event.
      
      ## How was this patch tested?
      
      A test with multiple listeners has been added to StreamingQueryListenerSuite.
      
      Author: Eyal Zituny <eyal.zituny@equalum.io>
      
      Closes #16991 from eyalzit/master.
      9f8e3921
    • Dilip Biswal's avatar
      [SQL] Duplicate test exception in SQLQueryTestSuite due to meta files(.DS_Store) on Mac · 68f2142c
      Dilip Biswal authored
      ## What changes were proposed in this pull request?
      After adding the tests for subqueries, we now have multiple levels of directories under "sql-tests/inputs". Sometimes on Mac, the Finder application creates metadata files called ".DS_Store". When these files are present at different levels of the directory hierarchy, we get a duplicate-test exception while running the tests, because we use just the file name as the test case name. In this PR, we use the file path relative to the base directory, together with the test file name, as the test name. After this change, we can also have the same test file name under different directories, such as exists/basic.sql and in/basic.sql. Here is the truncated output of the test run after the change.
      
      ```SQL
      [info] SQLQueryTestSuite:
      [info] - arithmetic.sql (5 seconds, 235 milliseconds)
      [info] - array.sql (536 milliseconds)
      [info] - blacklist.sql !!! IGNORED !!!
      [info] - cast.sql (550 milliseconds)
      ....
      ....
      ....
      [info] - union.sql (315 milliseconds)
      [info] - subquery/.DS_Store !!! IGNORED !!!
      [info] - subquery/exists-subquery/.DS_Store !!! IGNORED !!!
      [info] - subquery/exists-subquery/exists-aggregate.sql (2 seconds, 451 milliseconds)
      ....
      ....
      [info] - subquery/in-subquery/in-group-by.sql (12 seconds, 264 milliseconds)
      ....
      ....
      [info] - subquery/scalar-subquery/scalar-subquery-predicate.sql (7 seconds, 769 milliseconds)
      [info] - subquery/scalar-subquery/scalar-subquery-select.sql (4 seconds, 119 milliseconds)
      ```
      Since this is a simple change, I haven't created a JIRA for it.
      ## How was this patch tested?
      Manually verified. This is a change to the test infrastructure.
      
      Author: Dilip Biswal <dbiswal@us.ibm.com>
      
      Closes #17060 from dilipbiswal/sqlquerytestsuite.
      68f2142c
    • Wenchen Fan's avatar
      [SPARK-17075][SQL][FOLLOWUP] fix some minor issues and clean up the code · 89608cf2
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      This is a follow-up of https://github.com/apache/spark/pull/16395. It fixes some code style issues, naming issues, some missing cases in pattern match, etc.
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #17065 from cloud-fan/follow-up.
      89608cf2
    • Joseph K. Bradley's avatar
      [MINOR][ML][DOC] Document default value for GeneralizedLinearRegression.linkPower · 6ab60542
      Joseph K. Bradley authored
      Add Scaladoc for GeneralizedLinearRegression.linkPower default value
      
      Follow-up to https://github.com/apache/spark/pull/16344
      
      Author: Joseph K. Bradley <joseph@databricks.com>
      
      Closes #17069 from jkbradley/tweedie-comment.
      6ab60542
  6. Feb 25, 2017
    • Devaraj K's avatar
      [SPARK-15288][MESOS] Mesos dispatcher should handle gracefully when any thread... · 410392ed
      Devaraj K authored
      [SPARK-15288][MESOS] Mesos dispatcher should handle gracefully when any thread gets UncaughtException
      
      ## What changes were proposed in this pull request?
      
      Adding the default UncaughtExceptionHandler to the MesosClusterDispatcher.
      ## How was this patch tested?
      
      I verified it manually: when any of the dispatcher threads gets an uncaught exception, the default UncaughtExceptionHandler now handles it.
      
      Author: Devaraj K <devaraj@apache.org>
      
      Closes #13072 from devaraj-kavali/SPARK-15288.
      410392ed
    • lvdongr's avatar
      [SPARK-19673][SQL] "ThriftServer default app name is changed wrong" · fe07de95
      lvdongr authored
      ## What changes were proposed in this pull request?
      In Spark 1.x, the ThriftServer application name is SparkSQL:localHostName, while the ThriftServer default name has now been changed to the class name of HiveThriftServer2, which is not appropriate.
      
      ## How was this patch tested?
      manual tests
      
      
      Author: lvdongr <lv.dongdong@zte.com.cn>
      
      Closes #17010 from lvdongr/ThriftserverName.
      fe07de95
    • Boaz Mohar's avatar
      [MINOR][DOCS] Fixes two problems in the SQL programing guide page · 061bcfb8
      Boaz Mohar authored
      ## What changes were proposed in this pull request?
      
      Removed duplicated lines in the SQL Python example and fixed a typo.
      
      ## How was this patch tested?
      
      Searched the page for other typos to minimize the number of PRs.
      
      Author: Boaz Mohar <boazmohar@gmail.com>
      
      Closes #17066 from boazmohar/doc-fix.
      061bcfb8
    • Herman van Hovell's avatar
      [SPARK-19650] Commands should not trigger a Spark job · 8f0511ed
      Herman van Hovell authored
      Spark executes SQL commands eagerly. It does this by creating an RDD which contains the command's results. The downside is that any action on this RDD triggers a Spark job, which is expensive and unnecessary.

      This PR fixes this by avoiding the materialization of an `RDD` for `Command`s; it just materializes the result and puts it in a `LocalRelation`.
      
      Added a regression test to `SQLQuerySuite`.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #17027 from hvanhovell/no-job-command.
      8f0511ed
    • Xiao Li's avatar
      [SPARK-19735][SQL] Remove HOLD_DDLTIME from Catalog APIs · 4cb025af
      Xiao Li authored
      ### What changes were proposed in this pull request?
      As explained in Hive JIRA https://issues.apache.org/jira/browse/HIVE-12224, HOLD_DDLTIME was broken as soon as it landed. Hive 2.0 removes HOLD_DDLTIME from the API. In Spark SQL, we always set it to FALSE. Like Hive, we should also remove it from our Catalog APIs.
      
      ### How was this patch tested?
      N/A
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #17063 from gatorsmile/removalHoldDDLTime.
      4cb025af
  7. Feb 24, 2017
    • Ramkumar Venkataraman's avatar
      [MINOR][DOCS] Fix few typos in structured streaming doc · 1b9ba258
      Ramkumar Venkataraman authored
      ## What changes were proposed in this pull request?
      
      Minor typo: `even-time` is changed to `event-time`, and a couple of grammatical errors are fixed.
      
      ## How was this patch tested?
      
      N/A - since this is a doc fix. I did a jekyll build locally though.
      
      Author: Ramkumar Venkataraman <rvenkataraman@paypal.com>
      
      Closes #17037 from ramkumarvenkat/doc-fix.
      1b9ba258
    • Shubham Chopra's avatar
      [SPARK-15355][CORE] Proactive block replication · fa7c582e
      Shubham Chopra authored
      ## What changes were proposed in this pull request?
      
      We are proposing the addition of proactive block replication in case of executor failures. BlockManagerMasterEndpoint does all the book-keeping to keep track of all the executors and the blocks they hold. It also keeps track of which executors are alive through heartbeats. When an executor is removed, all this book-keeping state is updated to reflect the lost executor. This step can be used to identify executors that still hold a copy of the cached data, and a message can be sent to them to use the existing "replicate" function to find and place new replicas on other suitable hosts. Blocks replicated this way will let the master know of their existence.

      This happens when an executor is lost, and is therefore proactive, as opposed to replication being done at query time.
      ## How was this patch tested?
      
      This patch was tested with existing unit tests along with new unit tests added to test the functionality.
      
      Author: Shubham Chopra <schopra31@bloomberg.net>
      
      Closes #14412 from shubhamchopra/ProactiveBlockReplication.
      fa7c582e
    • Jeff Zhang's avatar
      [SPARK-13330][PYSPARK] PYTHONHASHSEED is not propgated to python worker · 330c3e33
      Jeff Zhang authored
      ## What changes were proposed in this pull request?
      self.environment will be propagated to the executors. PYTHONHASHSEED should be set as long as the Python version is greater than 3.3.
      
      ## How was this patch tested?
      Manually tested it.
      
      Author: Jeff Zhang <zjffdu@apache.org>
      
      Closes #11211 from zjffdu/SPARK-13330.
      330c3e33