  1. Sep 19, 2016
    • [SPARK-17163][ML] Unified LogisticRegression interface · 26145a5a
      sethah authored
      ## What changes were proposed in this pull request?
      
      Merge `MultinomialLogisticRegression` into `LogisticRegression` and remove `MultinomialLogisticRegression`.
      
      Marked as WIP because we should discuss the coefficients API in the model. See discussion below.
      
      JIRA: [SPARK-17163](https://issues.apache.org/jira/browse/SPARK-17163)
      
      ## How was this patch tested?
      
      Merged test suites and added some new unit tests.
      
      ## Design
      
      ### Switching between binomial and multinomial
      
      We default to automatically detecting whether we should run binomial or multinomial logistic regression. We expose a new parameter called `family`, which defaults to "auto". When "auto" is used, we run normal binomial LOR with pivoting if there are 1 or 2 label classes, and multinomial LOR otherwise. If the user explicitly sets the family, we abide by that setting. If "binomial" is set but multiclass data is detected, we throw an error.
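
      A hedged sketch of the "auto" resolution logic described above (illustrative names, not the actual implementation):

      ```scala
      // Resolve the effective family from the `family` param and the number of
      // label classes detected in the data (sketch only).
      def resolveFamily(family: String, numClasses: Int): String = family.toLowerCase match {
        case "auto" =>
          if (numClasses <= 2) "binomial" else "multinomial"
        case "binomial" =>
          require(numClasses <= 2,
            s"Binomial family cannot handle $numClasses classes in the label column.")
          "binomial"
        case "multinomial" => "multinomial"
        case other => throw new IllegalArgumentException(s"Unsupported family: $other")
      }
      ```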
      
      ### coefficients/intercept model API (TODO)
      
      This is the biggest design point remaining, IMO. We need to decide how to store the coefficients and intercepts in the model, and in turn how to expose them via the API. Two important points:
      
      * We must maintain compatibility with the old API, i.e. we must expose `def coefficients: Vector` and `def intercept: Double`
      * There are two separate cases: binomial LOR, where we have a single set of coefficients and a single intercept, and multinomial LOR, where we have `numClasses` sets of coefficients and `numClasses` intercepts.
      
      Some options:
      
      1. **Store the binomial coefficients as a `2 x numFeatures` matrix.** This means that we would center the model coefficients before storing them in the model. The BLOR algorithm gives `1 x numFeatures` coefficients, but we would convert them to `2 x numFeatures` coefficients before storing them, effectively doubling the storage in the model. This has the advantage that we can make the code cleaner (i.e. less `if (isMultinomial) ... else ...`) and we don't have to reason about the different cases as much. It has the disadvantage that we double the storage space and we could see small regressions at prediction time since there are 2x the number of operations in the prediction algorithms. Additionally, we still have to produce the uncentered coefficients/intercept via the API, so we will have to either ALSO store the uncentered version, or compute it in `def coefficients: Vector` every time.
      
      2. **Store the binomial coefficients as a `1 x numFeatures` matrix.** We still store the coefficients as a matrix and the intercepts as a vector. When users call `coefficients` we return them a `Vector` that is backed by the same underlying array as the `coefficientMatrix`, so we don't duplicate any data. At prediction time, we use the old prediction methods that are specialized for binary LOR. The benefits here are that we don't store extra data, and we won't see any regressions in performance. The cost of this is that we have separate implementations for predict methods in the binary vs multiclass case. The duplicated code is really not very high, but it's still a bit messy.
      
      If we do decide to store the 2x coefficients, we would likely want to see some performance tests to understand the potential regressions.
      
      **Update:** We have chosen option 2.
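
      A minimal sketch of what option 2 implies for the API, assuming illustrative field names (`coefficientMatrix`, `interceptVector`) rather than the actual model code:

      ```scala
      import org.apache.spark.ml.linalg.{DenseMatrix, Vector, Vectors}

      // Binomial case: the 1 x numFeatures matrix and the exposed Vector share
      // the same underlying array, so no data is duplicated.
      class BinomialModelSketch(val coefficientMatrix: DenseMatrix, val interceptVector: Vector) {
        def coefficients: Vector = {
          require(coefficientMatrix.numRows == 1, "only defined for binomial models")
          Vectors.dense(coefficientMatrix.values) // backed by the matrix's array
        }
        def intercept: Double = interceptVector(0)
      }
      ```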
      
      ### Threshold/thresholds (TODO)
      
      Currently, when `threshold` is set we clear whatever value is in `thresholds`, and when `thresholds` is set we clear whatever value is in `threshold`. [SPARK-11543](https://issues.apache.org/jira/browse/SPARK-11543) was created to prefer `thresholds` over `threshold`. We should decide whether to implement this behavior now or in a separate JIRA.
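
      A sketch of the current mutual-clearing behavior, with illustrative plumbing rather than the actual `Params` code:

      ```scala
      // Setting one side clears the other, so at most one is ever defined.
      var threshold: Option[Double] = None
      var thresholds: Option[Array[Double]] = None

      def setThreshold(t: Double): Unit = { threshold = Some(t); thresholds = None }
      def setThresholds(ts: Array[Double]): Unit = { thresholds = Some(ts); threshold = None }
      ```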
      
      **Update:** Let's leave it for a follow up PR
      
      ## Follow up
      
      * Summary model for multiclass logistic regression [SPARK-17139](https://issues.apache.org/jira/browse/SPARK-17139)
      * Thresholds vs threshold [SPARK-11543](https://issues.apache.org/jira/browse/SPARK-11543)
      
      Author: sethah <seth.hendrickson16@gmail.com>
      
      Closes #14834 from sethah/SPARK-17163.
    • [SPARK-17160] Properly escape field names in code-generated error messages · e719b1c0
      Josh Rosen authored
      This patch addresses a corner-case escaping bug where field names which contain special characters were unsafely interpolated into error message string literals in generated Java code, leading to compilation errors.
      
      This patch addresses these issues by using `addReferenceObj` to store the error messages as string fields rather than inline string constants.
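
      An illustration of the bug class, with the `addReferenceObj` call sketched from the description above rather than copied from the codegen API:

      ```scala
      // A field name containing a quote breaks an inlined Java string literal:
      val fieldName = "a\"b"
      val inlined = s"""throw new RuntimeException("failed on field $fieldName");"""
      // `inlined` is no longer valid Java source. Storing the message as a
      // referenced object sidesteps escaping entirely (sketch):
      //   val ref = ctx.addReferenceObj("errMsg", s"failed on field $fieldName")
      //   val code = s"throw new RuntimeException($ref);"
      ```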
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15156 from JoshRosen/SPARK-17160.
    • [SPARK-17100] [SQL] fix Python udf in filter on top of outer join · d8104158
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      In the optimizer, we try to evaluate the condition to see whether it's nullable or not, but some expressions are not evaluable, so we should check that before evaluating them.
      
      ## How was this patch tested?
      
      Added regression tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15103 from davies/udf_join.
    • [SPARK-16439] [SQL] bring back the separator in SQL UI · e0632062
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Currently, the SQL metrics look like `number of rows: 111111111111`, which makes it very hard to read how large the number is. A separator was added by #12425 but removed by #14142 because it looked weird in some locales (for example, pl_PL). This PR adds it back, always using "," as the separator, since the SQL UI is entirely in English.
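
      A small sketch of the locale-independent formatting this implies (helper name assumed):

      ```scala
      import java.text.NumberFormat
      import java.util.Locale

      // Always format with the US locale so the grouping separator is ","
      // regardless of the JVM's default locale.
      def formatMetric(value: Long): String =
        NumberFormat.getIntegerInstance(Locale.US).format(value)

      formatMetric(111111111111L) // "111,111,111,111"
      ```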
      
      ## How was this patch tested?
      
      Existing tests.
      ![metrics](https://cloud.githubusercontent.com/assets/40902/14573908/21ad2f00-030d-11e6-9e2c-c544f30039ea.png)
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #15106 from davies/metric_sep.
    • [SPARK-17438][WEBUI] Show Application.executorLimit in the application page · 80d66559
      Shixiong Zhu authored
      ## What changes were proposed in this pull request?
      
      This PR adds `Application.executorLimit` to the application page.
      
      ## How was this patch tested?
      
      Checked the UI manually.
      
      Screenshots:
      
      1. Dynamic allocation is disabled
      
      <img width="484" alt="screen shot 2016-09-07 at 4 21 49 pm" src="https://cloud.githubusercontent.com/assets/1000778/18332029/210056ea-7518-11e6-9f52-76d96046c1c0.png">
      
      2. Dynamic allocation is enabled.
      
      <img width="466" alt="screen shot 2016-09-07 at 4 25 30 pm" src="https://cloud.githubusercontent.com/assets/1000778/18332034/2c07700a-7518-11e6-8fce-aebe25014902.png">
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #15001 from zsxwing/fix-core-info.
    • [SPARK-17473][SQL] fixing docker integration tests error due to different versions of jars. · cdea1d13
      sureshthalamati authored
      ## What changes were proposed in this pull request?
      Docker tests are using an older version of the Jersey jars (1.19), which was used in older releases of Spark. In the 2.0 releases, Spark was upgraded to use the 2.x version of Jersey. After the upgrade, the docker tests fail with `AbstractMethodError`. Now that Spark uses the 2.x Jersey version, the shaded docker jars may no longer be required. Removed the exclusions/overrides of Jersey-related classes from the pom file, and changed docker-client to use the regular jar instead of the shaded one.
      
      ## How was this patch tested?
      
      Tested using the existing docker-integration-tests.
      
      Author: sureshthalamati <suresh.thalamati@gmail.com>
      
      Closes #15114 from sureshthalamati/docker_testfix-spark-17473.
    • [SPARK-17297][DOCS] Clarify window/slide duration as absolute time, not relative to a calendar · d720a401
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Clarify that slide and window duration are absolute, and not relative to a calendar.
      
      ## How was this patch tested?
      
      Doc build (no functional change)
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15142 from srowen/SPARK-17297.
  2. Sep 18, 2016
    • [SPARK-17571][SQL] AssertOnQuery.condition should always return Boolean value · 8f0c35a4
      petermaxlee authored
      ## What changes were proposed in this pull request?
      AssertOnQuery has two apply constructors: one that accepts a closure returning Boolean, and another that accepts a closure returning Unit. This is actually very confusing because developers can mistakenly think that AssertOnQuery always requires a Boolean return type and verifies the returned result, when in fact the value of the last statement is ignored in one of the constructors.
      
      This pull request makes the two constructors consistent and always requires a Boolean value. Overall it makes the test suites more robust against developer errors.
      
      As evidence of the confusing behavior, this change also identified a bug in an existing test case due to file system time granularity. This pull request fixes that test case as well.
      
      ## How was this patch tested?
      This is a test only change.
      
      Author: petermaxlee <petermaxlee@gmail.com>
      
      Closes #15127 from petermaxlee/SPARK-17571.
    • [SPARK-16462][SPARK-16460][SPARK-15144][SQL] Make CSV cast null values properly · 1dbb725d
      Liwei Lin authored
      ## Problem
      
      CSV in Spark 2.0.0:
      - does not read null values back correctly for certain data types such as `Boolean`, `TimestampType`, `DateType` -- this is a regression compared to 1.6;
      - does not read empty values (specified by `options.nullValue`) as `null`s for `StringType` -- this is compatible with 1.6 but leads to problems like SPARK-16903.
      
      ## What changes were proposed in this pull request?
      
      This patch makes changes to read all empty values back as `null`s.
      
      ## How was this patch tested?
      
      New test cases.
      
      Author: Liwei Lin <lwlin7@gmail.com>
      
      Closes #14118 from lw-lin/csv-cast-null.
    • [SPARK-17586][BUILD] Do not call static member via instance reference · 7151011b
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR fixes a warning message as below:
      
      ```
      [WARNING] .../UnsafeInMemorySorter.java:284: warning: [static] static method should be qualified by type name, TaskMemoryManager, instead of by an expression
      [WARNING]       currentPageNumber = memoryManager.decodePageNumber(recordPointer)
      ```
      
      by referencing the static member via the class rather than an instance reference.
      
      ## How was this patch tested?
      
      Existing tests should cover this - Jenkins tests.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #15141 from HyukjinKwon/SPARK-17586.
    • [SPARK-17546][DEPLOY] start-* scripts should use hostname -f · 342c0e65
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Call `hostname -f` to get fully qualified host name
      
      ## How was this patch tested?
      
      Jenkins tests of course, but also verified output of command on OS X and Linux
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15129 from srowen/SPARK-17546.
    • [SPARK-17506][SQL] Improve the check double values equality rule. · 5d3f4615
      jiangxingbo authored
      ## What changes were proposed in this pull request?
      
      In `ExpressionEvalHelper`, we check the equality of two double values by testing whether the expected value is within the range [target - tolerance, target + tolerance], but this can produce a false negative when the compared values are very large.
      Before:
      ```
      val1 = 1.6358558070241E306
      val2 = 1.6358558070240974E306
      ExpressionEvalHelper.compareResults(val1, val2)
      false
      ```
      In fact, `val1` and `val2` represent the same value but with different precisions; we should tolerate this case by comparing within a percentage range, e.g., checking that the expected value is within the range [target - target * tolerance_percentage, target + target * tolerance_percentage].
      After:
      ```
      val1 = 1.6358558070241E306
      val2 = 1.6358558070240974E306
      ExpressionEvalHelper.compareResults(val1, val2)
      true
      ```
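
      A minimal sketch of such a relative comparison (helper name and default tolerance assumed, not the actual `ExpressionEvalHelper` code):

      ```scala
      def relativeEquals(expected: Double, actual: Double, tol: Double = 1e-12): Boolean =
        expected == actual ||
          math.abs(expected - actual) <= tol * math.max(math.abs(expected), math.abs(actual))

      relativeEquals(1.6358558070241E306, 1.6358558070240974E306) // true
      ```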
      
      ## How was this patch tested?
      
      Existing test cases.
      
      Author: jiangxingbo <jiangxb1987@gmail.com>
      
      Closes #15059 from jiangxb1987/deq.
    • [SPARK-17541][SQL] fix some DDL bugs about table management when same-name temp view exists · 3fe630d3
      Wenchen Fan authored
      ## What changes were proposed in this pull request?
      
      In `SessionCatalog`, we have several operations (`tableExists`, `dropTable`, `lookupRelation`, etc.) that handle both temp views and metastore tables/views. This brings some bugs to DDL commands that want to handle only temp views or only metastore tables/views. These bugs are:
      
      1. `CREATE TABLE USING` will fail if a same-name temp view exists
      2. `Catalog.dropTempView` will un-cache and drop a metastore table if a same-name table exists
      3. `saveAsTable` will fail or have unexpected behaviour if a same-name temp view exists.
      
      These bug fixes are pulled out from https://github.com/apache/spark/pull/14962 and target both master and the 2.0 branch.
      
      ## How was this patch tested?
      
      new regression tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #15099 from cloud-fan/fix-view.
    • [SPARK-17518][SQL] Block Users to Specify the Internal Data Source Provider Hive · 3a3c9ffb
      gatorsmile authored
      ### What changes were proposed in this pull request?
      In Spark 2.1, we introduced a new internal provider `hive` for telling Hive serde tables apart from data source tables. This PR blocks users from specifying this provider in the `DataFrameWriter` and SQL APIs.
      
      ### How was this patch tested?
      Added a test case
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #15073 from gatorsmile/formatHive.
  3. Sep 16, 2016
    • [SPARK-17549][SQL] Only collect table size stat in driver for cached relation. · 39e2bad6
      Marcelo Vanzin authored
      The existing code caches all stats for all columns for each partition in the driver; for a large relation, this causes extreme memory usage, which leads to GC hell and application failures.
      
      It seems that only the size in bytes of the data is actually used in the driver, so instead just collect that. In executors, the full stats are still kept, but that's not a big problem; we expect the data to be distributed and thus not to incur too much memory pressure in each individual executor.
      
      There are also potential improvements on the executor side, since the data
      being stored currently is very wasteful (e.g. storing boxed types vs.
      primitive types for stats). But that's a separate issue.
      
      As a mildly related change, I'm also adding code to catch exceptions in the
      code generator since Janino was breaking with the test data I tried this
      patch on.
      
      Tested with unit tests and by doing a count on a very wide table (20k columns) with many partitions.
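
      A hedged sketch of the idea (types are illustrative): the driver aggregates a single long per partition instead of retaining full per-column stats.

      ```scala
      case class PartitionStatsSketch(sizeInBytes: Long, columnStats: Map[String, Any])

      // Driver side: keep only the total size; full stats stay on executors.
      def driverSideSize(parts: Seq[PartitionStatsSketch]): Long =
        parts.iterator.map(_.sizeInBytes).sum
      ```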
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #15112 from vanzin/SPARK-17549.
    • [SPARK-17561][DOCS] DataFrameWriter documentation formatting problems · b9323fc9
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Fix `<ul> / <li>` problems in SQL scaladoc.
      
      ## How was this patch tested?
      
      Scaladoc build and manual verification of generated HTML.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15117 from srowen/SPARK-17561.
    • [SPARK-17558] Bump Hadoop 2.7 version from 2.7.2 to 2.7.3 · dca771be
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch bumps the Hadoop version in hadoop-2.7 profile from 2.7.2 to 2.7.3, which was recently released and contained a number of bug fixes.
      
      ## How was this patch tested?
      The change should be covered by existing tests.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #15115 from rxin/SPARK-17558.
    • [SPARK-17426][SQL] Refactor `TreeNode.toJSON` to avoid OOM when converting unknown fields to JSON · a425a37a
      Sean Zhong authored
      ## What changes were proposed in this pull request?
      
      This PR is a follow up of SPARK-17356. The current implementation of `TreeNode.toJSON` recursively converts all fields of a TreeNode to JSON, even if the field is of type `Seq` or `Map`. This may trigger an out-of-memory exception in cases like:
      
      1. The `Seq` or `Map` can be very big. Converting it to JSON may take huge amounts of memory and trigger an out-of-memory error.
      2. Some user-space input may also be propagated to the plan. User-space input can be of arbitrary type and may be self-referencing. Trying to print it to JSON may trigger an out-of-memory or stack-overflow error.
      
      For a code example, please check the Jira description of SPARK-17426.
      
      In this PR, we refactor the `TreeNode.toJSON` so that we only convert a field to JSON string if the field is a safe type.
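
      A hedged sketch of the "safe type" gate (the actual whitelist in the PR may differ):

      ```scala
      // Render a field only when its type is known to be bounded; anything
      // else gets a placeholder instead of being recursed into.
      def safeToJson(v: Any): String = v match {
        case null                => "null"
        case b: Boolean          => b.toString
        case n: java.lang.Number => n.toString
        case s: String           => "\"" + s.replace("\"", "\\\"") + "\""
        case _                   => "\"<unsupported>\"" // e.g. huge Seq/Map or user objects
      }
      ```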
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #14990 from clockfly/json_oom2.
    • [SPARK-17534][TESTS] Increase timeouts for DirectKafkaStreamSuite tests · fc1efb72
      Adam Roberts authored
      ## What changes were proposed in this pull request?
      There are two tests in this suite that are particularly flaky on the following hardware:
      
      2x Intel(R) Xeon(R) CPU E5-2697 v2 2.70GHz and 16 GB of RAM, 1 TB HDD
      
      This simple PR increases the timeouts and batch duration so the tests can reliably pass.
      
      ## How was this patch tested?
      Existing unit tests with the two-core box where I was seeing the failures often.
      
      Author: Adam Roberts <aroberts@uk.ibm.com>
      
      Closes #15094 from a-roberts/patch-6.
    • [SPARK-17543] Missing log4j config file for tests in common/network-… · b2e27262
      Jagadeesan authored
      ## What changes were proposed in this pull request?
      
      The Maven module `common/network-shuffle` does not have a log4j configuration file for its test cases. So, added `log4j.properties` in the directory `src/test/resources`.
      
      Author: Jagadeesan <as2@us.ibm.com>
      
      Closes #15108 from jagadeesanas2/SPARK-17543.
  4. Sep 15, 2016
    • [SPARK-17458][SQL] Alias specified for aggregates in a pivot are not honored · b72486f8
      Andrew Ray authored
      ## What changes were proposed in this pull request?
      
      This change preserves aliases that are given for pivot aggregations
      
      ## How was this patch tested?
      
      New unit test
      
      Author: Andrew Ray <ray.andrew@gmail.com>
      
      Closes #15111 from aray/SPARK-17458.
    • [SPARK-17484] Prevent invalid block locations from being reported after put() exceptions · 1202075c
      Josh Rosen authored
      ## What changes were proposed in this pull request?
      
      If a BlockManager `put()` call failed after the BlockManagerMaster was notified of a block's availability, then incomplete cleanup logic in a `finally` block would never send a second block status message to inform the master of the block's unavailability. This, in turn, leads to fetch failures and used to be capable of causing complete job failures before #15037 was fixed.
      
      This patch addresses this issue via multiple small changes:
      
      - The `finally` block now calls `removeBlockInternal` when cleaning up from a failed `put()` (see the sketch after this list); in addition to removing the `BlockInfo` entry (which was _all_ that the old cleanup logic did), this code (redundantly) tries to remove the block from the memory and disk stores (as an added layer of defense against bugs lower down in the stack) and optionally notifies the master of block removal (which now happens during exception-triggered cleanup).
      - When a BlockManager receives a request for a block that it does not have it will now notify the master to update its block locations. This ensures that bad metadata pointing to non-existent blocks will eventually be fixed. Note that I could have implemented this logic in the block manager client (rather than in the remote server), but that would introduce the problem of distinguishing between transient and permanent failures; on the server, however, we know definitively that the block isn't present.
      - Catch `NonFatal` instead of `Exception` to avoid swallowing `InterruptedException`s thrown from synchronous block replication calls.
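
      A hedged sketch of the failure-cleanup shape described in the first bullet; the method name follows the description above, but the surrounding code is illustrative:

      ```scala
      def putWithCleanup(blockId: String)(write: => Unit): Unit = {
        var succeeded = false
        try {
          write
          succeeded = true
        } finally {
          if (!succeeded) {
            // Remove the BlockInfo entry, drop any partial data from the memory
            // and disk stores, and tell the master so stale locations get fixed.
            removeBlockInternal(blockId, tellMaster = true)
          }
        }
      }

      def removeBlockInternal(blockId: String, tellMaster: Boolean): Unit =
        println(s"cleaning up $blockId (tellMaster=$tellMaster)")
      ```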
      
      This patch depends upon the refactorings in #15036, so that other patch will also have to be backported when backporting this fix.
      
      For more background on this issue, including example logs from a real production failure, see [SPARK-17484](https://issues.apache.org/jira/browse/SPARK-17484).
      
      ## How was this patch tested?
      
      Two new regression tests in BlockManagerSuite.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15085 from JoshRosen/SPARK-17484.
    • [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a... · a6b81820
      Sean Zhong authored
      [SPARK-17364][SQL] Antlr lexer wrongly treats full qualified identifier as a decimal number token when parsing SQL string
      
      ## What changes were proposed in this pull request?
      
      The Antlr lexer we use to tokenize a SQL string may wrongly tokenize a fully qualified identifier as a decimal number token. For example, table identifier `default.123_table` is wrongly tokenized as
      ```
      default // Matches lexer rule IDENTIFIER
      .123 // Matches lexer rule DECIMAL_VALUE
      _TABLE // Matches lexer rule IDENTIFIER
      ```
      
      The correct tokenization for `default.123_table` should be:
      ```
      default // Matches lexer rule IDENTIFIER,
      . // Matches a single dot
      123_TABLE // Matches lexer rule IDENTIFIER
      ```
      
      This PR fixes the Antlr grammar so that it can tokenize fully qualified identifiers correctly:
      1. Fully qualified table name can be parsed correctly. For example, `select * from database.123_suffix`.
      2. Fully qualified column name can be parsed correctly, for example `select a.123_suffix from a`.
      
      ### Before change
      
      #### Case 1: Failed to parse fully qualified column name
      
      ```
      scala> spark.sql("select a.123_column from a").show
      org.apache.spark.sql.catalyst.parser.ParseException:
      extraneous input '.123' expecting {<EOF>,
      ...
      , IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 8)
      == SQL ==
      select a.123_column from a
      --------^^^
      ```
      
      #### Case 2: Failed to parse fully qualified table name
      ```
      scala> spark.sql("select * from default.123_table")
      org.apache.spark.sql.catalyst.parser.ParseException:
      extraneous input '.123' expecting {<EOF>,
      ...
      IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 21)
      
      == SQL ==
      select * from default.123_table
      ---------------------^^^
      ```
      
      ### After Change
      
      #### Case 1: fully qualified column name, no ParseException thrown
      ```
      scala> spark.sql("select a.123_column from a").show
      ```
      
      #### Case 2: fully qualified table name, no ParseException thrown
      ```
      scala> spark.sql("select * from default.123_table")
      ```
      
      ## How was this patch tested?
      
      Unit test.
      
      Author: Sean Zhong <seanzhong@databricks.com>
      
      Closes #15006 from clockfly/SPARK-17364.
    • [SPARK-17429][SQL] use ImplicitCastInputTypes with function Length · fe767395
      岑玉海 authored
      ## What changes were proposed in this pull request?
      
      `select length(11);` and `select length(2.0);` currently return errors, but Hive handles them fine. This PR supports casting input types implicitly for the function `length`, so that the correct results are returned:
      
      * `select length(11)` returns 2
      * `select length(2.0)` returns 3
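
      A plain-Scala illustration of the intended behavior (the real change works through the analyzer's implicit casts, not a Scala helper):

      ```scala
      // Casting the input to a string first makes length well-defined for
      // numeric arguments.
      def lengthOf(v: Any): Int = String.valueOf(v).length

      lengthOf(11)  // 2
      lengthOf(2.0) // 3
      ```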
      
      Author: 岑玉海 <261810726@qq.com>
      Author: cenyuhai <cenyuhai@didichuxing.com>
      
      Closes #15014 from cenyuhai/SPARK-17429.
    • [SPARK-17114][SQL] Fix aggregates grouped by literals with empty input · d403562e
      Herman van Hovell authored
      ## What changes were proposed in this pull request?
      This PR fixes an issue with aggregates that have an empty input and use literals as their grouping keys. These aggregates are currently interpreted as aggregates **without** grouping keys, which triggers the ungrouped code path (which always returns a single row).
      
      This PR fixes the `RemoveLiteralFromGroupExpressions` optimizer rule, which changes the semantics of the Aggregate by eliminating all literal grouping keys.
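
      A hedged sketch of the rule's corrected intent (not the actual Catalyst code): drop literal grouping keys only while at least one key remains, so the aggregate stays grouped.

      ```scala
      def pruneGroupingKeys[A](keys: Seq[A], isLiteral: A => Boolean): Seq[A] = {
        val nonLiteral = keys.filterNot(isLiteral)
        if (nonLiteral.nonEmpty) nonLiteral else keys.take(1)
      }

      pruneGroupingKeys(Seq("lit(1)", "col(a)"), (k: String) => k.startsWith("lit")) // keeps col(a)
      pruneGroupingKeys(Seq("lit(1)"), (k: String) => k.startsWith("lit"))           // keeps lit(1)
      ```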
      
      ## How was this patch tested?
      Added tests to `SQLQueryTestSuite`.
      
      Author: Herman van Hovell <hvanhovell@databricks.com>
      
      Closes #15101 from hvanhovell/SPARK-17114-3.
    • [SPARK-17547] Ensure temp shuffle data file is cleaned up after error · 5b8f7377
      Josh Rosen authored
      SPARK-8029 (#9610) modified shuffle writers to first stage their data to a temporary file in the same directory as the final destination file and then to atomically rename this temporary file at the end of the write job. However, this change introduced the potential for the temporary output file to be leaked if an exception occurs during the write because the shuffle writers' existing error cleanup code doesn't handle deletion of the temp file.
      
      This patch avoids this potential cause of disk-space leaks by adding `finally` blocks to ensure that temp files are always deleted if they haven't been renamed.
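
      A hedged sketch of the write-temp-then-rename pattern with the added `finally` cleanup (names are illustrative):

      ```scala
      import java.io.File

      def writeAtomically(dest: File)(write: File => Unit): Unit = {
        val tmp = File.createTempFile("tmp_" + dest.getName, null, dest.getParentFile)
        var renamed = false
        try {
          write(tmp)
          renamed = tmp.renameTo(dest)
        } finally {
          // If the rename never happened (error or failed rename), don't leak
          // the temp file's disk space.
          if (!renamed && tmp.exists()) tmp.delete()
        }
      }
      ```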
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #15104 from JoshRosen/cleanup-tmp-data-file-in-shuffle-writer.
    • [SPARK-17379][BUILD] Upgrade netty-all to 4.0.41 final for bug fixes · 0ad8eeb4
      Adam Roberts authored
      ## What changes were proposed in this pull request?
      Upgrade netty-all to the latest in the 4.0.x line, 4.0.41, which mentions several bug fixes and performance improvements we may find useful; see netty.io/news/2016/08/29/4-0-41-Final-4-1-5-Final.html. I initially tried to use 4.1.5 but noticed it's not backwards compatible.
      
      ## How was this patch tested?
      Existing unit tests against branch-1.6 and branch-2.0 using IBM Java 8 on Intel, Power and Z architectures
      
      Author: Adam Roberts <aroberts@uk.ibm.com>
      
      Closes #14961 from a-roberts/netty.
    • [SPARK-17451][CORE] CoarseGrainedExecutorBackend should inform driver before self-kill · b4792781
      Tejas Patil authored
      ## What changes were proposed in this pull request?
      
      Jira : https://issues.apache.org/jira/browse/SPARK-17451
      
      `CoarseGrainedExecutorBackend` exits the JVM in some failure cases. While this is not an issue in itself, the driver UI does not capture a specific reason for the exit. In this PR, I am adding functionality to `exitExecutor` to notify the driver that the executor is exiting.
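
      A hedged sketch of the notify-then-exit shape (the RPC message and endpoint are assumed, not the actual `CoarseGrainedExecutorBackend` code):

      ```scala
      def exitExecutor(code: Int, reason: String): Unit = {
        try {
          // e.g. driverRef.send(RemoveExecutor(executorId, reason)) in the real code
          println(s"notifying driver before exit: $reason")
        } finally {
          System.exit(code)
        }
      }
      ```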
      
      ## How was this patch tested?
      
      Ran the change over a test env and took down the shuffle service before the executor could register with it. In the driver logs, where the job failure reason is mentioned (i.e. `Job aborted due to stage ...`), it gives the correct reason:
      
      Before:
      `ExecutorLostFailure (executor ZZZZZZZZZ exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.`
      
      After:
      `ExecutorLostFailure (executor ZZZZZZZZZ exited caused by one of the running tasks) Reason: Unable to create executor due to java.util.concurrent.TimeoutException: Timeout waiting for task.`
      
      Author: Tejas Patil <tejasp@fb.com>
      
      Closes #15013 from tejasapatil/SPARK-17451_inform_driver.
    • [SPARK-17406][BUILD][HOTFIX] MiMa excludes fix · 2ad27695
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Following https://github.com/apache/spark/pull/14969, for some reason the MiMa excludes weren't complete but still passed the PR builder. This adds 3 more excludes from https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.2/1749/consoleFull
      
      It also moves the excludes to their own Seq in the build, as they probably should have been.
      Even though this is merged to 2.1.x only / master, I left the exclude in for 2.0.x in case we back port. It's a private API so is always a false positive.
      
      ## How was this patch tested?
      
      Jenkins build
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #15110 from srowen/SPARK-17406.2.
    • [SPARK-17536][SQL] Minor performance improvement to JDBC batch inserts · 71a65825
      John Muller authored
      ## What changes were proposed in this pull request?
      
      Optimize a while loop during batch inserts
      
      ## How was this patch tested?
      
      Unit tests were done, specifically `mvn test` for sql.
      
      Author: John Muller <jmuller@us.imshealth.com>
      
      Closes #15098 from blue666man/SPARK-17536.
    • [SPARK-17406][WEB UI] limit timeline executor events · ad79fc0a
      cenyuhai authored
      ## What changes were proposed in this pull request?
      The job page is too slow to open when there are thousands of executor events (added or removed). I found that in the ExecutorsTab file, `executorIdToData` never removes elements, so it grows all the time. Before this PR, it looks like [timeline1.png](https://issues.apache.org/jira/secure/attachment/12827112/timeline1.png). After this PR, it looks like [timeline2.png](https://issues.apache.org/jira/secure/attachment/12827113/timeline2.png) (we can set how many executor events will be displayed).
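
      A hedged sketch of the capping behavior (the limit and buffer names are illustrative):

      ```scala
      import scala.collection.mutable

      val retainedExecutorEvents = 250 // assumed default; configurable per the PR
      val events = mutable.ArrayBuffer.empty[String]

      def addEvent(e: String): Unit = {
        events += e
        if (events.size > retainedExecutorEvents) events.remove(0) // drop oldest
      }
      ```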
      
      Author: cenyuhai <cenyuhai@didichuxing.com>
      
      Closes #14969 from cenyuhai/SPARK-17406.