Commit 9fc16a82 authored by Kay Ousterhout

[SPARK-11306] Fix hang when JVM exits.

This commit fixes a bug where, in Standalone mode, if a task fails and crashes the JVM, the
failure is considered a "normal failure" (meaning it's treated as unrelated to the task itself), so
the failure isn't counted against the task's maximum number of failures:
https://github.com/apache/spark/commit/af3bc59d1f5d9d952c2d7ad1af599c49f1dbdaf0#diff-a755f3d892ff2506a7aa7db52022d77cL138.
As a result, a task that fails by crashing the JVM is continuously re-launched, resulting in a
hang. This commit fixes that problem.
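
To illustrate why the flag matters, here is a minimal sketch (not Spark's actual TaskSetManager logic; FailureTracker, recordExecutorLoss, and maxTaskFailures are hypothetical names used only for illustration): a failure counter that skips executor exits flagged as "normal" never reaches the task's maximum failure count, so the task keeps getting re-launched.

```scala
// A minimal sketch (not Spark's actual scheduler code) of how an isNormalExit
// flag feeds into task-failure counting. FailureTracker, recordExecutorLoss,
// and maxTaskFailures are hypothetical names used only for illustration.
sealed trait ExecutorLossReason
case class ExecutorExited(exitCode: Int, isNormalExit: Boolean, message: String)
  extends ExecutorLossReason
case class SlaveLost(message: String) extends ExecutorLossReason

class FailureTracker(maxTaskFailures: Int) {
  private var failureCount = 0

  def recordExecutorLoss(reason: ExecutorLossReason): Unit = reason match {
    // A "normal" exit is treated as unrelated to the task, so it is not counted.
    // If a task-induced JVM crash is mislabeled as normal, the count never grows
    // and the task is re-launched indefinitely.
    case ExecutorExited(_, true, _) => ()
    case _                          => failureCount += 1
  }

  def shouldAbort: Boolean = failureCount >= maxTaskFailures
}
```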

This bug was introduced by #8007; andrewor14, mccheah, vanzin, can you take a look at this?

This error is hard to trigger because we handle executor losses through two code paths (the second is via Akka, where Akka notices that the executor endpoint is disconnected). In my setup, the Akka code path completes first and doesn't have this bug, so things work fine (see my recent email to the dev list about this). If I manually disable the Akka code path, I can see the hang (and this commit fixes the issue).

Author: Kay Ousterhout <kayousterhout@gmail.com>

Closes #9273 from kayousterhout/SPARK-11306.
parent 360ed832
@@ -137,7 +137,7 @@ private[spark] class SparkDeploySchedulerBackend(
   override def executorRemoved(fullId: String, message: String, exitStatus: Option[Int]) {
     val reason: ExecutorLossReason = exitStatus match {
-      case Some(code) => ExecutorExited(code, isNormalExit = true, message)
+      case Some(code) => ExecutorExited(code, isNormalExit = false, message)
       case None => SlaveLost(message)
     }
     logInfo("Executor %s removed: %s".format(fullId, message))