End runJob() with a SparkException when a task fails too many times in
one of the cluster schedulers.
Showing 5 changed files with 66 additions and 12 deletions
- core/src/main/scala/spark/scheduler/DAGScheduler.scala — 58 additions, 12 deletions
- core/src/main/scala/spark/scheduler/DAGSchedulerEvent.scala — 2 additions, 0 deletions
- core/src/main/scala/spark/scheduler/TaskSchedulerListener.scala — 3 additions, 0 deletions
- core/src/main/scala/spark/scheduler/TaskSet.scala — 2 additions, 0 deletions
- core/src/main/scala/spark/scheduler/cluster/TaskSetManager.scala — 1 addition, 0 deletions
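From the caller's perspective, the change means a job whose tasks exhaust their retries now surfaces as a `SparkException` thrown from the blocking `runJob()` call, rather than leaving the driver hanging. A minimal sketch of how a user program might observe this, using the Spark API of that era (the `spark` package, `SparkContext`, and the failing closure here are illustrative, not taken from this commit):

```scala
import spark.{SparkContext, SparkException}

object FailureDemo {
  def main(args: Array[String]) {
    val sc = new SparkContext("local", "FailureDemo")
    try {
      // Every task throws, so the TaskSetManager eventually gives up;
      // the DAGScheduler then aborts the stage and runJob() (invoked
      // under the hood by count()) throws a SparkException.
      sc.parallelize(1 to 10)
        .map { _ => throw new RuntimeException("boom"); 0 }
        .count()
    } catch {
      case e: SparkException =>
        println("Job aborted: " + e.getMessage)
    } finally {
      sc.stop()
    }
  }
}
```

The design point is that abort is delivered as a `TaskSetFailed`-style event into the DAGScheduler's event loop (hence the additions to `DAGSchedulerEvent.scala` and `TaskSchedulerListener.scala`), so the waiting `runJob()` caller is unblocked with an exception instead of waiting forever.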