[SPARK-13162] Standalone mode does not respect initial executors
Currently the Master would always set an application's initial executor limit to infinity. If the user specified `spark.dynamicAllocation.initialExecutors`, the config would not take effect. This is similar to #11047 but for standalone mode.

Author: Andrew Or <andrew@databricks.com>

Closes #11054 from andrewor14/standalone-da-initial.
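For context, a minimal sketch of the configuration this change makes effective in standalone mode. The master URL, app name, and executor counts are illustrative placeholders, not part of the patch:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of a standalone-mode application using dynamic allocation.
// After this fix, the Master honors the initial executor count below
// instead of starting the app with an unbounded executor limit.
object InitialExecutorsExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("initial-executors-example")               // placeholder app name
      .setMaster("spark://master-host:7077")                 // placeholder standalone master URL
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")          // external shuffle service is required for dynamic allocation
      .set("spark.dynamicAllocation.minExecutors", "1")
      .set("spark.dynamicAllocation.initialExecutors", "4")  // the setting this change makes standalone mode respect
      .set("spark.dynamicAllocation.maxExecutors", "8")

    val sc = new SparkContext(conf)
    // ... run jobs; the application starts with 4 executors and then
    // scales within [1, 8] as load changes.
    sc.stop()
  }
}
```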
Showing 5 changed files with 34 additions and 6 deletions
- core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala (2 additions, 0 deletions)
- core/src/main/scala/org/apache/spark/deploy/ApplicationDescription.scala (3 additions, 0 deletions)
- core/src/main/scala/org/apache/spark/deploy/master/ApplicationInfo.scala (1 addition, 1 deletion)
- core/src/main/scala/org/apache/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala (12 additions, 4 deletions)
- core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala (16 additions, 1 deletion)