Commit 5bbc621f authored by Andrew Or

[SPARK-3653] Respect SPARK_*_MEMORY for cluster mode

`SPARK_DRIVER_MEMORY` was only used to start the `SparkSubmit` JVM, which becomes the driver in client mode but not in cluster mode. In cluster mode, this setting is simply never propagated to the worker nodes.

`SPARK_EXECUTOR_MEMORY` is picked up by `SparkContext`, but in cluster mode the driver runs on one of the worker machines, where this environment variable may not be set.

Author: Andrew Or <andrewor14@gmail.com>

Closes #2500 from andrewor14/memory-env-vars and squashes the following commits:

6217b38 [Andrew Or] Respect SPARK_*_MEMORY for cluster mode

Conflicts:
	core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
parent ffd97be3
@@ -57,6 +57,10 @@ private[spark] class SparkSubmitArguments(args: Seq[String]) {
   var pyFiles: String = null
   val sparkProperties: HashMap[String, String] = new HashMap[String, String]()
 
+  // Respect SPARK_*_MEMORY for cluster mode
+  driverMemory = sys.env.get("SPARK_DRIVER_MEMORY").orNull
+  executorMemory = sys.env.get("SPARK_EXECUTOR_MEMORY").orNull
+
   parseOpts(args.toList)
   mergeSparkProperties()
   checkRequiredArguments()
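The fix reads the environment variables as defaults before parsing command-line options, so an explicit flag still wins. A minimal sketch of that fallback pattern, using a hypothetical `resolveMemory` helper and a made-up `"1g"` default (not Spark's actual resolution logic, which also consults `spark-defaults.conf`):

```scala
// Sketch of CLI-flag > environment-variable > default precedence.
// resolveMemory and the "1g" default are illustrative, not Spark API.
object MemoryResolution {
  def resolveMemory(
      cliValue: Option[String],          // e.g. value of --driver-memory, if given
      envVar: String,                    // e.g. "SPARK_DRIVER_MEMORY"
      env: Map[String, String] = sys.env,
      default: String = "1g"): String = {
    // The command-line flag takes precedence, then the environment
    // variable, then the default.
    cliValue.orElse(env.get(envVar)).getOrElse(default)
  }

  def main(args: Array[String]): Unit = {
    val env = Map("SPARK_DRIVER_MEMORY" -> "4g")
    // Env var applies when no CLI flag is given
    println(resolveMemory(None, "SPARK_DRIVER_MEMORY", env))       // 4g
    // An explicit --driver-memory value overrides the env var
    println(resolveMemory(Some("2g"), "SPARK_DRIVER_MEMORY", env)) // 2g
  }
}
```

In the patch itself the same precedence falls out of ordering: the env-var lookup runs first, and `parseOpts` later overwrites `driverMemory`/`executorMemory` if the corresponding flags were passed.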