From 188ea348fdcf877d86f3c433cd15f6468fe3b42a Mon Sep 17 00:00:00 2001
From: guoxi <guoxi@us.ibm.com>
Date: Thu, 22 Oct 2015 13:56:18 -0700
Subject: [PATCH] [SPARK-11242][SQL] In conf/spark-env.sh.template
 SPARK_DRIVER_MEMORY is documented incorrectly

Minor fix to the comments in conf/spark-env.sh.template.

Author: guoxi <guoxi@us.ibm.com>

Closes #9201 from xguo27/SPARK-11242.
---
 conf/spark-env.sh.template | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/conf/spark-env.sh.template b/conf/spark-env.sh.template
index 990ded420b..771251f90e 100755
--- a/conf/spark-env.sh.template
+++ b/conf/spark-env.sh.template
@@ -36,10 +36,10 @@
 
 # Options read in YARN client mode
 # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
-# - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2)
-# - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1).
-# - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
-# - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 1G)
+# - SPARK_EXECUTOR_INSTANCES, Number of executors to start (Default: 2)
+# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
+# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
+# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
 # - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
 # - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: ‘default’)
 # - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
-- 
GitLab
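
For reference, a minimal illustrative snippet showing how the variables documented above might be set in conf/spark-env.sh for YARN client mode; the specific values (path, executor counts, core counts, memory sizes, queue name) are placeholder assumptions for the sketch, not defaults taken from the patch:

    # Illustrative settings for YARN client mode (values are examples only)
    export HADOOP_CONF_DIR=/etc/hadoop/conf    # assumed location of Hadoop configuration files
    export SPARK_EXECUTOR_INSTANCES=4          # number of executors to start (default: 2)
    export SPARK_EXECUTOR_CORES=2              # cores per executor (default: 1)
    export SPARK_EXECUTOR_MEMORY=4G            # memory per executor (default: 1G)
    export SPARK_DRIVER_MEMORY=2G              # memory for the driver (default: 1G)
    export SPARK_YARN_QUEUE=default            # hadoop queue for allocation requests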