Commit a3cac7bd authored by Weiqing Yang, committed by Sean Owen

[YARN][DOC] Remove non-Yarn specific configurations from running-on-yarn.md

## What changes were proposed in this pull request?

Remove `spark.driver.memory`, `spark.executor.memory`, `spark.driver.cores`, and `spark.executor.cores` from `running-on-yarn.md`, as they are not YARN-specific and are also defined in `configuration.md`.

## How was this patch tested?
Build passed & manual check.

Author: Weiqing Yang <yangweiqing001@gmail.com>

Closes #15869 from weiqingy/yarnDoc.
parent 07b3f045
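For reference, the four removed properties are general Spark settings rather than YARN-specific ones, so they are normally supplied at submission time rather than configured per resource manager. A minimal sketch with `spark-submit` (the application class and jar names are hypothetical; the values are illustrative):

```sh
# In client mode the driver JVM is already running by the time SparkConf is read,
# so driver memory/cores must be given here or in the properties file, not in code
# (per the note in the removed spark.driver.memory row below).
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 2g \
  --driver-cores 2 \
  --executor-memory 4g \
  --executor-cores 2 \
  --class org.example.MyApp \
  my-app.jar
```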
@@ -117,28 +117,6 @@ To use a custom metrics.properties for the application master and executors, upd
Use lower-case suffixes, e.g. <code>k</code>, <code>m</code>, <code>g</code>, <code>t</code>, and <code>p</code>, for kibi-, mebi-, gibi-, tebi-, and pebibytes, respectively.
</td>
</tr>
<tr>
<td><code>spark.driver.memory</code></td>
<td>1g</td>
<td>
Amount of memory to use for the driver process, i.e. where SparkContext is initialized.
(e.g. <code>1g</code>, <code>2g</code>).
<br /><em>Note:</em> In client mode, this config must not be set through the <code>SparkConf</code>
directly in your application, because the driver JVM has already started at that point.
Instead, please set this through the <code>--driver-memory</code> command line option
or in your default properties file.
</td>
</tr>
<tr>
<td><code>spark.driver.cores</code></td>
<td><code>1</code></td>
<td>
Number of cores used by the driver in YARN cluster mode.
Since the driver is run in the same JVM as the YARN Application Master in cluster mode, this also controls the cores used by the YARN Application Master.
In client mode, use <code>spark.yarn.am.cores</code> to control the number of cores used by the YARN Application Master instead.
</td>
</tr>
<tr>
<td><code>spark.yarn.am.cores</code></td>
<td><code>1</code></td>
@@ -233,13 +211,6 @@ To use a custom metrics.properties for the application master and executors, upd
Comma-separated list of jars to be placed in the working directory of each executor.
</td>
</tr>
<tr>
<td><code>spark.executor.cores</code></td>
<td>1 in YARN mode, all the available cores on the worker in standalone mode.</td>
<td>
The number of cores to use on each executor. For YARN and standalone mode only.
</td>
</tr>
<tr>
<td><code>spark.executor.instances</code></td>
<td><code>2</code></td>
@@ -247,13 +218,6 @@ To use a custom metrics.properties for the application master and executors, upd
The number of executors for static allocation. With <code>spark.dynamicAllocation.enabled</code>, the initial set of executors will be at least this large.
</td>
</tr>
<tr>
<td><code>spark.executor.memory</code></td>
<td>1g</td>
<td>
Amount of memory to use per executor process (e.g. <code>2g</code>, <code>8g</code>).
</td>
</tr>
<tr>
<td><code>spark.yarn.executor.memoryOverhead</code></td>
<td>executorMemory * 0.10, with minimum of 384 </td>
......
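The removed `spark.driver.memory` note also points to the default properties file as an alternative; a sketch of the equivalent entries in `conf/spark-defaults.conf` (values illustrative):

```
# conf/spark-defaults.conf -- illustrative values only
spark.driver.memory    2g
spark.driver.cores     2
spark.executor.memory  4g
spark.executor.cores   2
```

Either way, these settings are documented in `configuration.md`, which is why the YARN page no longer needs to duplicate them.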