From e7f4ea8a52f0d3d56684b4f9caadce978eac4816 Mon Sep 17 00:00:00 2001
From: WangTaoTheTonic <barneystinson@aliyun.com>
Date: Thu, 16 Oct 2014 19:12:39 -0700
Subject: [PATCH] [SPARK-3890][Docs]remove redundant spark.executor.memory in doc

Introduced in https://github.com/pwendell/spark/commit/f7e79bc42c1635686c3af01eef147dae92de2529, I'm not sure why we need two spark.executor.memory here.

Author: WangTaoTheTonic <barneystinson@aliyun.com>
Author: WangTao <barneystinson@aliyun.com>

Closes #2745 from WangTaoTheTonic/redundantconfig and squashes the following commits:

e7564dc [WangTao] too long line
fdbdb1f [WangTaoTheTonic] trivial workaround
d06b6e5 [WangTaoTheTonic] remove redundant spark.executor.memory in doc
---
 docs/configuration.md | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/docs/configuration.md b/docs/configuration.md
index 8515ee0451..f0204c640b 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -161,14 +161,6 @@ Apart from these, the following properties are also available, and may be useful
 #### Runtime Environment
 <table class="table">
 <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
-<tr>
-  <td><code>spark.executor.memory</code></td>
-  <td>512m</td>
-  <td>
-    Amount of memory to use per executor process, in the same format as JVM memory strings
-    (e.g. <code>512m</code>, <code>2g</code>).
-  </td>
-</tr>
 <tr>
   <td><code>spark.executor.extraJavaOptions</code></td>
   <td>(none)</td>
@@ -365,7 +357,7 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.ui.port</code></td>
   <td>4040</td>
   <td>
-    Port for your application's dashboard, which shows memory and workload data
+    Port for your application's dashboard, which shows memory and workload data.
   </td>
 </tr>
 <tr>
@@ -880,8 +872,8 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.scheduler.revive.interval</code></td>
   <td>1000</td>
   <td>
-    The interval length for the scheduler to revive the worker resource offers to run tasks.
-    (in milliseconds)
+    The interval length for the scheduler to revive the worker resource offers to run tasks
+    (in milliseconds).
   </td>
 </tr>
 </tr>
@@ -893,7 +885,7 @@ Apart from these, the following properties are also available, and may be useful
     to wait for before scheduling begins. Specified as a double between 0 and 1.
     Regardless of whether the minimum ratio of resources has been reached, the maximum amount
     of time it will wait before scheduling begins is controlled by config
-    <code>spark.scheduler.maxRegisteredResourcesWaitingTime</code>
+    <code>spark.scheduler.maxRegisteredResourcesWaitingTime</code>.
  </td>
 </tr>
 <tr>
-- 
GitLab
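
Note (not part of the patch): the properties touched above are ordinary Spark configuration keys. A minimal sketch of setting them programmatically, assuming a Spark 1.x-style SparkConf/SparkContext; the key names come from the diff, the values are illustrative only.

import org.apache.spark.{SparkConf, SparkContext}

object ConfExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("conf-example")
      .setMaster("local[*]")
      // JVM memory-string format (e.g. 512m, 2g), per the docs entry this patch de-duplicates.
      .set("spark.executor.memory", "512m")
      // Port for the application's dashboard (default 4040).
      .set("spark.ui.port", "4040")
      // Scheduler revive interval, in milliseconds (default 1000).
      .set("spark.scheduler.revive.interval", "1000")

    val sc = new SparkContext(conf)
    // Confirm the value actually picked up by the application.
    println(sc.getConf.get("spark.executor.memory"))
    sc.stop()
  }
}

The same keys can also be passed on the command line, e.g. --conf spark.executor.memory=512m with spark-submit.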