From 960298ee66b9b8a80f84df679ce5b4b3846267f4 Mon Sep 17 00:00:00 2001
From: sadikovi <ivan.sadikov@lincolnuni.ac.nz>
Date: Wed, 5 Jul 2017 14:40:44 +0100
Subject: [PATCH] [SPARK-20858][DOC][MINOR] Document ListenerBus event queue
 size

## What changes were proposed in this pull request?

This change adds the new configuration option
`spark.scheduler.listenerbus.eventqueue.capacity` to the configuration docs to
specify the capacity of the Spark listener bus event queue. The default value
is 10000.

This is a doc PR for [SPARK-15703](https://issues.apache.org/jira/browse/SPARK-15703).
I added the option to the `Scheduling` section; however, it might be more
related to the `Spark UI` section.

## How was this patch tested?

Manually verified correct rendering of the configuration option.

Author: sadikovi <ivan.sadikov@lincolnuni.ac.nz>
Author: Ivan Sadikov <ivan.sadikov@team.telstra.com>

Closes #18476 from sadikovi/SPARK-20858.
---
 docs/configuration.md | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/docs/configuration.md b/docs/configuration.md
index bd6a1f9e24..c785a664c6 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -725,7 +725,7 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.ui.retainedJobs</code></td>
   <td>1000</td>
   <td>
-    How many jobs the Spark UI and status APIs remember before garbage collecting.
+    How many jobs the Spark UI and status APIs remember before garbage collecting. This is a target maximum, and fewer elements may be retained in some circumstances.
   </td>
 </tr>
 <tr>
@@ -733,7 +733,7 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.ui.retainedStages</code></td>
   <td>1000</td>
   <td>
-    How many stages the Spark UI and status APIs remember before garbage collecting.
+    How many stages the Spark UI and status APIs remember before garbage collecting. This is a target maximum, and fewer elements may be retained in some circumstances.
   </td>
 </tr>
 <tr>
@@ -741,7 +741,7 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.ui.retainedTasks</code></td>
   <td>100000</td>
   <td>
-    How many tasks the Spark UI and status APIs remember before garbage collecting.
+    How many tasks the Spark UI and status APIs remember before garbage collecting. This is a target maximum, and fewer elements may be retained in some circumstances.
   </td>
 </tr>
 <tr>
@@ -1389,6 +1389,15 @@ Apart from these, the following properties are also available, and may be useful
     The interval length for the scheduler to revive the worker resource offers to run tasks.
   </td>
 </tr>
+<tr>
+  <td><code>spark.scheduler.listenerbus.eventqueue.capacity</code></td>
+  <td>10000</td>
+  <td>
+    Capacity for event queue in Spark listener bus, must be greater than 0. Consider increasing
+    value (e.g. 20000) if listener events are dropped. Increasing this value may result in the
+    driver using more memory.
+  </td>
+</tr>
 <tr>
   <td><code>spark.blacklist.enabled</code></td>
   <td>
@@ -1475,8 +1484,8 @@ Apart from these, the following properties are also available, and may be useful
   <td><code>spark.blacklist.application.fetchFailure.enabled</code></td>
   <td>false</td>
   <td>
-    (Experimental) If set to "true", Spark will blacklist the executor immediately when a fetch
-    failure happenes. If external shuffle service is enabled, then the whole node will be
+    (Experimental) If set to "true", Spark will blacklist the executor immediately when a fetch
+    failure happenes. If external shuffle service is enabled, then the whole node will be
     blacklisted.
   </td>
 </tr>
--
GitLab
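For illustration only (not part of the patch), a minimal Scala sketch of setting the documented property through `SparkConf`; the application name, object name, and the value `20000` are hypothetical, and it assumes a Spark build that recognizes `spark.scheduler.listenerbus.eventqueue.capacity`:

```scala
// Minimal sketch: raising the listener bus event queue capacity documented above.
// The app name and the value 20000 are illustrative, not prescribed by the patch.
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

object ListenerBusQueueCapacityExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("listenerbus-queue-capacity-example")
      // Default is 10000; consider a larger value if the driver logs report
      // dropped listener events. A bigger queue uses more driver memory.
      .set("spark.scheduler.listenerbus.eventqueue.capacity", "20000")

    val spark = SparkSession.builder().config(conf).getOrCreate()
    spark.range(1000).count() // any job; its scheduler events pass through the listener bus
    spark.stop()
  }
}
```

The same property can also be supplied at submit time with `--conf spark.scheduler.listenerbus.eventqueue.capacity=20000` or placed in `conf/spark-defaults.conf`, like any other Spark configuration property.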