Commit 523abfe1 authored by Artur Sukhenko, committed by Reynold Xin

[YARN][DOC] Increasing NodeManager's heap size with External Shuffle Service

## What changes were proposed in this pull request?

Suggest that users increase the `NodeManager`'s heap size when the `External Shuffle Service` is enabled, because the
`NM` can spend a lot of time doing GC, which inflates `Shuffle Read blocked time` and turns shuffle operations into a bottleneck.
The GC pressure can also make the `NodeManager` consume an enormous amount of CPU, hurting overall cluster performance.
I have seen a NodeManager using 5-13 GB of RAM and up to 2700% CPU with the `spark_shuffle` service on.
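
As a minimal sketch of the suggested change (the `2000` below is only an assumed example; the right size depends on your workload), raising the heap amounts to one line in `etc/hadoop/yarn-env.sh`:

```sh
# etc/hadoop/yarn-env.sh
# Raise the YARN daemon heap (in MB) above the 1000 MB default so the
# NodeManager hosting the external shuffle service spends less time in GC.
# 2000 is an assumed example value, not a recommendation from this patch.
YARN_HEAPSIZE=2000
```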

## How was this patch tested?

#### Added step 5:
![shuffle_service](https://cloud.githubusercontent.com/assets/15244468/20355499/2fec0fde-ac2a-11e6-8f8b-1c80daf71be1.png)

Author: Artur Sukhenko <artur.sukhenko@gmail.com>

Closes #15906 from Devian-ua/nmHeapSize.

(cherry picked from commit 55589987)
Signed-off-by: Reynold Xin <rxin@databricks.com>
parent 3d4756d5
@@ -559,6 +559,8 @@ pre-packaged distribution.
1. In the `yarn-site.xml` on each node, add `spark_shuffle` to `yarn.nodemanager.aux-services`,
then set `yarn.nodemanager.aux-services.spark_shuffle.class` to
`org.apache.spark.network.yarn.YarnShuffleService`.
1. Increase `NodeManager's` heap size by setting `YARN_HEAPSIZE` (1000 by default) in `etc/hadoop/yarn-env.sh`
to avoid garbage collection issues during shuffle.
1. Restart all `NodeManager`s in your cluster.
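
For reference, a minimal `yarn-site.xml` sketch of the aux-service wiring described in the steps above (if other aux-services are already configured, keep them in the comma-separated list; only `spark_shuffle` is shown here):

```xml
<!-- yarn-site.xml: register Spark's external shuffle service with the NodeManager -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```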
The following extra configuration options are available when the shuffle service is running on YARN: