Commit 062c336d authored by jinxing, committed by Wenchen Fan

[SPARK-21343] Refine the document for spark.reducer.maxReqSizeShuffleToMem.

## What changes were proposed in this pull request?

In the current code, the reducer can break an old shuffle service when `spark.reducer.maxReqSizeShuffleToMem` is enabled. Let's refine the document to call this out.

Author: jinxing <jinxing6042@126.com>

Closes #18566 from jinxing64/SPARK-21343.
parent 9131bdb7
```diff
@@ -323,9 +323,11 @@ package object config {
   private[spark] val REDUCER_MAX_REQ_SIZE_SHUFFLE_TO_MEM =
     ConfigBuilder("spark.reducer.maxReqSizeShuffleToMem")
       .internal()
       .doc("The blocks of a shuffle request will be fetched to disk when size of the request is " +
-        "above this threshold. This is to avoid a giant request takes too much memory.")
+        "above this threshold. This is to avoid a giant request taking too much memory. We can " +
+        "enable this config by setting a specific value (e.g. 200m). Note that this config can " +
+        "be enabled only when the shuffle service is newer than Spark 2.2 or the shuffle " +
+        "service is disabled.")
       .bytesConf(ByteUnit.BYTE)
       .createWithDefault(Long.MaxValue)
```
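For context, here is a minimal sketch (not part of this commit) of how a user might turn this internal config on. The app name, the 200m threshold, and the explicit disabling of the external shuffle service are illustrative assumptions, chosen to stay within the constraint the doc change documents:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: fetch shuffle blocks to disk once a single fetch request
// exceeds 200m, instead of buffering the whole request in memory.
val spark = SparkSession.builder()
  .appName("maxReqSizeShuffleToMem-demo")                  // hypothetical app name
  .config("spark.reducer.maxReqSizeShuffleToMem", "200m")  // default is Long.MaxValue
  // Per the doc change above, this is only safe when the external shuffle
  // service is newer than Spark 2.2; here we simply keep it disabled.
  .config("spark.shuffle.service.enabled", "false")
  .getOrCreate()
```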
```diff
@@ -528,6 +528,16 @@ Apart from these, the following properties are also available, and may be useful
     By allowing it to limit the number of fetch requests, this scenario can be mitigated.
   </td>
 </tr>
+<tr>
+  <td><code>spark.reducer.maxReqSizeShuffleToMem</code></td>
+  <td>Long.MaxValue</td>
+  <td>
+    The blocks of a shuffle request will be fetched to disk when size of the request is above
+    this threshold. This is to avoid a giant request taking too much memory. We can enable this
+    config by setting a specific value (e.g. 200m). Note that this config can be enabled only when
+    the shuffle service is newer than Spark 2.2 or the shuffle service is disabled.
+  </td>
+</tr>
 <tr>
   <td><code>spark.shuffle.compress</code></td>
   <td>true</td>
```