Commit 9d225a91 authored by Chen Chao, committed by Reynold Xin

update proportion of memory

The default value of "spark.storage.memoryFraction" has been changed from 0.66 to 0.6, so 60% of the memory is now used for caching RDDs while 40% is available for task execution.

Author: Chen Chao <crazyjvm@gmail.com>

Closes #66 from CrazyJvm/master and squashes the following commits:

0f84d86 [Chen Chao] update proportion of memory
parent 369aad6f
@@ -163,8 +163,8 @@ their work directories), *not* on your driver program.
 **Cache Size Tuning**
 One important configuration parameter for GC is the amount of memory that should be used for caching RDDs.
-By default, Spark uses 66% of the configured executor memory (`spark.executor.memory` or `SPARK_MEM`) to
-cache RDDs. This means that 33% of memory is available for any objects created during task execution.
+By default, Spark uses 60% of the configured executor memory (`spark.executor.memory` or `SPARK_MEM`) to
+cache RDDs. This means that 40% of memory is available for any objects created during task execution.
 In case your tasks slow down and you find that your JVM is garbage-collecting frequently or running out of
 memory, lowering this value will help reduce the memory consumption. To change this to say 50%, you can call
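The hunk is truncated before the documentation shows the call itself. As a hedged sketch (not part of this commit), setting this fraction through the `SparkConf` API available since Spark 0.9 might look like the following; the app name, master URL, and memory values are illustrative assumptions:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative sketch only: "CacheTuningExample", local[2], and 4g are
// placeholder values, not settings taken from this commit.
val conf = new SparkConf()
  .setAppName("CacheTuningExample")
  .setMaster("local[2]")                        // local master so the example is self-contained
  .set("spark.executor.memory", "4g")           // heap available to each executor
  .set("spark.storage.memoryFraction", "0.5")   // cache RDDs in 50% of that heap
val sc = new SparkContext(conf)

// With a 4g heap and a 0.5 fraction, roughly 2g is reserved for cached RDDs
// and the remaining 2g is left for objects created during task execution.
```

Lowering the fraction trades cache capacity for more headroom during task execution, which is exactly the tuning knob the surrounding paragraph describes.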