[SPARK-21527][CORE] Use buffer limit in order to use JAVA NIO Util's buffercache
## What changes were proposed in this pull request?

Right now, `ChunkedByteBuffer#writeFully` does not slice the bytes before writing. Observe the code in Java NIO's `Util#getTemporaryDirectBuffer` below:

```java
BufferCache cache = bufferCache.get();
ByteBuffer buf = cache.get(size);
if (buf != null) {
    return buf;
} else {
    // No suitable buffer in the cache so we need to allocate a new
    // one. To avoid the cache growing then we remove the first
    // buffer from the cache and free it.
    if (!cache.isEmpty()) {
        buf = cache.removeFirst();
        free(buf);
    }
    return ByteBuffer.allocateDirect(size);
}
```

If we slice the buffer into fixed-size chunks first, every write requests the same `size`, so the buffer cache can be used and a direct buffer only needs to be allocated on the first write call. When a new buffer is allocated instead, we cannot control when it is freed; this once caused a memory issue in our production cluster. In this patch, I add a new API that slices the buffer with a fixed size for writing.

## How was this patch tested?

Unit tests, and testing in production.

Author: zhoukang <zhoukang199191@gmail.com>
Author: zhoukang <zhoukang@xiaomi.com>

Closes #18730 from caneGuy/zhoukang/improve-chunkwrite.
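The slicing approach described above can be sketched as follows. This is a minimal Java illustration, not the patch itself: `CHUNK_SIZE` and the `writeFully` helper are hypothetical stand-ins for the config value and Scala method this PR adds. Bounding each `write` to a fixed chunk size via `ByteBuffer#limit` means every call to the channel sees the same `remaining()` size, so NIO's temporary direct-buffer cache can serve every call after the first.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class ChunkedWrite {
    // Hypothetical fixed chunk size; the real patch reads this from a new config entry.
    static final int CHUNK_SIZE = 64 * 1024;

    // Write `bytes` to `channel` in slices of at most CHUNK_SIZE. Shrinking the
    // limit caps remaining() for each write, so the temporary direct buffer NIO
    // allocates internally has a fixed size and is reused from the cache.
    static void writeFully(WritableByteChannel channel, ByteBuffer bytes) throws IOException {
        int originalLimit = bytes.limit();
        while (bytes.hasRemaining()) {
            // Bound this slice to CHUNK_SIZE by temporarily lowering the limit.
            bytes.limit(Math.min(bytes.position() + CHUNK_SIZE, originalLimit));
            while (bytes.hasRemaining()) {
                channel.write(bytes);
            }
            bytes.limit(originalLimit); // restore for the next slice
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        WritableByteChannel channel = Channels.newChannel(out);
        writeFully(channel, ByteBuffer.wrap(new byte[200 * 1024]));
        System.out.println(out.size()); // prints 204800
    }
}
```

With a heap `ByteBuffer`, each `channel.write` internally copies through a direct buffer sized to `remaining()`; without the limit trick, one huge direct buffer would be allocated per large write and evicted from the cache.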
Showing 2 changed files:

- core/src/main/scala/org/apache/spark/internal/config/package.scala (9 additions, 0 deletions)
- core/src/main/scala/org/apache/spark/util/io/ChunkedByteBuffer.scala (10 additions, 1 deletion)