[SPARK-21408][CORE] Better default number of RPC dispatch threads.
Instead of using the host's CPU count, use the number of cores allocated to the Spark process when sizing the RPC dispatch thread pool. This avoids creating large thread pools on large machines when the number of allocated cores is small.

Tested by verifying the number of threads with spark.executor.cores set to 1 and 4; same for the YARN AM.

Author: Marcelo Vanzin <vanzin@cloudera.com>

Closes #18639 from vanzin/SPARK-21408.
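The sizing logic described above can be sketched as follows. This is a minimal illustration, not the exact patched code: the helper name `dispatcherThreads` and its parameters are hypothetical, but the pattern matches the description — prefer the allocated core count when it is known, fall back to the host's CPU count otherwise, and keep a small floor so the dispatcher never starves.

```scala
object DispatcherSizing {
  // Hypothetical helper illustrating the commit's approach.
  // numUsableCores: cores allocated to this Spark process (0 = unknown).
  // configured: an explicit thread-count override, if the user set one.
  def dispatcherThreads(numUsableCores: Int, configured: Option[Int]): Int = {
    // Prefer the allocated core count; only fall back to the host's
    // CPU count when no allocation information is available.
    val availableCores =
      if (numUsableCores > 0) numUsableCores
      else Runtime.getRuntime.availableProcessors()
    // Keep a floor of 2 threads so the dispatcher can always make progress.
    configured.getOrElse(math.max(2, availableCores))
  }
}
```

With spark.executor.cores = 1 this yields 2 dispatch threads, and with 4 allocated cores it yields 4, rather than scaling with the machine's full CPU count.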
Showing 6 changed files:
- core/src/main/scala/org/apache/spark/SparkEnv.scala (1 addition, 1 deletion)
- core/src/main/scala/org/apache/spark/rpc/RpcEnv.scala (4 additions, 2 deletions)
- core/src/main/scala/org/apache/spark/rpc/netty/Dispatcher.scala (7 additions, 2 deletions)
- core/src/main/scala/org/apache/spark/rpc/netty/NettyRpcEnv.scala (4 additions, 3 deletions)
- core/src/test/scala/org/apache/spark/rpc/netty/NettyRpcEnvSuite.scala (2 additions, 2 deletions)
- resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala (4 additions, 2 deletions)