    [SPARK-3030] [PySpark] Reuse Python worker
    Reuse the Python worker to avoid the overhead of forking a new Python process for each task. It also tracks the broadcasts already sent to each worker, to avoid sending the same broadcast repeatedly.
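    As a rough illustration of the broadcast tracking (a hypothetical sketch, not Spark's actual internals), the idea is to remember, for each long-lived worker, which broadcast ids it has already received and only ship the ones that are new:
    ```
    # Hypothetical sketch of per-worker broadcast tracking; names and structure
    # are illustrative only and do not mirror Spark's real implementation.
    worker_broadcasts = {}  # worker id -> set of broadcast ids already sent

    def broadcasts_to_send(worker_id, task_broadcast_ids):
        """Return only the broadcast ids this worker has not seen yet."""
        seen = worker_broadcasts.setdefault(worker_id, set())
        new_ids = [bid for bid in task_broadcast_ids if bid not in seen]
        seen.update(new_ids)
        return new_ids
    ```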
    
    This reduces the time for a dummy task from 22 ms to 13 ms (about -40%), which helps reduce the latency of Spark Streaming.
    
    For a job with a broadcast variable (43 MB after compression):
    ```
        # run in the PySpark shell, where `sc` is the SparkContext
        b = sc.broadcast(set(range(30000000)))
        print(sc.parallelize(range(24000), 100).filter(lambda x: x in b.value).count())
    ```
    Without a reused worker the job finishes in 281 s; with a reused worker (4 CPUs) it finishes in 65 s. Reusing the worker saves about 9 seconds per task that would otherwise be spent transferring and deserializing the broadcast.
    
    It is enabled by default and can be disabled by setting `spark.python.worker.reuse = false`.
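    For example (a minimal sketch using the standard PySpark API), the setting can be applied programmatically when building the context:
    ```
    from pyspark import SparkConf, SparkContext

    # Worker reuse is on by default; set it to "false" to fork a fresh
    # Python worker for every task instead.
    conf = SparkConf().set("spark.python.worker.reuse", "false")
    sc = SparkContext(conf=conf)
    ```
    The same key can also be placed in spark-defaults.conf or passed to spark-submit via --conf.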
    
    Author: Davies Liu <davies.liu@gmail.com>
    
    Closes #2259 from davies/reuse-worker and squashes the following commits:
    
    f11f617 [Davies Liu] Merge branch 'master' into reuse-worker
    3939f20 [Davies Liu] fix bug in serializer in mllib
    cf1c55e [Davies Liu] address comments
    3133a60 [Davies Liu] fix accumulator with reused worker
    760ab1f [Davies Liu] do not reuse worker if there are any exceptions
    7abb224 [Davies Liu] refactor: sychronized with itself
    ac3206e [Davies Liu] renaming
    8911f44 [Davies Liu] synchronized getWorkerBroadcasts()
    6325fc1 [Davies Liu] bugfix: bid >= 0
    e0131a2 [Davies Liu] fix name of config
    583716e [Davies Liu] only reuse completed and not interrupted worker
    ace2917 [Davies Liu] kill python worker after timeout
    6123d0f [Davies Liu] track broadcasts for each worker
    8d2f08c [Davies Liu] reuse python worker