Merge pull request #359 from ScrapCodes/clone-writables
We clone Hadoop keys and values by default and reuse objects only if asked to. For the most common Writable types we clone directly, and otherwise we fall back to WritableUtils.clone. The intention is to optimize: for NullWritable no copy is needed at all, and for Long, Int, and String writables, constructing a new object with the value set should be faster than a generic copy. An alternative shape for this PR would let callers choose separately whether to clone the key and the value, but I could not think of a use case for that beyond one of them being a NullWritable, which is already handled, so that seemed unnecessary.
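For illustration, a minimal sketch of the cloning strategy described above, assuming Hadoop's `WritableUtils` and the common Writable wrapper types are on the classpath. The helper name `cloneWritable` is hypothetical, not the exact code added in this PR:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{IntWritable, LongWritable, NullWritable, Text, Writable, WritableUtils}

// Special-case the common wrapper types: building a fresh instance with the
// value set avoids the generic serialize/deserialize copy. NullWritable is a
// singleton, so there is nothing to copy at all.
def cloneWritable[T <: Writable](value: T, conf: Configuration): T = value match {
  case n: NullWritable => n.asInstanceOf[T]                       // singleton, return as-is
  case w: LongWritable => new LongWritable(w.get).asInstanceOf[T] // cheap value copy
  case w: IntWritable  => new IntWritable(w.get).asInstanceOf[T]  // cheap value copy
  case w: Text         => new Text(w).asInstanceOf[T]             // Text has a copy constructor
  case other           => WritableUtils.clone(other, conf)        // generic fallback
}
```

The fallback, WritableUtils.clone, copies a record by serializing and deserializing it, which works for any Writable but is more expensive, which is why the common cases are worth special-casing.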
Showing 4 changed files with 106 additions and 49 deletions
- core/src/main/scala/org/apache/spark/SparkContext.scala (44 additions, 34 deletions)
- core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala (20 additions, 9 deletions)
- core/src/main/scala/org/apache/spark/rdd/NewHadoopRDD.scala (16 additions, 4 deletions)
- core/src/main/scala/org/apache/spark/util/Utils.scala (26 additions, 2 deletions)