[SPARK-16984][SQL] don't try whole dataset immediately when first partition doesn't have…
## What changes were proposed in this pull request?

Increase the number of partitions tried on each attempt so we don't immediately revert to scanning all partitions.

## How was this patch tested?

Empirically. This is a common-case optimization.

Author: Robert Kruszewski <robertk@palantir.com>

Closes #14573 from robert3005/robertk/execute-take-backoff.
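The idea is easiest to see as a loop: rather than falling back to a full scan when the first partitions come up short, each retry scans a bounded multiple of the previous attempt's partition count. Below is a minimal sketch of that backoff loop, not the actual `SparkPlan.executeTake` code; the `scan(start, end)` callback and the `scaleUpFactor` parameter are illustrative stand-ins for what the PR touches in `SparkPlan.scala` and `RDD.scala`.

```scala
// Illustrative sketch of take-with-backoff; `scan` stands in for running
// a job over partitions [start, end).
def takeWithBackoff[T](numPartitions: Int, limit: Int, scaleUpFactor: Int)(
    scan: (Int, Int) => Seq[T]): Seq[T] = {
  val buf = scala.collection.mutable.ArrayBuffer.empty[T]
  var scanned = 0
  var numToScan = 1
  while (buf.size < limit && scanned < numPartitions) {
    buf ++= scan(scanned, math.min(scanned + numToScan, numPartitions))
    scanned += numToScan
    // Instead of jumping straight to all remaining partitions, grow the
    // number of partitions scanned per attempt by a bounded factor.
    numToScan *= scaleUpFactor
  }
  buf.take(limit)
}
```

For example, with `scaleUpFactor = 4` the successive attempts scan 1, 4, 16, … partitions, so a `take` that is satisfied by the first few partitions never pays for a scan of the whole dataset.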
Showing 3 changed files:
- core/src/main/scala/org/apache/spark/rdd/RDD.scala (4 additions, 3 deletions)
- sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala (13 additions, 15 deletions)
- sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala (10 additions, 0 deletions)
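The small `SQLConf.scala` addition presumably registers the backoff factor as a session setting. A hedged usage sketch follows, assuming the key is `spark.sql.limit.scaleUpFactor`; the exact key name is not confirmed by this page.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("take-backoff-demo").getOrCreate()

// Assumption: this is the config key added in SQLConf.scala. A larger
// factor reads more partitions per retry, so fewer jobs run overall.
spark.conf.set("spark.sql.limit.scaleUpFactor", "8")

// take() now grows the scanned-partition count by the configured factor
// instead of jumping to a full scan after a short first attempt.
spark.range(0, 1000000, 1, 200).toDF("id").take(100)
```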