[SPARK-20897][SQL] cached self-join should not fail
## What changes were proposed in this pull request?

The failing test case involves a `SortMergeJoinExec` for a self-join, which means the query plan contains a `ReusedExchange` node. The query works fine without caching, but throws an exception in `SortMergeJoinExec.outputPartitioning` once it is cached.

The root cause is that `ReusedExchange` does not propagate the output partitioning of its child, so in `SortMergeJoinExec.outputPartitioning` we build a `PartitioningCollection` from a hash partitioning and an unknown partitioning, and fail.

This bug is normally harmless: inserting `ReusedExchange` is the last step of preparing the physical plan, and `SortMergeJoinExec.outputPartitioning` is never called after that. However, if the dataframe is cached, its physical plan becomes an `InMemoryTableScanExec`, which wraps another physical plan representing the cached query. That inner plan has already gone through the entire planning phase and may contain `ReusedExchange`. The planner then calls `InMemoryTableScanExec.outputPartitioning`, which in turn calls `SortMergeJoinExec.outputPartitioning` and triggers this bug.

## How was this patch tested?

A new regression test.

Author: Wenchen Fan <wenchen@databricks.com>

Closes #18121 from cloud-fan/bug.
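The regression test itself is not shown here, but a minimal sketch of the failing scenario might look like the following. The session setup, sample data, and column names are illustrative and not taken from the patch; `spark.sql.autoBroadcastJoinThreshold` is set to `-1` only to force a sort merge join so a shuffle exchange (and hence a `ReusedExchange`) appears in the self-join plan.

```scala
import org.apache.spark.sql.SparkSession

object SelfJoinCacheRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("SPARK-20897 repro")
      // Disable broadcast joins so the planner picks SortMergeJoinExec,
      // whose shuffle exchange can be reused across both sides of a self-join.
      .config("spark.sql.autoBroadcastJoinThreshold", "-1")
      .getOrCreate()
    import spark.implicits._

    val df = Seq(1 -> "a", 2 -> "b").toDF("i", "j")
    val joined = df.as("t1").join(df.as("t2"), $"t1.i" === $"t2.i")

    // Without caching this works: ReusedExchange is inserted at the very end
    // of physical planning, after outputPartitioning has been consulted.
    // With caching, the already-planned self-join (containing ReusedExchange)
    // is wrapped in InMemoryTableScanExec, whose outputPartitioning call used
    // to trigger the failure described above.
    joined.cache()
    println(joined.count())

    spark.stop()
  }
}
```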
Showing 2 changed files:

- sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/Exchange.scala (20 additions, 1 deletion)
- sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala (10 additions, 0 deletions)