[SPARK-16006][SQL] Attempting to write empty DataFrame with no fields throws non-intuitive exception
## What changes were proposed in this pull request?

This PR allows `emptyDataFrame.write`, since the user didn't specify any partition columns.

**Before**
```scala
scala> spark.emptyDataFrame.write.parquet("/tmp/t1")
org.apache.spark.sql.AnalysisException: Cannot use all columns for partition columns;

scala> spark.emptyDataFrame.write.csv("/tmp/t1")
org.apache.spark.sql.AnalysisException: Cannot use all columns for partition columns;
```

After this PR, no exception occurs and the created directory contains only one file, `_SUCCESS`, as expected.

## How was this patch tested?

Pass the Jenkins tests including updated test cases.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #13730 from dongjoon-hyun/SPARK-16006.
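The following is a minimal standalone sketch of the post-fix behavior, not part of the patch itself. The object name `EmptyDataFrameWrite` and the output paths are hypothetical, and it assumes a Spark build that includes this change (later releases may add their own checks on empty schemas):

```scala
// Hypothetical example app illustrating the behavior described in this PR.
import org.apache.spark.sql.SparkSession

object EmptyDataFrameWrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EmptyDataFrameWrite")
      .master("local[*]")
      .getOrCreate()

    // spark.emptyDataFrame has zero rows and zero columns.
    // Before SPARK-16006 these writes failed with:
    //   AnalysisException: Cannot use all columns for partition columns
    // After the fix they succeed, producing a directory containing only _SUCCESS.
    spark.emptyDataFrame.write.parquet("/tmp/t1")
    spark.emptyDataFrame.write.csv("/tmp/t2")

    spark.stop()
  }
}
```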
Showing 2 changed files:
- sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala (1 addition, 1 deletion)
- sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala (2 additions, 1 deletion)