[SPARK-16034][SQL] Checks the partition columns when calling dataFrame.write.mode("append").saveAsTable

## What changes were proposed in this pull request?

`DataFrameWriter` can be used to append data to existing data source tables. It becomes tricky when the partition columns passed to `DataFrameWriter.partitionBy(columns)` don't match the actual partition columns of the underlying table. This pull request enforces a check so that these two sets of partition columns always match.

## How was this patch tested?

Unit test.

Author: Sean Zhong <seanzhong@databricks.com>

Closes #13749 from clockfly/SPARK-16034.
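To illustrate the behavior this change enforces, here is a minimal spark-shell-style sketch. The table name `t` and the column names are hypothetical, chosen only for illustration; the exact error message is not quoted from the patch.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical local session for the sketch.
val spark = SparkSession.builder().master("local[*]").appName("SPARK-16034-sketch").getOrCreate()
import spark.implicits._

val df = Seq((1, "a", "x"), (2, "b", "y")).toDF("id", "p1", "p2")

// Create a data source table partitioned by "p1".
df.write.partitionBy("p1").saveAsTable("t")

// With this patch, appending with a different partitioning is rejected
// during analysis instead of silently writing an inconsistent layout.
df.write.mode("append").partitionBy("p2").saveAsTable("t")
```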
Showing 3 changed files:

- sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala (7 additions, 2 deletions)
- sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala (19 additions, 20 deletions)
- sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala (24 additions, 0 deletions)
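Conceptually, the enforcement amounts to comparing the user-specified partition columns against those recorded for the existing table. The following standalone sketch shows that kind of check under stated assumptions; it is not the actual diff in DataSource.scala (which works with Spark's internal catalog types and raises an `AnalysisException`), and the function name is made up for illustration.

```scala
// Illustrative sketch of a partition-column consistency check, not the
// patch's actual code. Order matters, since partition column order
// determines the physical directory layout.
def checkPartitionColumns(specified: Seq[String], existing: Seq[String]): Unit = {
  val matches = specified.length == existing.length &&
    specified.zip(existing).forall { case (s, e) => s.equalsIgnoreCase(e) }
  require(matches,
    s"Specified partition columns (${specified.mkString(", ")}) do not match " +
    s"the partition columns of the existing table (${existing.mkString(", ")}).")
}

checkPartitionColumns(Seq("p1"), Seq("p1"))  // passes
checkPartitionColumns(Seq("p2"), Seq("p1"))  // throws IllegalArgumentException
```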