[SPARK-21617][SQL] Store correct table metadata when altering schema in Hive metastore.
For Hive tables, the current "replace the schema" code is the correct path, except that an exception in that path should result in an error, and not in retrying in a different way.

For data source tables, Spark may generate a non-compatible Hive table; but for that to work with Hive 2.1, the detection of data source tables needs to be fixed in the Hive client, to also consider the raw tables used by code such as `alterTableSchema`.

Tested with existing and added unit tests (plus internal tests with a 2.1 metastore).

Author: Marcelo Vanzin <vanzin@cloudera.com>

Closes #18849 from vanzin/SPARK-21617.
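To make the two paths concrete, here is a minimal Spark SQL sketch (not code from this patch; the table names `hive_t` and `ds_t` are illustrative). Altering a Hive-format table goes through the "replace the schema" path in `HiveExternalCatalog`, while a table created with `USING` keeps its case-sensitive schema in table properties, which is why the Hive client must recognize data source tables even from raw metastore entries:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("alter-schema-example")
  .enableHiveSupport() // schema changes go through HiveExternalCatalog
  .getOrCreate()

// Hive-format table: ADD COLUMNS replaces the schema directly in the
// metastore; per this change, a failure here surfaces as an error rather
// than triggering a retry through a different path.
spark.sql("CREATE TABLE hive_t (id INT) STORED AS PARQUET")
spark.sql("ALTER TABLE hive_t ADD COLUMNS (name STRING)")

// Data source table: the real schema is stored in table properties, and the
// Hive-visible schema may be a lossy, Hive-compatible approximation, so the
// Hive client must detect data source tables even on raw table entries.
spark.sql("CREATE TABLE ds_t (id INT) USING parquet")
spark.sql("ALTER TABLE ds_t ADD COLUMNS (name STRING)")
```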
Showing 4 changed files with 171 additions and 28 deletions
- sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala (3 additions, 12 deletions)
- sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala (40 additions, 15 deletions)
- sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala (2 additions, 1 deletion)
- sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/Hive_2_1_DDLSuite.scala (126 additions, 0 deletions)