Commit 2f383788 authored by gatorsmile's avatar gatorsmile Committed by Michael Armbrust

[SPARK-11360][DOC] Loss of nullability when writing parquet files

This fix adds one line explaining the current behavior of Spark SQL when writing Parquet files: all columns are forced to be nullable for compatibility reasons.

Author: gatorsmile <gatorsmile@gmail.com>

Closes #9314 from gatorsmile/lossNull.
parent 9565c246
...@@ -982,7 +982,8 @@ when a table is dropped.
[Parquet](http://parquet.io) is a columnar format that is supported by many other data processing systems.
Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema
of the original data. When writing Parquet files, all columns are automatically converted to be nullable for
compatibility reasons.
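The documented conversion can be observed by round-tripping a DataFrame with a non-nullable column through Parquet. A minimal sketch against the Spark 1.x API, assuming an existing `sc` and `sqlContext` as in the rest of this guide (the path `people.parquet` is illustrative):

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// Build a DataFrame whose "id" column is declared non-nullable.
val schema = StructType(Seq(StructField("id", IntegerType, nullable = false)))
val df = sqlContext.createDataFrame(sc.parallelize(Seq(Row(1), Row(2))), schema)

df.schema("id").nullable  // false before writing

df.write.parquet("people.parquet")

// After the round trip, the column is reported as nullable,
// since all columns are converted to nullable on write.
sqlContext.read.parquet("people.parquet").schema("id").nullable  // true
```

Code that relies on `nullable = false` surviving a Parquet round trip should therefore re-apply the stricter schema after reading.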
### Loading Data Programmatically
...