Commit cc4ab37e authored by Hossein's avatar Hossein Committed by Michael Armbrust

[SPARK-13754] Keep old data source name for backwards compatibility

## Motivation
The CSV data source was contributed by Databricks. It is the inlined version of https://github.com/databricks/spark-csv, where the data source name was `com.databricks.spark.csv`. As a result, many tables created on older versions of Spark use that name as the source. For backwards compatibility, we should keep the old name working.

## Proposed changes
`com.databricks.spark.csv` was added to the `backwardCompatibilityMap` in `ResolvedDataSource.scala`, mapping it to the inlined CSV data source class.
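The mechanism is a simple alias table consulted before class resolution. A minimal sketch of the idea in plain Scala, assuming illustrative names (the hypothetical `DataSourceResolver` object and the canonical class-name string below stand in for Spark's real internals):

```scala
// Sketch of a backwards-compatibility alias map for data source names.
// The object name and the target class-name string are illustrative,
// not the exact Spark internals.
object DataSourceResolver {

  // Legacy provider names mapped to their current canonical class names.
  private val backwardCompatibilityMap: Map[String, String] = Map(
    "com.databricks.spark.csv" ->
      "org.apache.spark.sql.execution.datasources.csv.DefaultSource"
  )

  // Resolve a user-supplied provider name, translating legacy aliases
  // first; unknown names pass through unchanged for normal lookup.
  def lookupDataSource(provider: String): String =
    backwardCompatibilityMap.getOrElse(provider, provider)
}
```

With this in place, `lookupDataSource("com.databricks.spark.csv")` returns the canonical class name, while any other provider string is returned as-is and resolved through the usual path.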

## Tests
A unit test was added to `CSVSuite` that parses a CSV file using the old data source name.

Author: Hossein <hossein@databricks.com>

Closes #11589 from falaki/SPARK-13754.
Parent commit: 982ef2b8
```diff
@@ -75,7 +75,8 @@ case class DataSource(
     "org.apache.spark.sql.json" -> classOf[json.DefaultSource].getCanonicalName,
     "org.apache.spark.sql.json.DefaultSource" -> classOf[json.DefaultSource].getCanonicalName,
     "org.apache.spark.sql.parquet" -> classOf[parquet.DefaultSource].getCanonicalName,
-    "org.apache.spark.sql.parquet.DefaultSource" -> classOf[parquet.DefaultSource].getCanonicalName
+    "org.apache.spark.sql.parquet.DefaultSource" -> classOf[parquet.DefaultSource].getCanonicalName,
+    "com.databricks.spark.csv" -> classOf[csv.DefaultSource].getCanonicalName
   )

   /** Given a provider name, look up the data source class definition. */
```
```diff
@@ -466,4 +466,14 @@ class CSVSuite extends QueryTest with SharedSQLContext with SQLTestUtils {
       df.schema.fields.map(field => field.dataType).deep ==
         Array(IntegerType, IntegerType, IntegerType, IntegerType).deep)
   }
+
+  test("old csv data source name works") {
+    val cars = sqlContext
+      .read
+      .format("com.databricks.spark.csv")
+      .option("header", "false")
+      .load(testFile(carsFile))
+
+    verifyCars(cars, withHeader = false, checkTypes = false)
+  }
 }
```