Commit cb8ea9e1 authored by Tathagata Das, committed by Michael Armbrust

[SPARK-14741][SQL] Fixed error in reading json file stream inside a partitioned directory

## What changes were proposed in this pull request?

Consider the following directory structure:

```
dir/col=X/some-files
```

If we create a text-format streaming DataFrame on `dir/col=X/`, then `col` should not be treated as a partitioning column. The streaming DataFrame itself handles this correctly, but the batch DataFrames generated for each batch pick up `col` as a partitioning column, causing a mismatch between the streaming source schema and the generated DataFrame's schema. This leads to a runtime failure:
```
18:55:11.262 ERROR org.apache.spark.sql.execution.streaming.StreamExecution: Query query-0 terminated with error
java.lang.AssertionError: assertion failed: Invalid batch: c#2 != c#7,type#8
```
The root cause is that the partition-inference code has no notion of a base path above which it must not search for partitions. This PR ensures that each batch DataFrame is generated with `basePath` set to the original path on which the file stream source is defined.
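To illustrate the batch-side behavior this relies on (a hedged sketch, not code from this PR; the paths and file names are placeholders, and this reflects the era's file-listing behavior described above):

```
// The file stream source turns each batch of new files into a batch read.
// Given only leaf-file paths, partition discovery at the time looked
// upward from the files and inferred `col=X` as a partition column:
val perBatch = sqlContext.read.json("dir/col=X/file1", "dir/col=X/file2")
// schema: c, col  -- does not match the streaming schema (just c)

// Pinning `basePath` to the stream's own path stops discovery above it:
val fixed = sqlContext.read
  .option("basePath", "dir/col=X")
  .json("dir/col=X/file1", "dir/col=X/file2")
// schema: c  -- matches the streaming source schema
```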

## How was this patch tested?

New unit test

Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #12517 from tdas/SPARK-14741.
```
@@ -186,7 +186,8 @@ case class DataSource(
             userSpecifiedSchema = Some(dataSchema),
             className = className,
             options =
-              new CaseInsensitiveMap(options.filterKeys(_ != "path"))).resolveRelation()))
+              new CaseInsensitiveMap(
+                options.filterKeys(_ != "path") + ("basePath" -> path))).resolveRelation()))
     }

     new FileStreamSource(
```
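Taken in isolation, the effect of the changed line on the options map is the following (illustrative values; `someOption` is a made-up key, and `path` stands for the stream's source path):

```
val path = "dir/col=X"                                   // stream source path
val options = Map("path" -> path, "someOption" -> "v")   // user-supplied options
val batchOptions = options.filterKeys(_ != "path") + ("basePath" -> path)
// batchOptions: Map("someOption" -> "v", "basePath" -> "dir/col=X")
```

`path` is dropped because each batch reads an explicit list of files rather than the directory, while `basePath` pins partition discovery to the stream's source directory.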
```
@@ -281,6 +281,30 @@ class FileStreamSourceSuite extends FileStreamSourceTest with SharedSQLContext {
     Utils.deleteRecursively(tmp)
   }

+  test("reading from json files inside partitioned directory") {
+    val src = {
+      val base = Utils.createTempDir(namePrefix = "streaming.src")
+      new File(base, "type=X")
+    }
+    val tmp = Utils.createTempDir(namePrefix = "streaming.tmp")
+    src.mkdirs()
+
+    // Add a file so that we can infer its schema
+    stringToFile(new File(src, "existing"), "{'c': 'drop1'}\n{'c': 'keep2'}\n{'c': 'keep3'}")
+
+    val textSource = createFileStreamSource("json", src.getCanonicalPath)
+
+    // FileStreamSource should infer the column "c"
+    val filtered = textSource.toDF().filter($"c" contains "keep")
+
+    testStream(filtered)(
+      AddTextFileData(textSource, "{'c': 'drop4'}\n{'c': 'keep5'}\n{'c': 'keep6'}", src, tmp),
+      CheckAnswer("keep2", "keep3", "keep5", "keep6")
+    )
+  }
+
   test("read from parquet files") {
     val src = Utils.createTempDir(namePrefix = "streaming.src")
     val tmp = Utils.createTempDir(namePrefix = "streaming.tmp")
```
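For reference, the on-disk layout the new test drives looks roughly like this (a sketch; the temp-dir names are generated at runtime):

```
streaming.srcXXXX/        <- base temp dir (Utils.createTempDir)
  type=X/                 <- path the file stream source is created on
    existing              <- seed file used to infer the schema ("c")
    ...                   <- files added via AddTextFileData during the test
```

Because the source is created on `type=X` itself, the fix ensures the per-batch reads never infer a `type` column, which is exactly the mismatch the assertion error in the description reported.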