Commit d5ec4a3e authored by Davies Liu's avatar Davies Liu Committed by Shixiong Zhu

[SPARK-17738][TEST] Fix flaky test in ColumnTypeSuite

## What changes were proposed in this pull request?

The default buffer size is not big enough for randomly generated MapType values, so the test could overflow the buffer depending on the random data.

## How was this patch tested?

Ran the tests 100 times; they never failed (they failed 8 times before the patch).

Author: Davies Liu <davies@databricks.com>

Closes #15395 from davies/flaky_map.
parent 03c40202
@@ -101,14 +101,15 @@ class ColumnTypeSuite extends SparkFunSuite with Logging {
   def testColumnType[JvmType](columnType: ColumnType[JvmType]): Unit = {
-    val buffer = ByteBuffer.allocate(DEFAULT_BUFFER_SIZE).order(ByteOrder.nativeOrder())
     val proj = UnsafeProjection.create(Array[DataType](columnType.dataType))
     val converter = CatalystTypeConverters.createToScalaConverter(columnType.dataType)
     val seq = (0 until 4).map(_ => proj(makeRandomRow(columnType)).copy())
+    val totalSize = seq.map(_.getSizeInBytes).sum
+    val bufferSize = Math.max(DEFAULT_BUFFER_SIZE, totalSize)
     test(s"$columnType append/extract") {
-      buffer.rewind()
-      seq.foreach(columnType.append(_, 0, buffer))
+      val buffer = ByteBuffer.allocate(bufferSize).order(ByteOrder.nativeOrder())
+      seq.foreach(r => columnType.append(columnType.getField(r, 0), buffer))
       buffer.rewind()
       seq.foreach { row =>
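The idea behind the fix can be illustrated in isolation: allocate the `ByteBuffer` from the actual total serialized size of the rows instead of a fixed default, so a batch of unusually large random values (such as a big `MapType` row) can never overflow it. This is a minimal standalone sketch, not the Spark code; the sizes and `DefaultBufferSize` constant here are made up for illustration.

```scala
import java.nio.{ByteBuffer, ByteOrder}

object BufferSizingSketch {
  // Hypothetical stand-in for the suite's DEFAULT_BUFFER_SIZE.
  val DefaultBufferSize = 64

  def main(args: Array[String]): Unit = {
    // Simulated serialized sizes of 4 random rows; a random map row
    // can easily push the total past the fixed default.
    val rowSizes = Seq(16, 16, 48, 40)
    val totalSize = rowSizes.sum // 120 > 64

    // The fix: size the buffer from the data, falling back to the default
    // only when the rows are small.
    val bufferSize = math.max(DefaultBufferSize, totalSize)
    val buffer = ByteBuffer.allocate(bufferSize).order(ByteOrder.nativeOrder())

    // Appending all rows now always fits; with a fixed 64-byte buffer this
    // loop would throw BufferOverflowException for these sizes.
    rowSizes.foreach(n => buffer.put(new Array[Byte](n)))
    println(buffer.position()) // 120
  }
}
```

Computing the size from `seq.map(_.getSizeInBytes).sum` (as the patch does) removes the dependence on what the random generator happens to produce, which is what made the test flaky.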