  1. Dec 08, 2016
  2. Nov 28, 2016
  3. Nov 14, 2016
    • [SPARK-18124] Observed delay based Event Time Watermarks · 27999b36
      Michael Armbrust authored
      
      This PR adds a new method `withWatermark` to the `Dataset` API, which can be used to specify an _event time watermark_.  An event time watermark allows the streaming engine to reason about the point in time after which we no longer expect to see late data.  This PR also augments `StreamExecution` to use this watermark for several purposes:
        - To know when a given time window aggregation is finalized and thus results can be emitted when using output modes that do not allow updates (e.g. `Append` mode).
        - To minimize the amount of state that we need to keep for on-going aggregations, by evicting state for groups that are no longer expected to change.  However, we still maintain all state if the query requires it (i.e. if the event time is not present in the `groupBy`, or when running in `Complete` mode).
      
      An example that emits windowed counts of records, waiting up to 5 minutes for late data to arrive.
      ```scala
      df.withWatermark("eventTime", "5 minutes")
        .groupBy(window($"eventTime", "1 minute") as 'window)
        .count()
        .writeStream
        .format("console")
        .mode("append") // In append mode, we only output finalized aggregations.
        .start()
      ```
      
      ### Calculating the watermark
      The current event time is computed by looking at the `MAX(eventTime)` seen this epoch across all of the partitions in the query, minus some user-defined _delayThreshold_.  An additional constraint is that the watermark must increase monotonically.
      
      Note that since we must coordinate this value across partitions, and do so only occasionally, the actual watermark used is only guaranteed to be at least `delayThreshold` behind the actual event time.  In some cases we may still process records that arrive more than `delayThreshold` late.
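
      A minimal sketch of the update rule described above (illustrative names, not Spark internals):
      ```scala
      // maxEventTimeMs: MAX(eventTime) observed across all partitions this epoch.
      // delayMs: the user-supplied delayThreshold.
      def nextWatermark(prevWatermarkMs: Long, maxEventTimeMs: Long, delayMs: Long): Long =
        math.max(prevWatermarkMs, maxEventTimeMs - delayMs) // enforces monotonic increase
      ```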
      
      This mechanism was chosen for the initial implementation over processing time for two reasons:
        - it is robust to downtime that could affect processing delay
        - it does not require syncing of time or timezones between the producer and the processing engine.
      
      ### Other notable implementation details
       - A new trigger metric `eventTimeWatermark` outputs the current value of the watermark.
       - We mark the event time column in the `Attribute` metadata using the key `spark.watermarkDelay`.  This allows downstream operations to know which column holds the event time.  Operations like `window` propagate this metadata.
       - `explain()` marks the watermark with a suffix of `-T${delayMs}` to ease debugging of how this information is propagated.
       - Currently, we don't filter out late records, but instead rely on the state store to avoid emitting records that are both added and filtered in the same epoch.
      
      ### Remaining in this PR
       - [ ] The test for recovery is currently failing as we don't record the watermark used in the offset log.  We will need to do so to ensure determinism, but this is deferred until #15626 is merged.
      
      ### Other follow-ups
      There are some natural additional features that we should consider for future work:
       - Ability to write records that arrive too late to some external store in case any out-of-band remediation is required.
       - `Update` mode so you can get partial results before a group is evicted.
       - Other mechanisms for calculating the watermark.  In particular a watermark based on quantiles would be more robust to outliers.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #15702 from marmbrus/watermarks.
      
      (cherry picked from commit c0718782)
      Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
  4. Oct 04, 2016
  5. Sep 27, 2016
    • [SPARK-15962][SQL] Introduce implementation with a dense format for UnsafeArrayData · 85b0a157
      Kazuaki Ishizaki authored
      ## What changes were proposed in this pull request?
      
      This PR introduces a more compact representation for ```UnsafeArrayData```.
      
      ```UnsafeArrayData``` needs to accept a ```null``` value in each entry of an array. In the current version, it has three parts:
      ```
      [numElements] [offsets] [values]
      ```
      `offsets` contains `numElements` entries; a negative entry represents `null`. This increases the memory footprint and introduces an indirection for accessing each of the `values`.
      
      This PR uses a bit vector to represent the nullability of each element, as `UnsafeRow` does, and eliminates the indirection for accessing each element. The new ```UnsafeArrayData``` has four parts:
      ```
      [numElements][null bits][values or offset&length][variable length portion]
      ```
      In the `null bits` region, we store 1 bit per element, representing whether that element is null. Its total size is ceil(numElements / 8) bytes, and it is aligned to 8-byte boundaries.
      In the `values or offset&length` region, we store the content of the elements. For fields that hold fixed-length primitive types, such as long, double, or int, we store the value directly in the field. For fields with non-primitive or variable-length values, we store a relative offset (w.r.t. the base address of the array) that points to the beginning of the variable-length field, together with its length (the two are combined into a single long). Each entry is word-aligned. In the `variable length portion`, each value is aligned to 8-byte boundaries.
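
      A hedged sketch of the region sizes implied by this layout (an illustrative helper, not code from this PR):
      ```scala
      // Round a byte count up to the next 8-byte boundary.
      def roundTo8(bytes: Long): Long = ((bytes + 7) / 8) * 8

      // Size of the fixed-width portion of the new format for a primitive array.
      def fixedRegionSize(numElements: Long, elementSize: Int): Long = {
        val header   = 8L                                  // [numElements], word-aligned
        val nullBits = roundTo8((numElements + 7) / 8)     // 1 bit per element, 8-byte aligned
        val values   = roundTo8(numElements * elementSize) // fixed-length values region
        header + nullBits + values
      }
      ```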
      
      The new format can reduce the memory footprint and improve the performance of accessing each element. An example memory footprint comparison for a 1024x1024-element integer array:
      Size of ```baseObject``` for ```UnsafeArrayData``` without this PR: 8 + 1024x1024 + 1024x1024 = 2M bytes
      Size of ```baseObject``` for ```UnsafeArrayData``` with this PR: 8 + 1024x1024/8 + 1024x1024 = 1.25M bytes
      
      In summary, we got 1.0-2.6x performance improvements over the code before applying this PR.
      Here are performance results of [benchmark programs](https://github.com/kiszk/spark/blob/04d2e4b6dbdc4eff43ce18b3c9b776e0129257c7/sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/UnsafeArrayDataBenchmark.scala):
      
      **Read UnsafeArrayData**: 1.7x and 1.6x performance improvements over the code before applying this PR
      ````
      OpenJDK 64-Bit Server VM 1.8.0_91-b14 on Linux 4.4.11-200.fc22.x86_64
      Intel Xeon E3-12xx v2 (Ivy Bridge)
      
      Without SPARK-15962
      Read UnsafeArrayData:                    Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      Int                                            430 /  436        390.0           2.6       1.0X
      Double                                         456 /  485        367.8           2.7       0.9X
      
      With SPARK-15962
      Read UnsafeArrayData:                    Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      Int                                            252 /  260        666.1           1.5       1.0X
      Double                                         281 /  292        597.7           1.7       0.9X
      ````
      **Write UnsafeArrayData**: 1.0x and 1.1x performance improvements over the code before applying this PR
      ````
      OpenJDK 64-Bit Server VM 1.8.0_91-b14 on Linux 4.0.4-301.fc22.x86_64
      Intel Xeon E3-12xx v2 (Ivy Bridge)
      
      Without SPARK-15962
      Write UnsafeArrayData:                   Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      Int                                            203 /  273        103.4           9.7       1.0X
      Double                                         239 /  356         87.9          11.4       0.8X
      
      With SPARK-15962
      Write UnsafeArrayData:                   Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      Int                                            196 /  249        107.0           9.3       1.0X
      Double                                         227 /  367         92.3          10.8       0.9X
      ````
      
      **Get primitive array from UnsafeArrayData**: 2.6x and 1.6x performance improvements over the code before applying this PR
      ````
      OpenJDK 64-Bit Server VM 1.8.0_91-b14 on Linux 4.0.4-301.fc22.x86_64
      Intel Xeon E3-12xx v2 (Ivy Bridge)
      
      Without SPARK-15962
      Get primitive array from UnsafeArrayData: Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      Int                                            207 /  217        304.2           3.3       1.0X
      Double                                         257 /  363        245.2           4.1       0.8X
      
      With SPARK-15962
      Get primitive array from UnsafeArrayData: Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      Int                                            151 /  198        415.8           2.4       1.0X
      Double                                         214 /  394        293.6           3.4       0.7X
      ````
      
      **Create UnsafeArrayData from primitive array**: 1.7x and 2.1x performance improvements over the code before applying this PR
      ````
      OpenJDK 64-Bit Server VM 1.8.0_91-b14 on Linux 4.0.4-301.fc22.x86_64
      Intel Xeon E3-12xx v2 (Ivy Bridge)
      
      Without SPARK-15962
      Create UnsafeArrayData from primitive array: Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      Int                                            340 /  385        185.1           5.4       1.0X
      Double                                         479 /  705        131.3           7.6       0.7X
      
      With SPARK-15962
      Create UnsafeArrayData from primitive array: Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      Int                                            206 /  211        306.0           3.3       1.0X
      Double                                         232 /  406        271.6           3.7       0.9X
      ````
      
      1.7x and 1.4x performance improvements in [```UDTSerializationBenchmark```](https://github.com/apache/spark/blob/master/mllib/src/test/scala/org/apache/spark/mllib/linalg/UDTSerializationBenchmark.scala) over the code before applying this PR:
      ````
      OpenJDK 64-Bit Server VM 1.8.0_91-b14 on Linux 4.4.11-200.fc22.x86_64
      Intel Xeon E3-12xx v2 (Ivy Bridge)
      
      Without SPARK-15962
      VectorUDT de/serialization:              Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      serialize                                      442 /  533          0.0      441927.1       1.0X
      deserialize                                    217 /  274          0.0      217087.6       2.0X
      
      With SPARK-15962
      VectorUDT de/serialization:              Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
      ------------------------------------------------------------------------------------------------
      serialize                                      265 /  318          0.0      265138.5       1.0X
      deserialize                                    155 /  197          0.0      154611.4       1.7X
      ````
      
      ## How was this patch tested?
      
      Added unit tests into ```UnsafeArraySuite```
      
      Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
      
      Closes #13680 from kiszk/SPARK-15962.
  6. Sep 06, 2016
  7. Sep 01, 2016
    • [SPARK-17331][CORE][MLLIB] Avoid allocating 0-length arrays · 3893e8c5
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Avoid allocating some 0-length arrays, especially in `UTF8String`, by using `Array.empty` in Scala instead of `Array[T]()`.
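
      For illustration, the pattern in question (a sketch, not code from this patch):
      ```scala
      // Array[Int]() goes through the varargs apply and allocates a fresh
      // zero-length array on every call; Array.empty[Int] returns a
      // cached, shared empty-array instance.
      val fresh  = Array[Int]()
      val shared = Array.empty[Int]
      ```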
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #14895 from srowen/SPARK-17331.
  8. Aug 22, 2016
    • [SPARK-17127] Make unaligned access in unsafe available for AArch64 · 083de00c
      Richael authored
      ## What changes were proposed in this pull request?
      
      Since Spark 2.0.0, when MemoryMode.OFF_HEAP is set, Spark checks whether the architecture supports unaligned access. If the check doesn't pass, an exception is raised.
      
      We know that AArch64 also supports unaligned access, but currently only i386, x86, amd64, and x86_64 are included.
      
      I think we should include aarch64 when performing the check.
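
      A sketch of the kind of check involved (the real logic lives in Spark's unsafe `Platform` class and may differ in detail):
      ```scala
      // Architectures known to support unaligned access, with aarch64 added.
      val supportedArchs = Set("i386", "x86", "amd64", "x86_64", "aarch64")
      val arch = System.getProperty("os.arch", "").toLowerCase(java.util.Locale.ROOT)
      val unalignedSupported = supportedArchs.contains(arch)
      ```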
      
      ## How was this patch tested?
      
      Unit test suite
      
      Author: Richael <Richael.Zhuang@arm.com>
      
      Closes #14700 from yimuxi/zym_change_unsafe.
  9. Jul 19, 2016
  10. Jul 13, 2016
    • [MINOR] Fix Java style errors and remove unused imports · f73891e0
      Xin Ren authored
      ## What changes were proposed in this pull request?
      
      Fix Java style errors and remove unused imports, which were found along the way.
      
      ## How was this patch tested?
      
      Tested on my local machine.
      
      Author: Xin Ren <iamshrek@126.com>
      
      Closes #14161 from keypointt/SPARK-16437.
  11. Jul 11, 2016
    • [SPARK-16477] Bump master version to 2.1.0-SNAPSHOT · ffcb6e05
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      After SPARK-16476 (committed earlier today as #14128), we can finally bump the version number.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #14130 from rxin/SPARK-16477.
  12. Jul 06, 2016
  13. Jun 03, 2016
    • [SPARK-15391] [SQL] manage the temporary memory of timsort · 3074f575
      Davies Liu authored
      ## What changes were proposed in this pull request?
      
      Currently, the memory for the temporary buffer used by TimSort is always allocated on-heap without bookkeeping, which could cause OOM in both on-heap and off-heap modes.
      
      This PR tries to manage that memory by preallocating it together with the pointer array, as is done for RadixSort. This works for both on-heap and off-heap modes.
      
      This PR also changes the loadFactor of BytesToBytesMap to 0.5 (it was 0.70); this enables the use of radix sort and also makes sure that we have enough memory for TimSort.
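
      A sketch of the bookkeeping idea, under the assumption that TimSort's temporary merge buffer needs at most n/2 entries (names are illustrative, not Spark's actual internals):
      ```scala
      // Words to reserve when allocating the pointer array, so the sorter's
      // temporary buffer is accounted for by the memory manager up front.
      def pointerArrayWords(numRecords: Long): Long = {
        val pointers    = numRecords     // one word per record pointer
        val timSortTemp = numRecords / 2 // worst-case TimSort merge buffer
        pointers + timSortTemp
      }
      ```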
      
      ## How was this patch tested?
      
      Existing tests.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #13318 from davies/fix_timsort.
  14. May 29, 2016
    • [MINOR] Resolve a number of miscellaneous build warnings · ce1572d1
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      This change resolves a number of build warnings that have accumulated, before 2.x. It does not address a large number of deprecation warnings, especially related to the Accumulator API. That will happen separately.
      
      ## How was this patch tested?
      
      Jenkins
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #13377 from srowen/BuildWarnings.
  15. May 17, 2016
  16. Apr 28, 2016
  17. Apr 26, 2016
    • [SPARK-14756][CORE] Use parseLong instead of valueOf · de6e6334
      Azeem Jiva authored
      ## What changes were proposed in this pull request?
      
      Use Long.parseLong, which returns a primitive.
      Using a series of append() calls avoids the creation of an extra StringBuilder.
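
      An illustrative contrast (a sketch, not code from this patch):
      ```scala
      // valueOf returns a boxed java.lang.Long; parseLong returns a primitive.
      val boxed: java.lang.Long = java.lang.Long.valueOf("42")
      val prim: Long            = java.lang.Long.parseLong("42")

      // Chained appends build the result in one StringBuilder, rather than
      // concatenating strings and creating intermediate objects.
      val sb = new java.lang.StringBuilder
      sb.append("a").append("=").append(prim)
      ```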
      
      ## How was this patch tested?
      
      Unit tests
      
      Author: Azeem Jiva <azeemj@gmail.com>
      
      Closes #12520 from javawithjiva/minor.
  18. Apr 01, 2016
    • [SPARK-13992] Add support for off-heap caching · e41acb75
      Josh Rosen authored
      This patch adds support for caching blocks in the executor processes using direct / off-heap memory.
      
      ## User-facing changes
      
      **Updated semantics of `OFF_HEAP` storage level**: In Spark 1.x, the `OFF_HEAP` storage level indicated that an RDD should be cached in Tachyon. Spark 2.x removed the external block store API that Tachyon caching was based on (see #10752 / SPARK-12667), so `OFF_HEAP` became an alias for `MEMORY_ONLY_SER`. As of this patch, `OFF_HEAP` means "serialized and cached in off-heap memory or on disk". Via the `StorageLevel` constructor, `useOffHeap` can be set if `serialized == true` and can be used to construct custom storage levels which support replication.
      
      **Storage UI reporting**: the storage UI will now report whether in-memory blocks are stored on- or off-heap.
      
      **Only supported by UnifiedMemoryManager**: for simplicity, this feature is only supported when the default UnifiedMemoryManager is used; applications which use the legacy memory manager (`spark.memory.useLegacyMode=true`) are not currently able to allocate off-heap storage memory, so using off-heap caching will fail with an error when legacy memory management is enabled. Given that we plan to eventually remove the legacy memory manager, this is not a significant restriction.
      
      **Memory management policies:** the policies for dividing available memory between execution and storage are the same for both on- and off-heap memory. For off-heap memory, the total amount of memory available for use by Spark is controlled by `spark.memory.offHeap.size`, which is an absolute size. Off-heap storage memory obeys `spark.memory.storageFraction` in order to control the amount of unevictable storage memory. For example, if `spark.memory.offHeap.size` is 1 gigabyte and Spark uses the default `storageFraction` of 0.5, then up to 500 megabytes of off-heap cached blocks will be protected from eviction due to execution memory pressure. If necessary, we can split `spark.memory.storageFraction` into separate on- and off-heap configurations, but this doesn't seem necessary now and can be done later without any breaking changes.
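
      A worked version of the example above (sizes in bytes; illustrative only):
      ```scala
      val offHeapSize     = 1024L * 1024 * 1024 // spark.memory.offHeap.size = 1g
      val storageFraction = 0.5                 // default spark.memory.storageFraction
      // Off-heap cached blocks protected from eviction by execution memory pressure:
      val unevictable = (offHeapSize * storageFraction).toLong // ~512MB
      ```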
      
      **Use of off-heap memory does not imply use of off-heap execution (or vice-versa)**: for now, the settings controlling the use of off-heap execution memory (`spark.memory.offHeap.enabled`) and off-heap caching are completely independent, so Spark SQL can be configured to use off-heap memory for execution while continuing to cache blocks on-heap. If desired, we can change this in a followup patch so that `spark.memory.offHeap.enabled` affects the default storage level for cached SQL tables.
      
      ## Internal changes
      
      - Rename `ByteArrayChunkOutputStream` to `ChunkedByteBufferOutputStream`
        - It now returns a `ChunkedByteBuffer` instead of an array of byte arrays.
        - Its constructor now accepts an `allocator` function which is called to allocate `ByteBuffer`s. This allows us to control whether it allocates regular `ByteBuffer`s or off-heap `DirectByteBuffer`s (see the sketch after this list).
        - Because block serialization is now performed during the unroll process, a `ChunkedByteBufferOutputStream` which is configured with a `DirectByteBuffer` allocator will use off-heap memory for both unroll and storage memory.
      - The `MemoryStore`'s MemoryEntries now track whether blocks are stored on- or off-heap.
        - `evictBlocksToFreeSpace()` now accepts a `MemoryMode` parameter so that we don't try to evict off-heap blocks in response to on-heap memory pressure (or vice-versa).
      - Make sure that off-heap buffers are properly de-allocated during MemoryStore eviction.
      - The JVM limits the total size of allocated direct byte buffers using the `-XX:MaxDirectMemorySize` flag and the default tends to be fairly low (< 512 megabytes in some JVMs). To work around this limitation, this patch adds a custom DirectByteBuffer allocator which ignores this memory limit.
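
      A hedged sketch of the allocator idea referenced above (the constructor shape is assumed, not Spark's exact signature):
      ```scala
      import java.nio.ByteBuffer

      // The output stream takes a function deciding where each chunk lives.
      val onHeapAllocator: Int => ByteBuffer  = ByteBuffer.allocate       // regular heap buffers
      val offHeapAllocator: Int => ByteBuffer = ByteBuffer.allocateDirect // off-heap direct buffers
      // e.g. new ChunkedByteBufferOutputStream(chunkSize, offHeapAllocator)
      ```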
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #11805 from JoshRosen/off-heap-caching.
  19. Mar 29, 2016
  20. Mar 21, 2016
    • [SPARK-14011][CORE][SQL] Enable `LineLength` Java checkstyle rule · 20fd2541
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      [Spark Coding Style Guide](https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide) has a 100-character limit on lines, but it has been disabled for Java since 11/09/15. This PR enables the **LineLength** checkstyle rule again. To help with that, this also introduces **RedundantImport** and **RedundantModifier**. The following is the diff on `checkstyle.xml`.
      
      ```xml
      -        <!-- TODO: 11/09/15 disabled - the lengths are currently > 100 in many places -->
      -        <!--
               <module name="LineLength">
                   <property name="max" value="100"/>
                   <property name="ignorePattern" value="^package.*|^import.*|a href|href|http://|https://|ftp://"/>
               </module>
      -        -->
               <module name="NoLineWrap"/>
               <module name="EmptyBlock">
                   <property name="option" value="TEXT"/>
      @@ -167,5 +164,7 @@
               </module>
               <module name="CommentsIndentation"/>
               <module name="UnusedImports"/>
      +        <module name="RedundantImport"/>
      +        <module name="RedundantModifier"/>
      ```
      
      ## How was this patch tested?
      
      Currently, `lint-java` is disabled in Jenkins. It needs a manual test.
      After passing the Jenkins tests, `dev/lint-java` should pass locally.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #11831 from dongjoon-hyun/SPARK-14011.
  21. Mar 16, 2016
    • [SPARK-13823][SPARK-13397][SPARK-13395][CORE] More warnings, StandardCharset follow up · 3b461d9e
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Follow up to https://github.com/apache/spark/pull/11657
      
      - Also update `String.getBytes("UTF-8")` to use `StandardCharsets.UTF_8`
      - And fix one last new Coverity warning that turned up (use of unguarded `wait()` replaced by simpler/more robust `java.util.concurrent` classes in tests)
      - And while we're here cleaning up Coverity warnings, just fix about 15 more build warnings
      
      ## How was this patch tested?
      
      Jenkins tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #11725 from srowen/SPARK-13823.2.
  22. Mar 13, 2016
    • [SPARK-13823][CORE][STREAMING][SQL] Always specify Charset in String <-> byte[] conversions (and remaining Coverity items) · 18408528
      Sean Owen authored
      
      ## What changes were proposed in this pull request?
      
      - Fixes calls to `new String(byte[])` or `String.getBytes()` that rely on the platform default encoding, to use UTF-8 (see the sketch after this list)
      - Same for `InputStreamReader` and `OutputStreamWriter` constructors
      - Standardizes on UTF-8 everywhere
      - Standardizes specifying the encoding with `StandardCharsets.UTF_8`, not the Guava constant or "UTF-8" (which would mean handling `UnsupportedEncodingException`)
      - (also addresses the other remaining Coverity scan issues, which are pretty trivial; these are separated into commit https://github.com/srowen/spark/commit/1deecd8d9ca986d8adb1a42d315890ce5349d29c )
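
      The pattern being standardized, sketched:
      ```scala
      import java.nio.charset.StandardCharsets

      val s = "héllo"
      val bad  = s.getBytes                         // platform-default encoding; varies by JVM/locale
      val good = s.getBytes(StandardCharsets.UTF_8) // explicit UTF-8, no UnsupportedEncodingException
      val back = new String(good, StandardCharsets.UTF_8)
      ```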
      
      ## How was this patch tested?
      
      Jenkins tests
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #11657 from srowen/SPARK-13823.
  23. Mar 09, 2016
    • [SPARK-13692][CORE][SQL] Fix trivial Coverity/Checkstyle defects · f3201aee
      Dongjoon Hyun authored
      ## What changes were proposed in this pull request?
      
      This issue fixes the following potential bugs and Java coding style detected by Coverity and Checkstyle.
      
      - Implement both null and type checking in equals functions.
      - Fix wrong type casting logic in SimpleJavaBean2.equals.
      - Add `implements Cloneable` to `UTF8String` and `SortedIterator`.
      - Remove dereferencing before null check in `AbstractBytesToBytesMapSuite`.
      - Fix coding style: Add '{}' to single `for` statement in mllib examples.
      - Remove unused imports in `ColumnarBatch` and `JavaKinesisStreamSuite`.
      - Remove unused fields in `ChunkFetchIntegrationSuite`.
      - Add `stop()` to prevent resource leak.
      
      Please note that the last two checkstyle errors exist on newly added commits after [SPARK-13583](https://issues.apache.org/jira/browse/SPARK-13583).
      
      ## How was this patch tested?
      
      manual via `./dev/lint-java` and Coverity site.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #11530 from dongjoon-hyun/SPARK-13692.
  24. Mar 03, 2016
    • [SPARK-13423][WIP][CORE][SQL][STREAMING] Static analysis fixes for 2.x · e97fc7f1
      Sean Owen authored
      ## What changes were proposed in this pull request?
      
      Make some cross-cutting code improvements according to static analysis. These are individually up for discussion since they exist in separate commits that can be reverted. The changes are broadly:
      
      - Inner class should be static
      - Mismatched hashCode/equals
      - Overflow in compareTo
      - Unchecked warnings
      - Misuse of assert, vs junit.assert
      - get(a) + getOrElse(b) -> getOrElse(a,b)
      - Array/String .size -> .length (occasionally, -> .isEmpty / .nonEmpty) to avoid implicit conversions
      - Dead code
      - tailrec
      - exists(_ == ...) -> contains
      - find + nonEmpty -> exists
      - filter + size -> count
      - reduce(_+_) -> sum
      - map + flatten -> flatMap
      
      The most controversial may be .size -> .length, simply because of the sheer number of occurrences. It is intended to avoid implicits that might be expensive in some places.
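
      A few of the rewrites above, sketched on plain collections (illustrative only):
      ```scala
      val xs = Seq(1, 2, 3)
      xs.contains(2)          // instead of xs.exists(_ == 2)
      xs.exists(_ > 2)        // instead of xs.find(_ > 2).nonEmpty
      xs.count(_ > 1)         // instead of xs.filter(_ > 1).size
      xs.sum                  // instead of xs.reduce(_ + _)
      xs.flatMap(i => Seq(i)) // instead of xs.map(i => Seq(i)).flatten

      val arr = Array(1, 2, 3)
      arr.length              // instead of arr.size, which needs an implicit conversion
      ```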
      
      ## How was this patch tested?
      
      Existing Jenkins unit tests.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #11292 from srowen/SPARK-13423.
  25. Mar 01, 2016
    • [SPARK-13548][BUILD] Move tags and unsafe modules into common · b0ee7d43
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      This patch moves the tags and unsafe modules into the common directory, removing two top-level non-user-facing directories.
      
      ## How was this patch tested?
      Jenkins should suffice.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #11426 from rxin/SPARK-13548.