  1. Jun 21, 2017
    • [MINOR][DOC] modified issue link and updated status · e92befcb
      Marcos P authored
      ## What changes were proposed in this pull request?
      
      This PR clarifies some outdated comments that I found in the **spark-catalyst** and **spark-sql** pom files. The Maven bug is still happening, so to track it I have updated the issue link and the status of the issue.
      
      Author: Marcos P <mpenate@stratio.com>
      
      Closes #18374 from mpenate/fix/mng-3559-comment.
  2. Jun 15, 2017
    • [SPARK-18016][SQL][CATALYST] Code Generation: Constant Pool Limit - Class Splitting · b32b2123
      ALeksander Eskilson authored
      ## What changes were proposed in this pull request?
      
      This pull request includes only the class-splitting feature described in #16648. When the code for a given class would grow beyond 1600k bytes, a private nested sub-class is generated, into which subsequent functions are inlined. Additional sub-classes are generated each time the threshold is reached again. This PR includes three changes (a sketch follows the list):
      
      1. Includes helper maps, lists, and functions for keeping track of sub-classes during code generation (included in the `CodeGenerator` class). These helper functions allow nested classes and split functions to be initialized/declared/inlined to the appropriate locations in the various projection classes.
      2. Changes `addNewFunction` to return a string to support instances where a split function is inlined to a nested class and not the outer class (and so must be invoked using the class-qualified name). Uses of `addNewFunction` throughout the codebase are modified so that the returned name is properly used.
      3. Removes instances of the `this` keyword when used on data inside generated classes. All state declared in the outer class is by default global and accessible to the nested classes. However, if a reference to global state in a nested class is prepended with the `this` keyword, it would attempt to reference state belonging to the nested class (which would not exist), rather than the correct variable belonging to the outer class.
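
      A minimal sketch of the splitting bookkeeping, assuming a toy generator: the class name, `classSizeThreshold`, and single-`StringBuilder`-per-class model below are illustrative only, not Spark's actual `CodeGenerator` internals.

      ```scala
      // Illustrative-only sketch: open a new nested class once the current one is
      // "full", and return the (possibly class-qualified) name of the new function.
      class SplittingCodeGen {
        private val classSizeThreshold = 1600000 // 1600k bytes, per this PR
        private var currentClass = "OuterClass"
        private val classBodies =
          scala.collection.mutable.Map("OuterClass" -> new StringBuilder)

        /** Adds a function and returns the name callers must use to invoke it. */
        def addNewFunction(funcName: String, funcCode: String): String = {
          if (classBodies(currentClass).length + funcCode.length > classSizeThreshold) {
            currentClass = s"NestedClass${classBodies.size}" // start a fresh nested class
            classBodies(currentClass) = new StringBuilder
          }
          classBodies(currentClass).append(funcCode)
          // Functions inlined into a nested class must be invoked with a
          // class-qualified name (change 2 above).
          if (currentClass == "OuterClass") funcName else s"$currentClass.$funcName"
        }
      }
      ```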
      
      ## How was this patch tested?
      
      Added a test case to the `GeneratedProjectionSuite` that increases the number of columns tested in various projections to a threshold that would previously have triggered a `JaninoRuntimeException` for the Constant Pool.
      
      Note: This PR does not address the second Constant Pool issue with code generation (also mentioned in #16648): excess global mutable state. A second PR may be opened to resolve that issue.
      
      Author: ALeksander Eskilson <alek.eskilson@cerner.com>
      
      Closes #18075 from bdrillard/class_splitting_only.
  3. May 07, 2017
    • [SPARK-20557][SQL] Support JDBC data type Time with Time Zone · cafca54c
      Xiao Li authored
      ### What changes were proposed in this pull request?
      
      This PR adds support for the JDBC data type TIME WITH TIME ZONE, which is converted to TIMESTAMP.
      
      In addition, before this PR, for unsupported data types we simply output the numeric type code instead of the type name:
      
      ```
      java.sql.SQLException: Unsupported type 2014
      ```
      After this PR, the message looks like:
      ```
      java.sql.SQLException: Unsupported type TIMESTAMP_WITH_TIMEZONE
      ```
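
      The readable name can be derived from the numeric code with the standard `java.sql.JDBCType` enum; a hedged sketch (the helper name is illustrative, not necessarily the exact code in this PR):

      ```scala
      import java.sql.{JDBCType, SQLException}

      // Map a java.sql.Types code to a readable name for the error message.
      // JDBCType.valueOf(2014) yields TIMESTAMP_WITH_TIMEZONE.
      def unsupportedTypeException(sqlType: Int): SQLException = {
        val name =
          try JDBCType.valueOf(sqlType).getName
          catch { case _: IllegalArgumentException => sqlType.toString }
        new SQLException(s"Unsupported type $name")
      }
      ```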
      
      - Also upgrades the H2 version to `1.4.195`, which includes the type fix for "TIMESTAMP WITH TIMEZONE". H2 still does not fully support the type, so we capture the exception, but we keep the test because it partially exercises "TIMESTAMP WITH TIMEZONE" support and the Docker tests are not run regularly.
      
      ### How was this patch tested?
      Added test cases.
      
      Author: Xiao Li <gatorsmile@gmail.com>
      
      Closes #17835 from gatorsmile/h2.
  4. Apr 24, 2017
  5. Apr 09, 2017
    • [SPARK-20264][SQL] asm should be non-test dependency in sql/core · 7bfa05e0
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      The sql/core module currently declares asm as a test-scope dependency. It should actually be a normal (compile-scope) dependency, since the core module defines it and sql/core inherits it transitively. This occasionally confuses IntelliJ.
      
      ## How was this patch tested?
      N/A - This is a build change.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #17574 from rxin/SPARK-20264.
  6. Feb 08, 2017
    • [SPARK-19409][BUILD][TEST-MAVEN] Fix ParquetAvroCompatibilitySuite failure due... · 0077bfcb
      Dongjoon Hyun authored
      [SPARK-19409][BUILD][TEST-MAVEN] Fix ParquetAvroCompatibilitySuite failure due to test dependency on avro
      
      ## What changes were proposed in this pull request?
      
      After upgrading to Apache Parquet 1.8.2, `ParquetAvroCompatibilitySuite` fails in the **Maven** tests. This is because `org.apache.parquet.avro.AvroParquetWriter` in the test code uses `LogicalType`, a class specific to the new `avro 1.8.0`. This PR fixes the test dependency of the `sql/core` module to use avro 1.8.0.
      
      https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-2.7/2530/consoleFull
      
      ```
      ParquetAvroCompatibilitySuite:
      *** RUN ABORTED ***
        java.lang.NoClassDefFoundError: org/apache/avro/LogicalType
        at org.apache.parquet.avro.AvroParquetWriter.writeSupport(AvroParquetWriter.java:144)
      ```
      
      ## How was this patch tested?
      
      Pass the existing test with **Maven**.
      
      ```
      $ build/mvn -Pyarn -Phadoop-2.7 -Pkinesis-asl -Phive -Phive-thriftserver test
      ...
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD SUCCESS
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 02:07 h
      [INFO] Finished at: 2017-02-04T05:41:43+00:00
      [INFO] Final Memory: 77M/987M
      [INFO] ------------------------------------------------------------------------
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #16795 from dongjoon-hyun/SPARK-19409-2.
  7. Dec 21, 2016
    • [SPARK-17807][CORE] split test-tags into test-JAR · afd9bc1d
      Ryan Williams authored
      Remove spark-tags' compile-scope dependency (and, indirectly, spark-core's compile-scope transitive dependency) on scalatest by splitting the test-oriented tags into spark-tags' test JAR.
      
      Alternative to #16303.
      
      Author: Ryan Williams <ryan.blake.williams@gmail.com>
      
      Closes #16311 from ryan-williams/tt.
  8. Dec 02, 2016
  9. Oct 05, 2016
  10. Sep 21, 2016
  11. Jul 19, 2016
  12. Jul 11, 2016
    • [SPARK-16477] Bump master version to 2.1.0-SNAPSHOT · ffcb6e05
      Reynold Xin authored
      ## What changes were proposed in this pull request?
      After SPARK-16476 (committed earlier today as #14128), we can finally bump the version number.
      
      ## How was this patch tested?
      N/A
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #14130 from rxin/SPARK-16477.
  13. May 25, 2016
  14. May 17, 2016
  15. May 05, 2016
  16. Apr 28, 2016
  17. Apr 21, 2016
    • [SPARK-14797][BUILD] Spark SQL POM should not hardcode spark-sketch_2.11 dep. · 649335d6
      Josh Rosen authored
      Spark SQL's POM hardcodes a dependency on `spark-sketch_2.11`, which causes Scala 2.10 builds to include the `_2.11` dependency. This is harmless since `spark-sketch` is a pure-Java module (see #12334 for a discussion of dropping the Scala version suffixes from these modules' artifactIds), but it's confusing to people looking at the published POMs.
      
      This patch fixes this by using `${scala.binary.version}` to substitute the correct suffix, and also adds a set of Maven Enforcer rules to ensure that `_2.11` artifacts are not used in 2.10 builds (and vice-versa).
      
      /cc ahirreddy, who spotted this issue.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #12563 from JoshRosen/fix-sketch-scala-version.
  18. Apr 08, 2016
    • [SPARK-14103][SQL] Parse unescaped quotes in CSV data source. · 725b860e
      hyukjinkwon authored
      ## What changes were proposed in this pull request?
      
      This PR resolves a problem with parsing unescaped quotes in input data. For example, currently the data below:
      
      ```
      "a"b,ccc,ddd
      e,f,g
      ```
      
      produces the output below:
      
      - **Before**
      
      ```bash
      ["a"b,ccc,ddd[\n]e,f,g]  <- as a value.
      ```
      
      - **After**
      
      ```bash
      ["a"b], [ccc], [ddd]
      [e], [f], [g]
      ```
      
      This PR bumps up the Univocity parser's version, since this was fixed in `2.0.2`: https://github.com/uniVocity/univocity-parsers/issues/60.
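
      As a hedged way to observe the fixed behavior from the DataFrame side, assuming a `SparkSession` named `spark` and the two-line sample above saved at an illustrative path `/tmp/quotes.csv`:

      ```scala
      // After the fix, each physical line parses as its own row instead of the
      // whole file collapsing into a single value.
      val rows = spark.read.csv("/tmp/quotes.csv").collect()
      rows.foreach(println)
      // Expected:
      //   ["a"b,ccc,ddd]
      //   [e,f,g]
      ```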
      
      ## How was this patch tested?
      
      Unit tests in `CSVSuite` and `sbt/sbt scalastyle`.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #12226 from HyukjinKwon/SPARK-14103-quote.
  19. Mar 11, 2016
    • [SPARK-13780][SQL] Add missing dependency to build. · 99b7187c
      Marcelo Vanzin authored
      This is needed to avoid odd compiler errors when building just the sql package with Maven, caused by interactions between scalac and shaded classes.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #11640 from vanzin/SPARK-13780.
  20. Jan 30, 2016
    • [SPARK-6363][BUILD] Make Scala 2.11 the default Scala version · 289373b2
      Josh Rosen authored
      This patch changes Spark's build to make Scala 2.11 the default Scala version. To be clear, this does not mean that Spark will stop supporting Scala 2.10: users will still be able to compile Spark for Scala 2.10 by following the instructions on the "Building Spark" page; however, it does mean that Scala 2.11 will be the default Scala version used by our CI builds (including pull request builds).
      
      The Scala 2.11 compiler is faster than 2.10, so I think we'll be able to look forward to a slight speedup in our CI builds (it looks like it's about 2X faster for the Maven compile-only builds, for instance).
      
      After this patch is merged, I'll update Jenkins to add new compile-only jobs to ensure that Scala 2.10 compilation doesn't break.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #10608 from JoshRosen/SPARK-6363.
  21. Jan 26, 2016
    • [SPARK-12935][SQL] DataFrame API for Count-Min Sketch · ce38a35b
      Cheng Lian authored
      This PR integrates Count-Min Sketch from spark-sketch into DataFrame. This version resorts to `RDD.aggregate` for building the sketch. A more performant UDAF version can be built in future follow-up PRs.
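
      A brief usage sketch of the DataFrame-side API, assuming a `SparkSession` named `spark`; the `depth`/`width`/`seed` values are arbitrary, and the signature shown matches the stabilized `df.stat.countMinSketch` API rather than necessarily this PR's initial form:

      ```scala
      import org.apache.spark.util.sketch.CountMinSketch

      // Build a Count-Min Sketch over a column, then query an approximate frequency.
      val df = spark.range(0, 10000).toDF("id")
      val cms: CountMinSketch =
        df.stat.countMinSketch("id", depth = 10, width = 200, seed = 42)
      println(cms.estimateCount(1234L)) // approximate number of rows with id = 1234
      ```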
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #10911 from liancheng/cms-df-api.
  22. Jan 15, 2016
    • [SPARK-12833][SQL] Initial import of spark-csv · 5f83c699
      Hossein authored
      CSV is the most common data format in the "small data" world. It is often the first format people want to try when they use Spark on a single node. Having to rely on a third-party component for this leads to a poor experience for new users. This PR merges the popular spark-csv data source package (https://github.com/databricks/spark-csv) into Spark SQL.
      
      This is a first PR to bring the functionality to the Spark 2.0 master branch. We will complete the items outlined in the design document (see the JIRA attachment) in follow-up pull requests.
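
      A hedged usage sketch of the now-built-in source, assuming a `SparkSession` named `spark` and an illustrative file name:

      ```scala
      // Read CSV with the built-in data source; no external spark-csv package needed.
      val people = spark.read
        .format("csv")
        .option("header", "true")      // first line holds column names
        .option("inferSchema", "true") // sample the data to infer column types
        .load("people.csv")
      people.printSchema()
      ```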
      
      Author: Hossein <hossein@databricks.com>
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #10766 from rxin/csv.
  23. Dec 19, 2015
  24. Dec 09, 2015
  25. Nov 11, 2015
  26. Oct 07, 2015
  27. Sep 15, 2015
  28. Aug 02, 2015
    • [SPARK-9531] [SQL] UnsafeFixedWidthAggregationMap.destructAndCreateExternalSorter · 2e981b7b
      Reynold Xin authored
      This pull request adds a destructAndCreateExternalSorter method to UnsafeFixedWidthAggregationMap. The new method does the following:
      
      1. Creates a new external sorter UnsafeKVExternalSorter
      2. Adds all the data into an in-memory sorter and sorts it
      3. Spills the sorted in-memory data to disk
      
      This method can be used to fall back to sort-based aggregation when under memory pressure.
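
      An illustrative-only sketch of that flow, with hypothetical trait and parameter names (the real classes are UnsafeFixedWidthAggregationMap and UnsafeKVExternalSorter):

      ```scala
      // Hypothetical minimal interface standing in for UnsafeKVExternalSorter.
      trait KVSorter {
        def insert(key: Array[Byte], value: Array[Byte]): Unit
        def spill(): Unit
      }

      def destructAndCreateExternalSorter(
          entries: Iterator[(Array[Byte], Array[Byte])], // the map's records
          newSorter: () => KVSorter): KVSorter = {
        val sorter = newSorter()                                // 1. new external sorter
        entries.foreach { case (k, v) => sorter.insert(k, v) }  // 2. add and sort in memory
        sorter.spill()                                          // 3. spill sorted run to disk
        sorter // caller continues with sort-based aggregation under memory pressure
      }
      ```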
      
      The pull request also includes accounting fixes from JoshRosen.
      
      TODOs (that can be done in follow-up PRs)
      - [x] Address Josh's feedback from #7849
      - [x] More documentation and test cases
      - [x] Make sure we are doing memory accounting correctly with test cases (e.g. did we release the memory in BytesToBytesMap twice?)
      - [ ] Look harder at possible memory leaks and exception handling
      - [ ] Randomized tester for the KV sorter as well as the aggregation map
      
      Author: Reynold Xin <rxin@databricks.com>
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #7860 from rxin/kvsorter and squashes the following commits:
      
      986a58c [Reynold Xin] Bug fix.
      599317c [Reynold Xin] Style fix and slightly more compact code.
      fe7bd4e [Reynold Xin] Bug fixes.
      fd71bef [Reynold Xin] Merge remote-tracking branch 'josh/large-records-in-sql-sorter' into kvsorter-with-josh-fix
      3efae38 [Reynold Xin] More fixes and documentation.
      45f1b09 [Josh Rosen] Ensure that spill files are cleaned up
      f6a9bd3 [Reynold Xin] Josh feedback.
      9be8139 [Reynold Xin] Remove testSpillFrequency.
      7cbe759 [Reynold Xin] [SPARK-9531][SQL] UnsafeFixedWidthAggregationMap.destructAndCreateExternalSorter.
      ae4a8af [Josh Rosen] Detect leaked unsafe memory in UnsafeExternalSorterSuite.
      52f9b06 [Josh Rosen] Detect ShuffleMemoryManager leaks in UnsafeExternalSorter.
  29. Jul 16, 2015
    • [SPARK-9015] [BUILD] Clean project import in scala ide · b536d5dc
      Jan Prach authored
      Clean up the Maven build for a clean project import in Scala IDE / Eclipse.
      
      * remove groovy plugin which is really not needed at all
      * add-source from build-helper-maven-plugin is not needed, as recent versions of scala-maven-plugin do it automatically
      * add lifecycle-mapping plugin to hide a few useless warnings from ide
      
      Author: Jan Prach <jendap@gmail.com>
      
      Closes #7375 from jendap/clean-project-import-in-scala-ide and squashes the following commits:
      
      c4b4c0f [Jan Prach] fix whitespaces
      5a83e07 [Jan Prach] Revert "remove java compiler warnings from java tests"
      312007e [Jan Prach] scala-maven-plugin itself add scala sources by default
      f47d856 [Jan Prach] remove spark-1.4-staging repository
      c8a54db [Jan Prach] remove java compiler warnings from java tests
      999a068 [Jan Prach] remove some maven warnings in scala ide
      80fbdc5 [Jan Prach] remove groovy and gmavenplus plugin
  30. Jul 09, 2015
    • [SPARK-8959] [SQL] [HOTFIX] Removes parquet-thrift and libthrift dependencies · 2d45571f
      Cheng Lian authored
      These two dependencies were introduced in #7231 to help testing Parquet compatibility with `parquet-thrift`. However, they somehow crash the Scala compiler in Maven builds.
      
      This PR fixes this issue by:
      
      1. Removing these two dependencies, and
      2. Instead of generating the testing Parquet file programmatically, checking in an actual testing Parquet file generated by `parquet-thrift` as a test resource.
      
      This is just a quick fix to bring back the Maven builds. We still need to figure out the root cause, as binary Parquet files are harder to maintain.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #7330 from liancheng/spark-8959 and squashes the following commits:
      
      cf69512 [Cheng Lian] Brings back Maven builds
  31. Jul 08, 2015
    • [SPARK-6123] [SPARK-6775] [SPARK-6776] [SQL] Refactors Parquet read path for... · 4ffc27ca
      Cheng Lian authored
      [SPARK-6123] [SPARK-6775] [SPARK-6776] [SQL] Refactors Parquet read path for interoperability and backwards-compatibility
      
      This PR is a follow-up of #6617 and is part of [SPARK-6774] [2], which aims to ensure interoperability and backwards-compatibility for Spark SQL Parquet support.  And this one fixes the read path.  Now Spark SQL is expected to be able to read legacy Parquet data files generated by most (if not all) common libraries/tools like parquet-thrift, parquet-avro, and parquet-hive. However, we still need to refactor the write path to write standard Parquet LISTs and MAPs ([SPARK-8848] [4]).
      
      ### Major changes
      
      1. `CatalystConverter` class hierarchy refactoring
      
         - Replaces `CatalystConverter` trait with a much simpler `ParentContainerUpdater`.
      
           Now instead of extending the original `CatalystConverter` trait, every converter class accepts an updater which is responsible for propagating the converted value to some parent container. For example, appending array elements to a parent array buffer, appending a key-value pairs to a parent mutable map, or setting a converted value to some specific field of a parent row. Root converter doesn't have a parent and thus uses a `NoopUpdater`.
      
           This simplifies the design, since converters no longer need to care about the details of their parent converters (see the sketch after this list).
      
         - Unifies `CatalystRootConverter`, `CatalystGroupConverter` and `CatalystPrimitiveRowConverter` into `CatalystRowConverter`
      
           Specifically, now all row objects are represented by `SpecificMutableRow` during conversion.
      
         - Refactors `CatalystArrayConverter`, and removes `CatalystArrayContainsNullConverter` and `CatalystNativeArrayConverter`
      
           `CatalystNativeArrayConverter` was probably designed with the intention of avoiding boxing costs. However, the way it uses Scala generics actually doesn't achieve this goal.
      
           The new `CatalystArrayConverter` handles both nullable and non-nullable array elements in a consistent way.
      
         - Implements backwards-compatibility rules in `CatalystArrayConverter`
      
           When Parquet records are being converted, schema of Parquet files should have already been verified. So we only need to care about the structure rather than field names in the Parquet schema. Since all map objects represented in legacy systems have the same structure as the standard one (see [backwards-compatibility rules for MAP] [1]), we only need to deal with LIST (namely array) in `CatalystArrayConverter`.
      
      2. Requested columns handling
      
          When specifying requested columns in `RowReadSupport`, we used to use a Parquet `MessageType` converted from a Catalyst `StructType` containing all requested columns. This is not preferable when taking compatibility and interoperability into consideration, because the actual Parquet file may have a different physical structure from the converted schema.
      
         In this PR, the schema for requested columns is constructed using the following method:
      
         - For a column that exists in the target Parquet file, we extract the column type by name from the full file schema, and construct a single-field `MessageType` for that column.
         - For a column that doesn't exist in the target Parquet file, we create a single-field `StructType` and convert it to a `MessageType` using `CatalystSchemaConverter`.
          - Unions all single-field `MessageType`s into a full schema containing all requested fields.
      
          With this change, we also fix [SPARK-6123] [3] by validating the global schema against each individual Parquet part-file.
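
      To make the updater idea concrete, here is a hypothetical minimal sketch; the trait name follows the description above, but the bodies are illustrative rather than Spark's actual converter code:

      ```scala
      // Each converter pushes its converted value to its parent through an updater,
      // so it never needs to know what kind of container the parent is.
      trait ParentContainerUpdater {
        def set(value: Any): Unit
      }

      // The root converter has no parent, so its updater does nothing.
      object NoopUpdater extends ParentContainerUpdater {
        def set(value: Any): Unit = ()
      }

      // A row converter hands each child an updater bound to one ordinal of its row.
      class RowConverter(row: Array[Any]) {
        def updaterFor(ordinal: Int): ParentContainerUpdater =
          new ParentContainerUpdater {
            def set(value: Any): Unit = row(ordinal) = value
          }
      }

      // A leaf converter simply forwards converted values upward.
      class IntFieldConverter(updater: ParentContainerUpdater) {
        def addInt(v: Int): Unit = updater.set(v)
      }
      ```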
      
      ### Testing
      
      This PR also adds compatibility tests for parquet-avro, parquet-thrift, and parquet-hive. Please refer to `README.md` under `sql/core/src/test` for more information about these tests. To avoid build time code generation and adding extra complexity to the build system, Java code generated from testing Thrift schema and Avro IDL is also checked in.
      
      [1]: https://github.com/apache/incubator-parquet-format/blob/master/LogicalTypes.md#backward-compatibility-rules-1
      [2]: https://issues.apache.org/jira/browse/SPARK-6774
      [3]: https://issues.apache.org/jira/browse/SPARK-6123
      [4]: https://issues.apache.org/jira/browse/SPARK-8848
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #7231 from liancheng/spark-6776 and squashes the following commits:
      
      360fe18 [Cheng Lian] Adds ParquetHiveCompatibilitySuite
      c6fbc06 [Cheng Lian] Removes WIP file committed by mistake
      b8c1295 [Cheng Lian] Excludes the whole parquet package from MiMa
      598c3e8 [Cheng Lian] Adds extra Maven repo for hadoop-lzo, which is a transitive dependency of parquet-thrift
      926af87 [Cheng Lian] Simplifies Parquet compatibility test suites
      7946ee1 [Cheng Lian] Fixes Scala styling issues
      3d7ab36 [Cheng Lian] Fixes .rat-excludes
      a8f13bb [Cheng Lian] Using Parquet writer API to do compatibility tests
      f2208cd [Cheng Lian] Adds README.md for Thrift/Avro code generation
      1d390aa [Cheng Lian] Adds parquet-thrift compatibility test
      440f7b3 [Cheng Lian] Adds generated files to .rat-excludes
      13b9121 [Cheng Lian] Adds ParquetAvroCompatibilitySuite
      06cfe9d [Cheng Lian] Adds comments about TimestampType handling
      a099d3e [Cheng Lian] More comments
      0cc1b37 [Cheng Lian] Fixes MiMa checks
      884d3e6 [Cheng Lian] Fixes styling issue and reverts unnecessary changes
      802cbd7 [Cheng Lian] Fixes bugs related to schema merging and empty requested columns
      38fe1e7 [Cheng Lian] Adds explicit return type
      7fb21f1 [Cheng Lian] Reverts an unnecessary debugging change
      1781dff [Cheng Lian] Adds test case for SPARK-8811
      6437d4b [Cheng Lian] Assembles requested schema from Parquet file schema
      bcac49f [Cheng Lian] Removes the 16-byte restriction of decimals
      a74fb2c [Cheng Lian] More comments
      0525346 [Cheng Lian] Removes old Parquet record converters
      03c3bd9 [Cheng Lian] Refactors Parquet read path to implement backwards-compatibility rules
  32. Jun 22, 2015
    • [SPARK-8307] [SQL] improve timestamp from parquet · 6b7f2cea
      Davies Liu authored
      This PR changes the conversion from Julian day to Unix timestamp to compute it directly (without going through Calendar and Timestamp).
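
      A hedged sketch of the direct arithmetic, relying on the standard fact that the Unix epoch (1970-01-01) is Julian day 2440588; the function name mirrors Spark's `DateTimeUtils.fromJulianDay`, but the body here is illustrative:

      ```scala
      // Convert (Julian day, nanoseconds within the day) to microseconds since the
      // Unix epoch with plain arithmetic; no Calendar or Timestamp objects involved.
      val JulianDayOfEpoch = 2440588L
      val MicrosPerDay = 24L * 60 * 60 * 1000 * 1000

      def fromJulianDay(days: Int, nanoseconds: Long): Long =
        (days - JulianDayOfEpoch) * MicrosPerDay + nanoseconds / 1000
      ```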
      
      cc adrian-wang rxin
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #6759 from davies/improve_ts and squashes the following commits:
      
      849e301 [Davies Liu] Merge branch 'master' of github.com:apache/spark into improve_ts
      b0e4cad [Davies Liu] Merge branch 'master' of github.com:apache/spark into improve_ts
      8e2d56f [Davies Liu] address comments
      634b9f5 [Davies Liu] fix mima
      4891efb [Davies Liu] address comment
      bfc437c [Davies Liu] fix build
      ae5979c [Davies Liu] Merge branch 'master' of github.com:apache/spark into improve_ts
      602b969 [Davies Liu] remove jodd
      2f2e48c [Davies Liu] fix test
      8ace611 [Davies Liu] fix mima
      212143b [Davies Liu] fix mina
      c834108 [Davies Liu] Merge branch 'master' of github.com:apache/spark into improve_ts
      a3171b8 [Davies Liu] Merge branch 'master' of github.com:apache/spark into improve_ts
      5233974 [Davies Liu] fix scala style
      361fd62 [Davies Liu] address comments
      ea196d4 [Davies Liu] improve timestamp from parquet
  33. Jun 04, 2015
    • [SPARK-7743] [SQL] Parquet 1.7 · cd3176bd
      Thomas Omans authored
      Resolves [SPARK-7743](https://issues.apache.org/jira/browse/SPARK-7743).
      
      Trivial changes of versions and package names, as well as a small fix in `ParquetTableOperations.scala`:
      
      ```diff
      -    val readContext = getReadSupport(configuration).init(
      +    val readContext = ParquetInputFormat.getReadSupportInstance(configuration).init(
      ```
      
      This change is needed because `ParquetInputFormat.getReadSupport` was made package-private in the latest release.
      
      Thanks
      -- Thomas Omans
      
      Author: Thomas Omans <tomans@cj.com>
      
      Closes #6597 from eggsby/SPARK-7743 and squashes the following commits:
      
      2df0d1b [Thomas Omans] [SPARK-7743] [SQL] Upgrading parquet version to 1.7.0
  34. Jun 03, 2015
    • [SPARK-7801] [BUILD] Updating versions to SPARK 1.5.0 · 2c4d550e
      Patrick Wendell authored
      Author: Patrick Wendell <patrick@databricks.com>
      
      Closes #6328 from pwendell/spark-1.5-update and squashes the following commits:
      
      2f42d02 [Patrick Wendell] A few more excludes
      4bebcf0 [Patrick Wendell] Update to RC4
      61aaf46 [Patrick Wendell] Using new release candidate
      55f1610 [Patrick Wendell] Another exclude
      04b4f04 [Patrick Wendell] More issues with transient 1.4 changes
      36f549b [Patrick Wendell] [SPARK-7801] [BUILD] Updating versions to SPARK 1.5.0
  35. May 29, 2015
  36. May 12, 2015
    • [SPARK-7485] [BUILD] Remove pyspark files from assembly. · 82e890fb
      Marcelo Vanzin authored
      The sbt part of the build is hacky; it basically tricks sbt
      into generating the zip by using a generator, but returns
      an empty list for the generated files so that nothing is
      actually added to the assembly.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #6022 from vanzin/SPARK-7485 and squashes the following commits:
      
      22c1e04 [Marcelo Vanzin] Remove unneeded code.
      4893622 [Marcelo Vanzin] [SPARK-7485] [build] Remove pyspark files from assembly.
  37. May 09, 2015
    • [BUILD] Reference fasterxml.jackson.version in sql/core/pom.xml · bd74301f
      tedyu authored
      Author: tedyu <yuzhihong@gmail.com>
      
      Closes #6031 from tedyu/master and squashes the following commits:
      
      5c2580c [tedyu] Reference fasterxml.jackson.version in sql/core/pom.xml
      ff2a44f [tedyu] Merge branch 'master' of github.com:apache/spark
      28c8394 [tedyu] Upgrade version of jackson-databind in sql/core/pom.xml