  1. Dec 27, 2014
    • Brennon York's avatar
      [SPARK-4501][Core] - Create build/mvn to automatically download maven/zinc/scalac · a3e51cc9
      Brennon York authored
      Creates a top-level script (`build/mvn`) that automatically downloads Zinc and the specific version of Scala needed to build Spark. It will also download and install Maven if the user doesn't already have it, and all packages are kept under the `build/` directory. Tested on both Linux and OS X. All arguments pass through to the Maven binary, so it behaves exactly like a traditional Maven call.
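      The real wrapper is a bash script, but the idea it implements (download the tool once, cache it under `build/`, then forward every argument) can be sketched in a few lines of Python. The URL, version, and paths below are illustrative placeholders, not the values pinned by the actual script.

      ```python
      import os
      import subprocess
      import sys
      import tarfile
      import urllib.request

      BUILD_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "build")
      MVN_VERSION = "3.x.y"                                   # placeholder version
      MVN_URL = "https://archive.apache.org/dist/maven/..."   # placeholder URL
      MVN_BIN = os.path.join(BUILD_DIR, "apache-maven-" + MVN_VERSION, "bin", "mvn")

      def install_mvn():
          """Download and unpack Maven under build/ only if it is not cached yet."""
          if os.path.exists(MVN_BIN):
              return
          os.makedirs(BUILD_DIR, exist_ok=True)
          archive = os.path.join(BUILD_DIR, "maven.tar.gz")
          urllib.request.urlretrieve(MVN_URL, archive)
          with tarfile.open(archive) as tar:
              tar.extractall(BUILD_DIR)

      if __name__ == "__main__":
          install_mvn()
          # Behave exactly like a plain mvn call: pass every CLI argument through.
          sys.exit(subprocess.call([MVN_BIN] + sys.argv[1:]))
      ```

      With the real script, a typical call just substitutes `build/mvn` for `mvn`, e.g. `build/mvn -DskipTests clean package`.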
      
      Author: Brennon York <brennon.york@capitalone.com>
      
      Closes #3707 from brennonyork/SPARK-4501 and squashes the following commits:
      
      0e5a0e4 [Brennon York] minor incorrect doc verbiage (with -> this)
      9b79e38 [Brennon York] fixed merge conflicts with dev/run-tests, properly quoted args in sbt/sbt, fixed bug where relative paths would fail if passed in from build/mvn
      d2d41b6 [Brennon York] added blurb about leveraging zinc with build/mvn
      b979c58 [Brennon York] updated the merge conflict
      c5634de [Brennon York] updated documentation to overview build/mvn, updated all points where sbt/sbt was referenced with build/sbt
      b8437ba [Brennon York] set progress bars for curl and wget when not run on jenkins, no progress bar when run on jenkins, moved sbt script to build/sbt, wrote stub and warning under sbt/sbt which calls build/sbt, modified build/sbt to use the correct directory, fixed bug in build/sbt-launch-lib.bash to correctly pull the sbt version
      be11317 [Brennon York] added switch to silence download progress only if AMPLAB_JENKINS is set
      28d0a99 [Brennon York] updated to remove the python dependency, uses grep instead
      7e785a6 [Brennon York] added silent and quiet flags to curl and wget respectively, added single echo output to denote start of a download if download is needed
      14a5da0 [Brennon York] removed unnecessary zinc output on startup
      1af4a94 [Brennon York] fixed bug with uppercase vs lowercase variable
      3e8b9b3 [Brennon York] updated to properly only restart zinc if it was freshly installed
      a680d12 [Brennon York] Added comments to functions and tested various mvn calls
      bb8cc9d [Brennon York] removed package files
      ef017e6 [Brennon York] removed OS complexities, setup generic install_app call, removed extra file complexities, removed help, removed forced install (defaults now), removed double-dash from cli
      07bf018 [Brennon York] Updated to specifically handle pulling down the correct scala version
      f914dea [Brennon York] Beginning final portions of localized scala home
      69c4e44 [Brennon York] working linux and osx installers for purely local mvn build
      4a1609c [Brennon York] finalizing working linux install for maven to local ./build/apache-maven folder
      cbfcc68 [Brennon York] Changed the default sbt/sbt to build/sbt and added a build/mvn which will automatically download, install, and execute maven with zinc for easier build capability
      a3e51cc9
  2. Dec 23, 2014
    • Cheng Lian's avatar
      [SPARK-4914][Build] Cleans lib_managed before compiling with Hive 0.13.1 · 395b771f
      Cheng Lian authored
      This PR tries to fix the Hive tests failure encountered in PR #3157 by cleaning `lib_managed` before building assembly jar against Hive 0.13.1 in `dev/run-tests`. Otherwise two sets of datanucleus jars would be left in `lib_managed` and may mess up class paths while executing Hive test suites. Please refer to [this thread] [1] for details. A clean build would be even safer, but we only clean `lib_managed` here to save build time.
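      As a minimal sketch of the idea (the actual change lives in the bash `dev/run-tests` script, and the sbt invocation below is illustrative):

      ```python
      import shutil
      import subprocess

      # Drop datanucleus jars left over from a previous build against another Hive
      # profile; otherwise both sets end up on the classpath of the Hive test suites.
      shutil.rmtree("lib_managed", ignore_errors=True)

      # Rebuild the assembly against Hive 0.13.1 (profile names are illustrative).
      subprocess.check_call(["sbt/sbt", "-Phive", "-Phive-0.13.1", "assembly"])
      ```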
      
      This PR also takes the chance to clean up some minor typos and formatting issues in the comments.
      
      [1]: https://github.com/apache/spark/pull/3157#issuecomment-67656488
      
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #3756 from liancheng/clean-lib-managed and squashes the following commits:
      
      e2bd21d [Cheng Lian] Adds lib_managed to clean set
      c9f2f3e [Cheng Lian] Cleans lib_managed before compiling with Hive 0.13.1
      395b771f
  3. Dec 17, 2014
  4. Dec 16, 2014
    • Andrew Or's avatar
      [Release] Cache known author translations locally · b85044ec
      Andrew Or authored
      This bypasses unnecessary calls to the GitHub and JIRA APIs.
      Additionally, having a local cache allows us to remember names
      that we had to manually discover ourselves.
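      A minimal sketch of such a cache (file name and format are assumptions, not necessarily what the release scripts use):

      ```python
      import json
      import os

      CACHE_PATH = "known_translations.json"  # hypothetical cache file

      def load_cache():
          """Load previously discovered author-name translations, if any."""
          if os.path.exists(CACHE_PATH):
              with open(CACHE_PATH) as f:
                  return json.load(f)
          return {}

      def translate(author, cache, lookup_remote):
          """Return the cached translation when available; otherwise hit the remote
          APIs (GitHub/JIRA) once and remember the answer, including names that had
          to be resolved by hand."""
          if author not in cache:
              cache[author] = lookup_remote(author)
              with open(CACHE_PATH, "w") as f:
                  json.dump(cache, f, indent=2)
          return cache[author]
      ```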
      b85044ec
    • Andrew Or's avatar
      [Release] Major improvements to generate contributors script · 6f80b749
      Andrew Or authored
      This commit introduces several major improvements to the script
      that generates the contributors list for release notes, notably:
      
      (1) Use release tags instead of a range of commits. Across branches,
      commits are not actually strictly two-dimensional, and so it is not
      sufficient to specify a start hash and an end hash. Otherwise, we
      end up counting commits that were already merged in an older branch.
      
      (2) Match PR numbers in addition to commit hashes. This is related
      to the first point: if a PR is already merged in an older minor
      release tag, it should be filtered out here. This requires us to do
      some intelligent regex parsing of the commit description in addition
      to just relying on the GitHub API (a sketch of this matching follows
      the list).
      
      (3) Relax author validity check. The old code fails on a name that
      has many middle names, for instance. The test was just too strict.
      
      (4) Use GitHub authentication. This allows us to make far more
      requests through the GitHub API than before (5000 as opposed to 60
      per hour).
      
      (5) Translate from GitHub username, not commit author name. This is
      important because the commit author name is not always configured
      correctly by the user. For instance, the username "falaki" used to
      resolve to just "Hossein", which was treated as a GitHub username
      and translated to something else that is completely arbitrary.
      
      (6) Add an option to use the untranslated name. If there is not
      a satisfactory candidate to replace the untranslated name with,
      at least allow the user to not translate it.
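      A rough sketch of the PR-number matching from point (2); the commit-message pattern and data structures here are assumptions, not the script's actual code:

      ```python
      import re
      from collections import namedtuple

      Commit = namedtuple("Commit", ["sha", "message"])

      # Hypothetical pattern: Spark merge commits mention the PR as "Closes #1234 from ...".
      PR_RE = re.compile(r"Closes #(\d+)", re.IGNORECASE)

      def pr_number(message):
          """Return the PR number referenced in a commit message, or None."""
          match = PR_RE.search(message)
          return int(match.group(1)) if match else None

      def is_new(commit, released_shas, released_prs):
          """Filter on hash first, then on PR number, so a PR already merged into an
          older release tag is excluded even when its hash differs across branches."""
          if commit.sha in released_shas:
              return False
          pr = pr_number(commit.message)
          return pr is None or pr not in released_prs
      ```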
      6f80b749
  5. Dec 09, 2014
    • Sandy Ryza's avatar
      SPARK-4338. [YARN] Ditch yarn-alpha. · 912563aa
      Sandy Ryza authored
      Sorry if this is a little premature with 1.2 still not out the door, but it will make other work like SPARK-4136 and SPARK-2089 a lot easier.
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #3215 from sryza/sandy-spark-4338 and squashes the following commits:
      
      1c5ac08 [Sandy Ryza] Update building Spark docs and remove unnecessary newline
      9c1421c [Sandy Ryza] SPARK-4338. Ditch yarn-alpha.
      912563aa
  6. Dec 04, 2014
    • Patrick Wendell's avatar
      [HOTFIX] Fixing two issues with the release script. · 8dae26f8
      Patrick Wendell authored
      1. The version replacement was still producing some false changes.
      2. It now uploads to the staging repo specifically.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #3608 from pwendell/release-script and squashes the following commits:
      
      3c63294 [Patrick Wendell] Fixing two issues with the release script:
      8dae26f8
  7. Dec 03, 2014
    • Andrew Or's avatar
      [Release] Correctly translate contributors name in release notes · a4dfb4ef
      Andrew Or authored
      This commit involves three main changes:
      
      (1) It separates the translation of contributor names from the
      generation of the contributors list. This is largely motivated
      by the Github API limit; even if we exceed this limit, we should
      at least be able to proceed manually as before. This is why the
      translation logic is abstracted into its own script
      translate-contributors.py.
      
      (2) When we look for candidate replacements for invalid author
      names, we should look for the assignees of the associated JIRAs
      too. As a result, the intermediate file must keep track of these.
      
      (3) This provides an interactive mode with which the user can
      sit at the terminal and manually pick the candidate replacement
      that he/she thinks makes the most sense. As before, there is a
      non-interactive mode that picks the first candidate that the
      script considers "valid."
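      A minimal sketch of that interactive/non-interactive choice (function name and prompt are hypothetical, not lifted from translate-contributors.py):

      ```python
      def choose_translation(author, candidates, interactive=True):
          """Pick a replacement for an invalid author name. Interactive mode lists
          the candidates and lets the user choose (or keep the untranslated name);
          non-interactive mode takes the first non-empty candidate."""
          valid = [c for c in candidates if c and c.strip()]
          if not interactive:
              return valid[0] if valid else author
          print("Candidates for '%s':" % author)
          for i, candidate in enumerate(valid):
              print("  [%d] %s" % (i, candidate))
          print("  [n] keep the untranslated name")
          choice = input("Pick one: ").strip()
          if not choice.isdigit():
              return author
          index = int(choice)
          return valid[index] if index < len(valid) else author
      ```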
      
      TODO: We should have a known_contributors file that stores
      known mappings so we don't have to go through all of this
      translation every time. This is also valuable because some
      contributors simply cannot be automatically translated.
      a4dfb4ef
  8. Dec 02, 2014
  9. Nov 29, 2014
    • Takayuki Hasegawa's avatar
      SPARK-4507: PR merge script should support closing multiple JIRA tickets · 4316a7b0
      Takayuki Hasegawa authored
      This will fix SPARK-4507.
      
      For pull requests that reference multiple JIRAs in their titles, it would be helpful if the PR merge script offered to close all of them.
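      For illustration, extracting every referenced ticket from a title could look like the following hedged sketch (not the merge script's actual code):

      ```python
      import re

      JIRA_RE = re.compile(r"SPARK-\d+", re.IGNORECASE)

      def jira_ids(pr_title):
          """All JIRA tickets referenced in a PR title, e.g.
          '[SPARK-4507][SPARK-4508] Fix two things' -> ['SPARK-4507', 'SPARK-4508']."""
          return [ticket.upper() for ticket in JIRA_RE.findall(pr_title)]
      ```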
      
      Author: Takayuki Hasegawa <takayuki.hasegawa0311@gmail.com>
      
      Closes #3428 from hase1031/SPARK-4507 and squashes the following commits:
      
      bf6d64b [Takayuki Hasegawa] SPARK-4507: try to resolve issue when no JIRAs in title
      401224c [Takayuki Hasegawa] SPARK-4507: moved codes as before
      ce89021 [Takayuki Hasegawa] SPARK-4507: PR merge script should support closing multiple JIRA tickets
      4316a7b0
  10. Nov 27, 2014
  11. Nov 25, 2014
  12. Nov 17, 2014
    • Patrick Wendell's avatar
      SPARK-4466: Provide support for publishing Scala 2.11 artifacts to Maven · c6e0c2ab
      Patrick Wendell authored
      The Maven release plugin does not support publishing two separate sets of artifacts for a single release. Because of the way that Scala 2.11 support in Spark works, we have to write some customized code to do this. The good news is that the Maven release API is just a thin wrapper around doing git commits and pushing artifacts to the HTTP API of Apache's Sonatype server, and this might overall make our deployment easier to understand.
      
      This was already used for the 1.2 snapshot, so I think it is working well. One other nice thing is this could be pretty easily extended to publish nightly snapshots.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #3332 from pwendell/releases and squashes the following commits:
      
      2fedaed [Patrick Wendell] Automate the opening and closing of Sonatype repos
      e2a24bb [Patrick Wendell] Fixing issue where we overrode non-spark version numbers
      9df3a50 [Patrick Wendell] Adding TODO
      1cc1749 [Patrick Wendell] Don't build the thriftserver for 2.11
      933201a [Patrick Wendell] Make tagging of release commit eager
      d0388a6 [Patrick Wendell] Support Scala 2.11 build
      4f4dc62 [Patrick Wendell] Change to 2.11 should not be included when committing new patch
      bf742e1 [Patrick Wendell] Minor fixes
      ffa1df2 [Patrick Wendell] Adding a Scala 2.11 package to test it
      9ac4381 [Patrick Wendell] Addressing TODO
      b3105ff [Patrick Wendell] Removing commented out code
      d906803 [Patrick Wendell] Small fix
      3f4d985 [Patrick Wendell] More work
      fcd54c2 [Patrick Wendell] Consolidating use of keys
      df2af30 [Patrick Wendell] Changes to release stuff
      c6e0c2ab
  13. Nov 12, 2014
  14. Nov 11, 2014
    • Prashant Sharma's avatar
      Support cross building for Scala 2.11 · daaca14c
      Prashant Sharma authored
      Let's give this another go using a version of Hive that shades its JLine dependency.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #3159 from pwendell/scala-2.11-prashant and squashes the following commits:
      
      e93aa3e [Patrick Wendell] Restoring -Phive-thriftserver profile and cleaning up build script.
      f65d17d [Patrick Wendell] Fixing build issue due to merge conflict
      a8c41eb [Patrick Wendell] Reverting dev/run-tests back to master state.
      7a6eb18 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into scala-2.11-prashant
      583aa07 [Prashant Sharma] REVERT ME: removed hive thriftserver
      3680e58 [Prashant Sharma] Revert "REVERT ME: Temporarily removing some Cli tests."
      935fb47 [Prashant Sharma] Revert "Fixed by disabling a few tests temporarily."
      925e90f [Prashant Sharma] Fixed by disabling a few tests temporarily.
      2fffed3 [Prashant Sharma] Exclude groovy from sbt build, and also provide a way for such instances in future.
      8bd4e40 [Prashant Sharma] Switched to gmaven plus, it fixes random failures observed with its predecessor gmaven.
      5272ce5 [Prashant Sharma] SPARK_SCALA_VERSION related bugs.
      2121071 [Patrick Wendell] Migrating version detection to PySpark
      b1ed44d [Patrick Wendell] REVERT ME: Temporarily removing some Cli tests.
      1743a73 [Patrick Wendell] Removing decimal test that doesn't work with Scala 2.11
      f5cad4e [Patrick Wendell] Add Scala 2.11 docs
      210d7e1 [Patrick Wendell] Revert "Testing new Hive version with shaded jline"
      48518ce [Patrick Wendell] Remove association of Hive and Thriftserver profiles.
      e9d0a06 [Patrick Wendell] Revert "Enable thriftserver for Scala 2.10 only"
      67ec364 [Patrick Wendell] Guard building of thriftserver around Scala 2.10 check
      8502c23 [Patrick Wendell] Enable thriftserver for Scala 2.10 only
      e22b104 [Patrick Wendell] Small fix in pom file
      ec402ab [Patrick Wendell] Various fixes
      0be5a9d [Patrick Wendell] Testing new Hive version with shaded jline
      4eaec65 [Prashant Sharma] Changed scripts to ignore target.
      5167bea [Prashant Sharma] small correction
      a4fcac6 [Prashant Sharma] Run against scala 2.11 on jenkins.
      80285f4 [Prashant Sharma] Maven equivalent of setting spark.executor.extraClasspath during tests.
      034b369 [Prashant Sharma] Setting test jars on executor classpath during tests from sbt.
      d4874cb [Prashant Sharma] Fixed Python Runner suite. null check should be first case in scala 2.11.
      6f50f13 [Prashant Sharma] Fixed build after rebasing with master. We should use ${scala.binary.version} instead of just 2.10
      e56ca9d [Prashant Sharma] Print an error if build for 2.10 and 2.11 is spotted.
      937c0b8 [Prashant Sharma] SCALA_VERSION -> SPARK_SCALA_VERSION
      cb059b0 [Prashant Sharma] Code review
      0476e5e [Prashant Sharma] Scala 2.11 support with repl and all build changes.
      daaca14c
    • Andrew Or's avatar
      2ddb1415
  15. Nov 10, 2014
    • Cheng Lian's avatar
      [SPARK-4000][Build] Uploads HiveCompatibilitySuite logs · 534b2314
      Cheng Lian authored
      This is a follow-up to #2845. In addition to unit-tests.log files, also upload failure output files generated by `HiveCompatibilitySuite` to the Jenkins master. These files can be very helpful for debugging Hive compatibility test failures.
      
      /cc pwendell marmbrus
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #2993 from liancheng/upload-hive-compat-logs and squashes the following commits:
      
      8e6247f [Cheng Lian] Uploads HiveCompatibilitySuite logs
      534b2314
  16. Nov 04, 2014
    • Xiangrui Meng's avatar
      [SPARK-3573][MLLIB] Make MLlib's Vector compatible with SQL's SchemaRDD · 1a9c6cdd
      Xiangrui Meng authored
      Register MLlib's Vector as a SQL user-defined type (UDT) in both Scala and Python. With this PR, we can easily map an RDD[LabeledPoint] to a SchemaRDD, and then select columns or save to a Parquet file. Examples in Scala/Python are attached. The Scala code was copied from jkbradley.
      
      ~~This PR contains the changes from #3068 . I will rebase after #3068 is merged.~~
      
      marmbrus jkbradley
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #3070 from mengxr/SPARK-3573 and squashes the following commits:
      
      3a0b6e5 [Xiangrui Meng] organize imports
      236f0a0 [Xiangrui Meng] register vector as UDT and provide dataset examples
      1a9c6cdd
  17. Oct 31, 2014
    • wangfei's avatar
      [SPARK-3826][SQL]enable hive-thriftserver to support hive-0.13.1 · 7c41d135
      wangfei authored
       In #2241 hive-thriftserver is not enabled. This patch enables hive-thriftserver to support hive-0.13.1 by using a shim layer, following the approach of #2241.

       1. A light shim layer (code in sql/hive-thriftserver/hive-version) for each Hive version to handle API compatibility.

       2. New pom profiles "hive-default" and "hive-versions" (copied from #2241) to activate the different Hive versions.

       3. SBT commands for the different versions, as follows:
          hive-0.12.0 --- sbt/sbt -Phive,hadoop-2.3 -Phive-0.12.0 assembly
          hive-0.13.1 --- sbt/sbt -Phive,hadoop-2.3 -Phive-0.13.1 assembly

       4. Since hive-thriftserver depends on the hive subproject, this patch should be merged with #2241 to enable hive-0.13.1 for hive-thriftserver.
      
      Author: wangfei <wangfei1@huawei.com>
      Author: scwf <wangfei1@huawei.com>
      
      Closes #2685 from scwf/shim-thriftserver1 and squashes the following commits:
      
      f26f3be [wangfei] remove clean to save time
      f5cac74 [wangfei] remove local hivecontext test
      578234d [wangfei] use new shaded hive
      18fb1ff [wangfei] exclude kryo in hive pom
      fa21d09 [wangfei] clean package assembly/assembly
      8a4daf2 [wangfei] minor fix
      0d7f6cf [wangfei] address comments
      f7c93ae [wangfei] adding build with hive 0.13 before running tests
      bcf943f [wangfei] Merge branch 'master' of https://github.com/apache/spark into shim-thriftserver1
      c359822 [wangfei] reuse getCommandProcessor in hiveshim
      52674a4 [scwf] sql/hive included since examples depend on it
      3529e98 [scwf] move hive module to hive profile
      f51ff4e [wangfei] update and fix conflicts
      f48d3a5 [scwf] Merge branch 'master' of https://github.com/apache/spark into shim-thriftserver1
      41f727b [scwf] revert pom changes
      13afde0 [scwf] fix small bug
      4b681f4 [scwf] enable thriftserver in profile hive-0.13.1
      0bc53aa [scwf] fixed when result field is null
      dfd1c63 [scwf] update run-tests to run hive-0.12.0 default now
      c6da3ce [scwf] Merge branch 'master' of https://github.com/apache/spark into shim-thriftserver
      7c66b8e [scwf] update pom according spark-2706
      ae47489 [scwf] update and fix conflicts
      7c41d135
  18. Oct 26, 2014
    • GuoQiang Li's avatar
      [SPARK-3997][Build] scalastyle should output the error location · 89e8a5d8
      GuoQiang Li authored
      Author: GuoQiang Li <witgo@qq.com>
      
      Closes #2846 from witgo/SPARK-3997 and squashes the following commits:
      
      d6a57f8 [GuoQiang Li] scalastyle should output the error location
      89e8a5d8
    • Michael Armbrust's avatar
      [HOTFIX][SQL] Temporarily turn off hive-server tests. · 879a1658
      Michael Armbrust authored
      The thrift server is not available in the default (hive13) profile yet, which is breaking all SQL-only PRs. This turns off these tests until #2685 is merged.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2950 from marmbrus/fixTests and squashes the following commits:
      
      1a6dfee [Michael Armbrust] [HOTFIX][SQL] Temporarily turn off hive-server tests.
      879a1658
  19. Oct 24, 2014
    • Michael Armbrust's avatar
      [SQL] Update Hive test harness for Hive 12 and 13 · 3a845d3c
      Michael Armbrust authored
      As part of the upgrade I also copy the newest version of the query tests, and whitelist a bunch of new ones that are now passing.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2936 from marmbrus/fix13tests and squashes the following commits:
      
      d9cbdab [Michael Armbrust] Remove user specific tests
      65801cd [Michael Armbrust] style and rat
      8f6b09a [Michael Armbrust] Update test harness to work with both Hive 12 and 13.
      f044843 [Michael Armbrust] Update Hive query tests and golden files to 0.13
      3a845d3c
    • Zhan Zhang's avatar
      [SPARK-2706][SQL] Enable Spark to support Hive 0.13 · 7c89a8f0
      Zhan Zhang authored
      Given that a lot of users are trying to use Hive 0.13 in Spark, and given the API-level incompatibility between hive-0.12 and hive-0.13, I want to propose the following approach, which has no or minimal impact on existing hive-0.12 support but jumpstarts development of hive-0.13 and future version support.

      Approach: introduce a "hive-version" property and manipulate the pom.xml files to support different Hive versions at compile time through a shim layer, e.g., hive-0.12.0 and hive-0.13.1. More specifically:

      1. For each Hive version, there is a very light layer of shim code to handle API differences, sitting in sql/hive/hive-version, e.g., sql/hive/v0.12.0 or sql/hive/v0.13.1.

      2. A new profile, hive-default, is active by default and picks up all existing configuration and the hive-0.12.0 shim (v0.12.0) if no hive.version is specified.

      3. If the user specifies a different version (currently only 0.13.1, via -Dhive.version=0.13.1), the hive-versions profile is activated, which picks up the version-specific shim layer and configuration, mainly the Hive jars and the hive-version shim, e.g., v0.13.1.

      4. With this approach, nothing changes for current hive-0.12 support.

      No change by default: sbt/sbt -Phive
      For example: sbt/sbt -Phive -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 assembly

      To enable hive-0.13: sbt/sbt -Dhive.version=0.13.1
      For example: sbt/sbt -Dhive.version=0.13.1 -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 assembly

      Note that in hive-0.13, hive-thriftserver is not enabled; that should be fixed by another JIRA. We also don't need -Phive together with -Dhive.version when building (probably we should use -Phive -Dhive.version=xxx instead once the thrift server is also supported in hive-0.13.1).
      
      Author: Zhan Zhang <zhazhan@gmail.com>
      Author: zhzhan <zhazhan@gmail.com>
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #2241 from zhzhan/spark-2706 and squashes the following commits:
      
      3ece905 [Zhan Zhang] minor fix
      410b668 [Zhan Zhang] solve review comments
      cbb4691 [Zhan Zhang] change run-test for new options
      0d4d2ed [Zhan Zhang] rebase
      497b0f4 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      8fad1cf [Zhan Zhang] change the pom file and make hive-0.13.1 as the default
      ab028d1 [Zhan Zhang] rebase
      4a2e36d [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      4cb1b93 [zhzhan] Merge pull request #1 from pwendell/pr-2241
      b0478c0 [Patrick Wendell] Changes to simplify the build of SPARK-2706
      2b50502 [Zhan Zhang] rebase
      a72c0d4 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      cb22863 [Zhan Zhang] correct the typo
      20f6cf7 [Zhan Zhang] solve compatibility issue
      f7912a9 [Zhan Zhang] rebase and solve review feedback
      301eb4a [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      10c3565 [Zhan Zhang] address review comments
      6bc9204 [Zhan Zhang] rebase and remove temporary repo
      d3aa3f2 [Zhan Zhang] Merge branch 'master' into spark-2706
      cedcc6f [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      3ced0d7 [Zhan Zhang] rebase
      d9b981d [Zhan Zhang] rebase and fix error due to rollback
      adf4924 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      3dd50e8 [Zhan Zhang] solve conflicts and remove unnecessary implicits
      d10bf00 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      dc7bdb3 [Zhan Zhang] solve conflicts
      7e0cc36 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      d7c3e1e [Zhan Zhang] Merge branch 'master' into spark-2706
      68deb11 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      d48bd18 [Zhan Zhang] address review comments
      3ee3b2b [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      57ea52e [Zhan Zhang] Merge branch 'master' into spark-2706
      2b0d513 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      9412d24 [Zhan Zhang] address review comments
      f4af934 [Zhan Zhang] rebase
      1ccd7cc [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      128b60b [Zhan Zhang] ignore 0.12.0 test cases for the time being
      af9feb9 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      5f5619f [Zhan Zhang] restructure the directory and different hive version support
      05d3683 [Zhan Zhang] solve conflicts
      e4c1982 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      94b4fdc [Zhan Zhang] Spark-2706: hive-0.13.1 support on spark
      87ebf3b [Zhan Zhang] Merge branch 'master' into spark-2706
      921e914 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      f896b2a [Zhan Zhang] Merge branch 'master' into spark-2706
      789ea21 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      cb53a2c [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      f6a8a40 [Zhan Zhang] revert
      ba14f28 [Zhan Zhang] test
      dbedff3 [Zhan Zhang] Merge remote-tracking branch 'upstream/master'
      70964fe [Zhan Zhang] revert
      fe0f379 [Zhan Zhang] Merge branch 'master' of https://github.com/zhzhan/spark
      70ffd93 [Zhan Zhang] revert
      42585ec [Zhan Zhang] test
      7d5fce2 [Zhan Zhang] test
      7c89a8f0
    • Cheng Lian's avatar
      [SPARK-4000][BUILD] Sends archived unit tests logs to Jenkins master · a29c9bd6
      Cheng Lian authored
      This PR sends archived unit test logs to the build history directory on the Jenkins master, so that we can serve them via HTTP later to help debug Jenkins build failures.
      
      pwendell JoshRosen Please help review, thanks!
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #2845 from liancheng/log-archive and squashes the following commits:
      
      ac8d9d4 [Cheng Lian] Includes build number in messages posted to GitHub
      68c7010 [Cheng Lian] Logs backup should be implemented in dev/run-tests-jenkins
      4b912f7 [Cheng Lian] Sends archived unit tests logs to Jenkins master
      a29c9bd6
  20. Oct 08, 2014
  21. Oct 06, 2014
    • Nicholas Chammas's avatar
      [SPARK-3479] [Build] Report failed test category · 69c3f441
      Nicholas Chammas authored
      This PR allows SparkQA (i.e. Jenkins) to report in its posts to GitHub what category of test failed, if one can be determined.
      
      The failure categories are:
      * general failure
      * RAT checks failed
      * Scala style checks failed
      * Python style checks failed
      * Build failed
      * Spark unit tests failed
      * PySpark unit tests failed
      * MiMa checks failed
      
      This PR also fixes the diffing logic used to determine if a patch introduces new classes.
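      For the categorization itself, a minimal sketch of the idea (step identifiers and message wording are illustrative, not the exact strings the Jenkins script uses):

      ```python
      # Hypothetical mapping from a failed test step to the phrase SparkQA posts.
      FAILURE_CATEGORIES = {
          "rat": "RAT tests",
          "scalastyle": "Scala style tests",
          "pep8": "Python style tests",
          "build": "to build",
          "spark-unit": "Spark unit tests",
          "pyspark-unit": "PySpark unit tests",
          "mima": "MiMa tests",
      }

      def failure_message(failed_step=None):
          """Build the GitHub comment fragment for a failed Jenkins run."""
          category = FAILURE_CATEGORIES.get(failed_step)
          if category is None:
              return "This patch fails some tests."  # general failure
          return "This patch fails %s." % category
      ```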
      
      Author: Nicholas Chammas <nicholas.chammas@gmail.com>
      
      Closes #2606 from nchammas/report-failed-test-category and squashes the following commits:
      
      d67df03 [Nicholas Chammas] report what test category failed
      69c3f441
  22. Oct 05, 2014
    • Patrick Wendell's avatar
      HOTFIX: Fix unicode error in merge script. · e222221e
      Patrick Wendell authored
      The merge script builds up a big command array that sometimes contains
      both unicode and ascii strings. This doesn't work if you try to join
      them into a single string. A longer-term solution is to make sure the
      source of all strings is unicode.

      This patch provides a simpler solution: just print the array rather
      than joining it. I actually prefer printing an array here anyway,
      since joining on spaces is lossy for arguments that themselves
      contain spaces.
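      A minimal sketch of the approach (the real merge script is Python 2 and structured differently):

      ```python
      import subprocess

      def run_cmd(cmd):
          """Print the command as a list rather than ' '.join(cmd): joining can fail
          when the list mixes byte and unicode strings, and it is lossy for
          arguments that themselves contain spaces."""
          print(cmd)
          return subprocess.check_output(cmd)
      ```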
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #2645 from pwendell/merge-script and squashes the following commits:
      
      167b792 [Patrick Wendell] HOTFIX: Fix unicode error in merge script.
      e222221e
  23. Sep 30, 2014
  24. Sep 24, 2014
    • Nicholas Chammas's avatar
      [Build] Diff from branch point · c4291260
      Nicholas Chammas authored
      Sometimes Jenkins posts [spurious reports of new classes being added](https://github.com/apache/spark/pull/2339#issuecomment-56570170). I believe this stems from diffing the patch against `master`, as opposed to against `master...`, which starts from the commit the PR was branched from.
      
      This patch fixes that behavior.
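      In other words, diff against the merge base rather than the tip of master. A small sketch of that check (the real logic lives in the Jenkins shell scripts; this is just the triple-dot idea expressed in Python):

      ```python
      import subprocess

      def files_changed_since_branch_point(base="master"):
          """Use the triple-dot form, which diffs against the merge base, so commits
          that landed on master after the PR branched off are not counted."""
          out = subprocess.check_output(["git", "diff", "--name-only", base + "...HEAD"])
          return out.decode().splitlines()
      ```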
      
      Author: Nicholas Chammas <nicholas.chammas@gmail.com>
      
      Closes #2512 from nchammas/diff-only-commits-ahead and squashes the following commits:
      
      c065599 [Nicholas Chammas] comment typo fix
      a453c67 [Nicholas Chammas] diff from branch point
      c4291260
  25. Sep 19, 2014
  26. Sep 18, 2014
  27. Sep 17, 2014
    • Nicholas Chammas's avatar
      [SPARK-3534] Fix expansion of testing arguments to sbt · 7fc3bb7c
      Nicholas Chammas authored
      Testing arguments to `sbt` need to be passed as an array, not a single, long string.
      
      Fixes a bug introduced in #2420.
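      The affected script (dev/run-tests) is bash; the same idea expressed in Python terms, with illustrative module names:

      ```python
      import subprocess

      sbt_test_args = ["catalyst/test", "sql/test", "hive/test"]  # illustrative

      # Pass the arguments as separate elements; handing sbt one long space-joined
      # string makes it treat everything as a single command.
      subprocess.check_call(["sbt"] + sbt_test_args)
      ```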
      
      Author: Nicholas Chammas <nicholas.chammas@gmail.com>
      
      Closes #2437 from nchammas/selective-testing and squashes the following commits:
      
      a9f9c1c [Nicholas Chammas] fix printing of sbt test arguments
      cf57cbf [Nicholas Chammas] fix sbt test arguments
      e33b978 [Nicholas Chammas] Merge pull request #2 from apache/master
      0b47ca4 [Nicholas Chammas] Merge branch 'master' of github.com:nchammas/spark
      8051486 [Nicholas Chammas] Merge pull request #1 from apache/master
      03180a4 [Nicholas Chammas] Merge branch 'master' of github.com:nchammas/spark
      d4c5f43 [Nicholas Chammas] Merge pull request #6 from apache/master
      7fc3bb7c
    • Nicholas Chammas's avatar
      [SPARK-1455] [SPARK-3534] [Build] When possible, run SQL tests only. · 5044e495
      Nicholas Chammas authored
      If the only files changed are related to SQL, then only run the SQL tests.
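      Roughly, the check boils down to something like the following sketch (directory prefixes and module names are approximations, not the script's exact lists):

      ```python
      SQL_PREFIXES = ("sql/", "bin/spark-sql", "sbin/start-thriftserver.sh")

      def changed_only_sql(changed_files):
          """True when every changed file is SQL-related."""
          return all(f.startswith(SQL_PREFIXES) for f in changed_files)

      def sbt_test_targets(changed_files):
          # Run the focused SQL suites when possible, otherwise the full test suite.
          if changed_files and changed_only_sql(changed_files):
              return ["catalyst/test", "sql/test", "hive/test"]
          return ["test"]
      ```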
      
      This patch includes some cosmetic/maintainability refactoring. I would be more than happy to undo some of these changes if they are inappropriate.
      
      We can accept this patch mostly as-is and address the immediate need documented in [SPARK-3534](https://issues.apache.org/jira/browse/SPARK-3534), or we can keep it open until a satisfactory solution along the lines [discussed here](https://issues.apache.org/jira/browse/SPARK-1455?focusedCommentId=14136424&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14136424) is reached.
      
      Note: I had to hack this patch up to test it locally, so what I'm submitting here and what I tested are technically different.
      
      Author: Nicholas Chammas <nicholas.chammas@gmail.com>
      
      Closes #2420 from nchammas/selective-testing and squashes the following commits:
      
      db3fa2d [Nicholas Chammas] diff against master!
      f9e23f6 [Nicholas Chammas] when possible, run SQL tests only
      5044e495
  28. Sep 15, 2014
    • Prashant Sharma's avatar
      [SPARK-3433][BUILD] Fix for Mima false-positives with @DeveloperAPI and @Experimental annotations. · ecf0c029
      Prashant Sharma authored
      The false positives reported were actually due to the MiMa generator not picking up the new jars in the presence of old jars (theoretically this should not have happened). As a workaround, we run the two separately and just append the results together.
      
      Author: Prashant Sharma <prashant@apache.org>
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #2285 from ScrapCodes/mima-fix and squashes the following commits:
      
      093c76f [Prashant Sharma] Update mima
      59012a8 [Prashant Sharma] Update mima
      35b6c71 [Prashant Sharma] SPARK-3433 Fix for Mima false-positives with @DeveloperAPI and @Experimental annotations.
      ecf0c029
    • Matthew Farrellee's avatar
      [SPARK-3425] do not set MaxPermSize for OpenJDK 1.8 · fe2b1d6a
      Matthew Farrellee authored
      Closes #2387
      
      Author: Matthew Farrellee <matt@redhat.com>
      
      Closes #2301 from mattf/SPARK-3425 and squashes the following commits:
      
      20f3c09 [Matthew Farrellee] [SPARK-3425] do not set MaxPermSize for OpenJDK 1.8
      fe2b1d6a
  29. Sep 09, 2014
  30. Sep 08, 2014
    • Prashant Sharma's avatar
      SPARK-3337 Paranoid quoting in shell to allow install dirs with spaces within. · e16a8e7d
      Prashant Sharma authored
      ...
      
      Tested! TBH, it isn't a great idea to have a directory with spaces in its path, because Emacs doesn't like it, then Hadoop doesn't like it, and so on...
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #2229 from ScrapCodes/SPARK-3337/quoting-shell-scripts and squashes the following commits:
      
      d4ad660 [Prashant Sharma] SPARK-3337 Paranoid quoting in shell to allow install dirs with spaces within.
      e16a8e7d