  1. Jun 07, 2015
    • [SPARK-7733] [CORE] [BUILD] Update build, code to use Java 7 for 1.5.0+ · e84815dc
      Sean Owen authored
      Update build to use Java 7, and remove some comments and special-case support for Java 6.
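A quick sanity check of the toolchain this change assumes — Spark 1.5.0+ requiring Java 7 or newer — is just to inspect the JDK versions visible to the shell and to Maven (a sketch; output format varies by vendor):

```shell
# Verify the JDK in use; Spark 1.5.0+ requires Java 7 or newer.
java -version
# Maven reports both its own version and the JDK it will compile with.
mvn -version
```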
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #6265 from srowen/SPARK-7733 and squashes the following commits:
      
      59bda4e [Sean Owen] Update build to use Java 7, and remove some comments and special-case support for Java 6
      e84815dc
  2. May 30, 2015
    • [DOCS] [MINOR] Update for the Hadoop versions table with hadoop-2.6 · 3ab71eb9
      Taka Shinagawa authored
      Updated the doc for the hadoop-2.6 profile, which is new to Spark 1.4
      
      Author: Taka Shinagawa <taka.epsilon@gmail.com>
      
      Closes #6450 from mrt/docfix2 and squashes the following commits:
      
      db1c43b [Taka Shinagawa] Updated the hadoop versions for hadoop-2.6 profile
      323710e [Taka Shinagawa] The hadoop-2.6 profile is added to the Hadoop versions table
      3ab71eb9
    • [SPARK-7890] [DOCS] Document that Scala 2.11 now supports Kafka · 8c8de3ed
      Sean Owen authored
      Remove caveat about Kafka / JDBC not being supported for Scala 2.11
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #6470 from srowen/SPARK-7890 and squashes the following commits:
      
      4652634 [Sean Owen] One more rewording
      7b7f3c8 [Sean Owen] Restore note about JDBC component
      126744d [Sean Owen] Remove caveat about Kafka / JDBC not being supported for Scala 2.11
      8c8de3ed
  3. May 28, 2015
    • [DOCS] Fixing broken "IDE setup" link in the Building Spark documentation. · 3e312a5e
      Mike Dusenberry authored
      The location of the IDE setup information has changed, so this just updates the link on the Building Spark page.
      
      Author: Mike Dusenberry <dusenberrymw@gmail.com>
      
      Closes #6467 from dusenberrymw/Fix_Broken_Link_On_Building_Spark_Doc and squashes the following commits:
      
      75c533a [Mike Dusenberry] Fixing broken "IDE setup" link in the Building Spark documentation by pointing to new location.
      3e312a5e
  4. May 27, 2015
    • [SPARK-7850][BUILD] Hive 0.12.0 profile in POM should be removed · 6dd64587
      Cheolsoo Park authored
      I grep'ed hive-0.12.0 in the source code and removed all the profiles and doc references.
      
      Author: Cheolsoo Park <cheolsoop@netflix.com>
      
      Closes #6393 from piaozhexiu/SPARK-7850 and squashes the following commits:
      
      fb429ce [Cheolsoo Park] Remove hive-0.13.1 profile
      82bf09a [Cheolsoo Park] Remove hive 0.12.0 shim code
      f3722da [Cheolsoo Park] Remove hive-0.12.0 profile and references from POM and build docs
      6dd64587
  5. May 16, 2015
    • [SPARK-4556] [BUILD] binary distribution assembly can't run in local mode · 1fd33815
      Sean Owen authored
      Add note on building a runnable distribution with make-distribution.sh
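The note this commit adds concerns producing a runnable binary distribution; a typical invocation of that era looked roughly like the following (the name and profile flags are illustrative — the script's --help flag lists the real options):

```shell
# Build a runnable, distributable Spark tarball from the source root.
# Profile/version flags are examples; see ./make-distribution.sh --help.
./make-distribution.sh --name custom-spark --tgz -Phadoop-2.4 -Pyarn
```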
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #6186 from srowen/SPARK-4556 and squashes the following commits:
      
      4002966 [Sean Owen] Add pointer to --help flag
      9fa7883 [Sean Owen] Add note on building a runnable distribution with make-distribution.sh
      1fd33815
  6. May 14, 2015
    • [SPARK-7249] Updated Hadoop dependencies due to inconsistency in the versions · 7fb715de
      FavioVazquez authored
      Updated Hadoop dependencies due to inconsistency in the versions. Now the global properties are the ones used by the hadoop-2.2 profile, and the profile was set to empty but kept for backwards compatibility reasons.
      
      Changes proposed by vanzin resulting from the previous pull request https://github.com/apache/spark/pull/5783, which did not fix the problem correctly.
      
      Please let me know if this is the correct way of doing this; vanzin's comments are in the pull request mentioned above.
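With hadoop-2.2 becoming the default set of properties as described above, building against a different Hadoop release means activating a profile plus pinning hadoop.version; a hedged example (version numbers illustrative):

```shell
# hadoop-2.2 is now the default; select another Hadoop line by
# enabling its profile and setting hadoop.version explicitly.
mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
```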
      
      Author: FavioVazquez <favio.vazquezp@gmail.com>
      
      Closes #5786 from FavioVazquez/update-hadoop-dependencies and squashes the following commits:
      
      11670e5 [FavioVazquez] - Added missing instance of -Phadoop-2.2 in create-release.sh
      379f50d [FavioVazquez] - Added instances of -Phadoop-2.2 in create-release.sh, run-tests, scalastyle and building-spark.md - Reconstructed docs to not ask users to rely on default behavior
      3f9249d [FavioVazquez] Merge branch 'master' of https://github.com/apache/spark into update-hadoop-dependencies
      31bdafa [FavioVazquez] - Added missing instances in -Phadoop-1 in create-release.sh, run-tests and in the building-spark documentation
      cbb93e8 [FavioVazquez] - Added comment related to SPARK-3710 about  hadoop-yarn-server-tests in Hadoop 2.2 that fails to pull some needed dependencies
      83dc332 [FavioVazquez] - Cleaned up the main POM concerning the yarn profile - Erased hadoop-2.2 profile from yarn/pom.xml and its content was integrated into yarn/pom.xml
      93f7624 [FavioVazquez] - Deleted unnecessary comments and <activation> tag on the YARN profile in the main POM
      668d126 [FavioVazquez] - Moved <dependencies> <activation> and <properties> sections of the hadoop-2.2 profile in the YARN POM to the YARN profile in the root POM - Erased unnecessary hadoop-2.2 profile from the YARN POM
      fda6a51 [FavioVazquez] - Updated hadoop1 releases in create-release.sh  due to changes in the default hadoop version set - Erased unnecessary instance of -Dyarn.version=2.2.0 in create-release.sh - Prettify comment in yarn/pom.xml
      0470587 [FavioVazquez] - Erased unnecessary instance of -Phadoop-2.2 -Dhadoop.version=2.2.0 in create-release.sh - Updated how the releases are made in the create-release.sh no that the default hadoop version is the 2.2.0 - Erased unnecessary instance of -Phadoop-2.2 -Dhadoop.version=2.2.0 in scalastyle - Erased unnecessary instance of -Phadoop-2.2 -Dhadoop.version=2.2.0 in run-tests - Better example given in the hadoop-third-party-distributions.md now that the default hadoop version is 2.2.0
      a650779 [FavioVazquez] - Default value of avro.mapred.classifier has been set to hadoop2 in pom.xml - Cleaned up hadoop-2.3 and 2.4 profiles due to change in the default set in avro.mapred.classifier in pom.xml
      199f40b [FavioVazquez] - Erased unnecessary CDH5-specific note in docs/building-spark.md - Remove example of instance -Phadoop-2.2 -Dhadoop.version=2.2.0 in docs/building-spark.md - Enabled hadoop-2.2 profile when the Hadoop version is 2.2.0, which is now the default .Added comment in the yarn/pom.xml to specify that.
      88a8b88 [FavioVazquez] - Simplified Hadoop profiles due to new setting of global properties in the pom.xml file - Added comment to specify that the hadoop-2.2 profile is now the default hadoop profile in the pom.xml file - Erased hadoop-2.2 from related hadoop profiles now that is a no-op in the make-distribution.sh file
      70b8344 [FavioVazquez] - Fixed typo in the make-distribution.sh file and added hadoop-1 in the Related profiles
      287fa2f [FavioVazquez] - Updated documentation about specifying the hadoop version in building-spark. Now is clear that Spark will build against Hadoop 2.2.0 by default. - Added Cloudera CDH 5.3.3 without MapReduce example in the building-spark doc.
      1354292 [FavioVazquez] - Fixed hadoop-1 version to match jenkins build profile in hadoop1.0 tests and documentation
      6b4bfaf [FavioVazquez] - Cleanup in hadoop-2.x profiles since they contained mostly redundant stuff.
      7e9955d [FavioVazquez] - Updated Hadoop dependencies due to inconsistency in the versions. Now the global properties are the ones used by the hadoop-2.2 profile, and the profile was set to empty but kept for backwards compatibility reasons
      660decc [FavioVazquez] - Updated Hadoop dependencies due to inconsistency in the versions. Now the global properties are the ones used by the hadoop-2.2 profile, and the profile was set to empty but kept for backwards compatibility reasons
      ec91ce3 [FavioVazquez] - Updated protobuf-java version of com.google.protobuf dependancy to fix blocking error when connecting to HDFS via the Hadoop Cloudera HDFS CDH5 (fix for 2.5.0-cdh5.3.3 version)
      7fb715de
  7. May 03, 2015
    • [SPARK-7302] [DOCS] SPARK building documentation still mentions building for yarn 0.23 · 9e25b09f
      Sean Owen authored
      Remove references to Hadoop 0.23
      
      CC tgravescs Is this what you had in mind? Basically all refs to 0.23?
      We don't support YARN 0.23, and AFAICT we no longer support Hadoop 0.23 either. There are no builds or releases for it.
      
      In fact, on a related note, refs to CDH3 (Hadoop 0.20.2) should be removed as well, since that certainly isn't supported either.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #5863 from srowen/SPARK-7302 and squashes the following commits:
      
      42f5d1e [Sean Owen] Remove CDH3 (Hadoop 0.20.2) refs too
      dad02e3 [Sean Owen] Remove references to Hadoop 0.23
      9e25b09f
  8. Mar 17, 2015
  9. Mar 03, 2015
  10. Feb 16, 2015
  11. Feb 12, 2015
    • SPARK-5727 [BUILD] Remove Debian packaging · 9a3ea49f
      Sean Owen authored
      (for master / 1.4 only)
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #4526 from srowen/SPARK-5727.2 and squashes the following commits:
      
      83ba49c [Sean Owen] Remove Debian packaging
      9a3ea49f
  12. Feb 11, 2015
    • SPARK-5727 [BUILD] Deprecate Debian packaging · bd0d6e0c
      Sean Owen authored
      This just adds a deprecation message. It's intended for backporting to branch 1.3 but can go in master too, to be followed by another PR that removes it for 1.4.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #4516 from srowen/SPARK-5727.1 and squashes the following commits:
      
      d48989f [Sean Owen] Refer to Spark 1.4
      6c1c8b3 [Sean Owen] Deprecate Debian packaging
      bd0d6e0c
  13. Feb 02, 2015
  14. Jan 09, 2015
    • SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project · 547df977
      Sean Owen authored
      This PR simply points to the IntelliJ wiki page instead of also including IntelliJ notes in the docs. The intent however is to also update the wiki page with updated tips. This is the text I propose for the IntelliJ section on the wiki. I realize it omits some of the existing instructions on the wiki, about enabling Hive, but I think those are actually optional.
      
      ------
      
      IntelliJ supports both Maven- and SBT-based projects. It is recommended, however, to import Spark as a Maven project. Choose "Import Project..." from the File menu, and select the `pom.xml` file in the Spark root directory.
      
      It is fine to leave all settings at their default values in the Maven import wizard, with two caveats. First, it is usually useful to enable "Import Maven projects automatically", since changes to the project structure will automatically update the IntelliJ project.
      
      Second, note the step that prompts you to choose active Maven build profiles. As documented above, some build configurations require specific profiles to be enabled. The same profiles that are enabled with `-P[profile name]` above may be enabled on this screen. For example, if developing for Hadoop 2.4 with YARN support, enable profiles `yarn` and `hadoop-2.4`.
      
      These selections can be changed later by accessing the "Maven Projects" tool window from the View menu, and expanding the Profiles section.
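The profiles ticked in IntelliJ's import wizard correspond directly to Maven flags on the command line; the equivalent build for the Hadoop 2.4 / YARN example above would be something like:

```shell
# Command-line equivalent of enabling the yarn and hadoop-2.4
# profiles in IntelliJ's Maven import wizard.
mvn -Pyarn -Phadoop-2.4 -DskipTests clean package
```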
      
      "Rebuild Project" can fail the first time the project is compiled, because generated source files are not created automatically. Try clicking the "Generate Sources and Update Folders For All Projects" button in the "Maven Projects" tool window to manually generate these sources.
      
      Compilation may fail with an error like "scalac: bad option: -P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar". If so, go to Preferences > Build, Execution, Deployment > Scala Compiler and clear the "Additional compiler options" field. Compilation will then work, although the option will return when the project is reimported.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #3952 from srowen/SPARK-5136 and squashes the following commits:
      
      f3baa66 [Sean Owen] Point to new IJ / Eclipse wiki link
      016b7df [Sean Owen] Point to IntelliJ wiki page instead of also including IntelliJ notes in the docs
      547df977
  15. Dec 27, 2014
    • [SPARK-4501][Core] - Create build/mvn to automatically download maven/zinc/scalac · a3e51cc9
      Brennon York authored
      Creates a top-level script (as `build/mvn`) to automatically download Zinc and the specific version of Scala used to build Spark. It will also download and install Maven if the user doesn't already have it; all packages are hosted under the `build/` directory. Tested on Linux and OS X, and both work. All arguments pass through to the Maven binary, so it acts exactly as a traditional Maven call would.
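Since all arguments pass through to Maven, usage is a drop-in replacement for a plain mvn invocation; a sketch (flags illustrative):

```shell
# build/mvn fetches Maven, Zinc, and the right Scala compiler into
# build/ on first use, then forwards all arguments to Maven.
./build/mvn -DskipTests clean package
```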
      
      Author: Brennon York <brennon.york@capitalone.com>
      
      Closes #3707 from brennonyork/SPARK-4501 and squashes the following commits:
      
      0e5a0e4 [Brennon York] minor incorrect doc verbage (with -> this)
      9b79e38 [Brennon York] fixed merge conflicts with dev/run-tests, properly quoted args in sbt/sbt, fixed bug where relative paths would fail if passed in from build/mvn
      d2d41b6 [Brennon York] added blurb about leverging zinc with build/mvn
      b979c58 [Brennon York] updated the merge conflict
      c5634de [Brennon York] updated documentation to overview build/mvn, updated all points where sbt/sbt was referenced with build/sbt
      b8437ba [Brennon York] set progress bars for curl and wget when not run on jenkins, no progress bar when run on jenkins, moved sbt script to build/sbt, wrote stub and warning under sbt/sbt which calls build/sbt, modified build/sbt to use the correct directory, fixed bug in build/sbt-launch-lib.bash to correctly pull the sbt version
      be11317 [Brennon York] added switch to silence download progress only if AMPLAB_JENKINS is set
      28d0a99 [Brennon York] updated to remove the python dependency, uses grep instead
      7e785a6 [Brennon York] added silent and quiet flags to curl and wget respectively, added single echo output to denote start of a download if download is needed
      14a5da0 [Brennon York] removed unnecessary zinc output on startup
      1af4a94 [Brennon York] fixed bug with uppercase vs lowercase variable
      3e8b9b3 [Brennon York] updated to properly only restart zinc if it was freshly installed
      a680d12 [Brennon York] Added comments to functions and tested various mvn calls
      bb8cc9d [Brennon York] removed package files
      ef017e6 [Brennon York] removed OS complexities, setup generic install_app call, removed extra file complexities, removed help, removed forced install (defaults now), removed double-dash from cli
      07bf018 [Brennon York] Updated to specifically handle pulling down the correct scala version
      f914dea [Brennon York] Beginning final portions of localized scala home
      69c4e44 [Brennon York] working linux and osx installers for purely local mvn build
      4a1609c [Brennon York] finalizing working linux install for maven to local ./build/apache-maven folder
      cbfcc68 [Brennon York] Changed the default sbt/sbt to build/sbt and added a build/mvn which will automatically download, install, and execute maven with zinc for easier build capability
      a3e51cc9
  16. Dec 25, 2014
    • [SPARK-4953][Doc] Fix the description of building Spark with YARN · 11dd9931
      Kousuke Saruta authored
      In the "Specifying the Hadoop Version" section of building-spark.md, there is a description of building with YARN against Hadoop 0.23.
      Spark 1.3.0 will not support Hadoop 0.23, so we should fix the description.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #3787 from sarutak/SPARK-4953 and squashes the following commits:
      
      ee9c355 [Kousuke Saruta] Removed description related to a specific vendor
      9ab0c24 [Kousuke Saruta] Fix the description about building SPARK with YARN
      11dd9931
  17. Dec 15, 2014
    • [SPARK-4668] Fix some documentation typos. · 8176b7a0
      Ryan Williams authored
      Author: Ryan Williams <ryan.blake.williams@gmail.com>
      
      Closes #3523 from ryan-williams/tweaks and squashes the following commits:
      
      d2eddaa [Ryan Williams] code review feedback
      ce27fc1 [Ryan Williams] CoGroupedRDD comment nit
      c6cfad9 [Ryan Williams] remove unnecessary if statement
      b74ea35 [Ryan Williams] comment fix
      b0221f0 [Ryan Williams] fix a gendered pronoun
      c71ffed [Ryan Williams] use names on a few boolean parameters
      89954aa [Ryan Williams] clarify some comments in {Security,Shuffle}Manager
      e465dac [Ryan Williams] Saved building-spark.md with Dillinger.io
      83e8358 [Ryan Williams] fix pom.xml typo
      dc4662b [Ryan Williams] typo fixes in tuning.md, configuration.md
      8176b7a0
  18. Dec 09, 2014
    • SPARK-4338. [YARN] Ditch yarn-alpha. · 912563aa
      Sandy Ryza authored
      Sorry if this is a little premature with 1.2 still not out the door, but it will make other work like SPARK-4136 and SPARK-2089 a lot easier.
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #3215 from sryza/sandy-spark-4338 and squashes the following commits:
      
      1c5ac08 [Sandy Ryza] Update building Spark docs and remove unnecessary newline
      9c1421c [Sandy Ryza] SPARK-4338. Ditch yarn-alpha.
      912563aa
  19. Nov 29, 2014
    • [DOCS][BUILD] Add instruction to use change-version-to-2.11.sh in 'Building for Scala 2.11'. · 0fcd24cc
      Takuya UESHIN authored
      To build with Scala 2.11, we have to execute `change-version-to-2.11.sh` before running Maven; otherwise inter-module dependencies are broken.
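The two-step sequence the instruction describes looked roughly like the following, assuming the script lives under dev/ as in contemporary Spark releases (profile flags illustrative):

```shell
# Rewrite module versions for Scala 2.11 first, then run Maven;
# running Maven without the script breaks inter-module dependencies.
dev/change-version-to-2.11.sh
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
```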
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #3361 from ueshin/docs/building-spark_2.11 and squashes the following commits:
      
      1d29126 [Takuya UESHIN] Add instruction to use change-version-to-2.11.sh in 'Building for Scala 2.11'.
      0fcd24cc
  20. Nov 25, 2014
  21. Nov 24, 2014
  22. Nov 14, 2014
    • SPARK-4375. no longer require -Pscala-2.10 · f5f757e4
      Sandy Ryza authored
      It seems like the winds might have moved away from this approach, but I wanted to post the PR anyway because I got it working and wanted to show what it would look like.
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #3239 from sryza/sandy-spark-4375 and squashes the following commits:
      
      0ffbe95 [Sandy Ryza] Enable -Dscala-2.11 in sbt
      cd42d94 [Sandy Ryza] Update doc
      f6644c3 [Sandy Ryza] SPARK-4375 take 2
      f5f757e4
  23. Nov 11, 2014
    • Support cross building for Scala 2.11 · daaca14c
      Prashant Sharma authored
      Let's give this another go using a version of Hive that shades its JLine dependency.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #3159 from pwendell/scala-2.11-prashant and squashes the following commits:
      
      e93aa3e [Patrick Wendell] Restoring -Phive-thriftserver profile and cleaning up build script.
      f65d17d [Patrick Wendell] Fixing build issue due to merge conflict
      a8c41eb [Patrick Wendell] Reverting dev/run-tests back to master state.
      7a6eb18 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into scala-2.11-prashant
      583aa07 [Prashant Sharma] REVERT ME: removed hive thirftserver
      3680e58 [Prashant Sharma] Revert "REVERT ME: Temporarily removing some Cli tests."
      935fb47 [Prashant Sharma] Revert "Fixed by disabling a few tests temporarily."
      925e90f [Prashant Sharma] Fixed by disabling a few tests temporarily.
      2fffed3 [Prashant Sharma] Exclude groovy from sbt build, and also provide a way for such instances in future.
      8bd4e40 [Prashant Sharma] Switched to gmaven plus, it fixes random failures observer with its predecessor gmaven.
      5272ce5 [Prashant Sharma] SPARK_SCALA_VERSION related bugs.
      2121071 [Patrick Wendell] Migrating version detection to PySpark
      b1ed44d [Patrick Wendell] REVERT ME: Temporarily removing some Cli tests.
      1743a73 [Patrick Wendell] Removing decimal test that doesn't work with Scala 2.11
      f5cad4e [Patrick Wendell] Add Scala 2.11 docs
      210d7e1 [Patrick Wendell] Revert "Testing new Hive version with shaded jline"
      48518ce [Patrick Wendell] Remove association of Hive and Thriftserver profiles.
      e9d0a06 [Patrick Wendell] Revert "Enable thritfserver for Scala 2.10 only"
      67ec364 [Patrick Wendell] Guard building of thriftserver around Scala 2.10 check
      8502c23 [Patrick Wendell] Enable thritfserver for Scala 2.10 only
      e22b104 [Patrick Wendell] Small fix in pom file
      ec402ab [Patrick Wendell] Various fixes
      0be5a9d [Patrick Wendell] Testing new Hive version with shaded jline
      4eaec65 [Prashant Sharma] Changed scripts to ignore target.
      5167bea [Prashant Sharma] small correction
      a4fcac6 [Prashant Sharma] Run against scala 2.11 on jenkins.
      80285f4 [Prashant Sharma] MAven equivalent of setting spark.executor.extraClasspath during tests.
      034b369 [Prashant Sharma] Setting test jars on executor classpath during tests from sbt.
      d4874cb [Prashant Sharma] Fixed Python Runner suite. null check should be first case in scala 2.11.
      6f50f13 [Prashant Sharma] Fixed build after rebasing with master. We should use ${scala.binary.version} instead of just 2.10
      e56ca9d [Prashant Sharma] Print an error if build for 2.10 and 2.11 is spotted.
      937c0b8 [Prashant Sharma] SCALA_VERSION -> SPARK_SCALA_VERSION
      cb059b0 [Prashant Sharma] Code review
      0476e5e [Prashant Sharma] Scala 2.11 support with repl and all build changes.
      daaca14c
  24. Nov 03, 2014
  25. Oct 27, 2014
    • [SPARK-4032] Deprecate YARN alpha support in Spark 1.2 · c9e05ca2
      Prashant Sharma authored
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #2878 from ScrapCodes/SPARK-4032/deprecate-yarn-alpha and squashes the following commits:
      
      17e9857 [Prashant Sharma] added deperecated comment to Client and ExecutorRunnable.
      3a34b1e [Prashant Sharma] Updated docs...
      4608dea [Prashant Sharma] [SPARK-4032] Deprecate YARN alpha support in Spark 1.2
      c9e05ca2
  26. Oct 24, 2014
    • [SPARK-2706][SQL] Enable Spark to support Hive 0.13 · 7c89a8f0
      Zhan Zhang authored
      Given that many users are trying to use Hive 0.13 in Spark, and given the API-level incompatibility between hive-0.12 and hive-0.13, I want to propose the following approach, which has little or no impact on existing hive-0.12 support but makes it possible to jumpstart development of hive-0.13 and future version support.
      
      Approach: introduce a "hive-version" property, and manipulate the pom.xml files to support different Hive versions at compile time through a shim layer, e.g., hive-0.12.0 and hive-0.13.1. More specifically,
      
      1. For each different hive version, there is a very light layer of shim code to handle API differences, sitting in sql/hive/hive-version, e.g., sql/hive/v0.12.0 or sql/hive/v0.13.1
      
      2. Add a new profile, hive-default, active by default, which picks up all existing configuration and the hive-0.12.0 shim (v0.12.0) if no hive.version is specified.
      
      3. If the user specifies a different version (currently only 0.13.1, via -Dhive.version=0.13.1), the hive-versions profile will be activated, which picks up the hive-version-specific shim layer and configuration, mainly the Hive jars and the hive-version shim, e.g., v0.13.1.
      
      4. With this approach, nothing is changed with current hive-0.12 support.
      
      No change by default: sbt/sbt -Phive
      For example: sbt/sbt -Phive -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 assembly
      
      To enable hive-0.13: sbt/sbt -Dhive.version=0.13.1
      For example: sbt/sbt -Dhive.version=0.13.1 -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 assembly
      
      Note that in hive-0.13 the hive-thriftserver is not enabled, which should be fixed in another JIRA, and we don't need -Phive together with -Dhive.version when building (we should probably use -Phive -Dhive.version=xxx instead once the Thrift server is also supported in hive-0.13.1).
      
      Author: Zhan Zhang <zhazhan@gmail.com>
      Author: zhzhan <zhazhan@gmail.com>
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #2241 from zhzhan/spark-2706 and squashes the following commits:
      
      3ece905 [Zhan Zhang] minor fix
      410b668 [Zhan Zhang] solve review comments
      cbb4691 [Zhan Zhang] change run-test for new options
      0d4d2ed [Zhan Zhang] rebase
      497b0f4 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      8fad1cf [Zhan Zhang] change the pom file and make hive-0.13.1 as the default
      ab028d1 [Zhan Zhang] rebase
      4a2e36d [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      4cb1b93 [zhzhan] Merge pull request #1 from pwendell/pr-2241
      b0478c0 [Patrick Wendell] Changes to simplify the build of SPARK-2706
      2b50502 [Zhan Zhang] rebase
      a72c0d4 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      cb22863 [Zhan Zhang] correct the typo
      20f6cf7 [Zhan Zhang] solve compatability issue
      f7912a9 [Zhan Zhang] rebase and solve review feedback
      301eb4a [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      10c3565 [Zhan Zhang] address review comments
      6bc9204 [Zhan Zhang] rebase and remove temparory repo
      d3aa3f2 [Zhan Zhang] Merge branch 'master' into spark-2706
      cedcc6f [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      3ced0d7 [Zhan Zhang] rebase
      d9b981d [Zhan Zhang] rebase and fix error due to rollback
      adf4924 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      3dd50e8 [Zhan Zhang] solve conflicts and remove unnecessary implicts
      d10bf00 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      dc7bdb3 [Zhan Zhang] solve conflicts
      7e0cc36 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      d7c3e1e [Zhan Zhang] Merge branch 'master' into spark-2706
      68deb11 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      d48bd18 [Zhan Zhang] address review comments
      3ee3b2b [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      57ea52e [Zhan Zhang] Merge branch 'master' into spark-2706
      2b0d513 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      9412d24 [Zhan Zhang] address review comments
      f4af934 [Zhan Zhang] rebase
      1ccd7cc [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      128b60b [Zhan Zhang] ignore 0.12.0 test cases for the time being
      af9feb9 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      5f5619f [Zhan Zhang] restructure the directory and different hive version support
      05d3683 [Zhan Zhang] solve conflicts
      e4c1982 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      94b4fdc [Zhan Zhang] Spark-2706: hive-0.13.1 support on spark
      87ebf3b [Zhan Zhang] Merge branch 'master' into spark-2706
      921e914 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      f896b2a [Zhan Zhang] Merge branch 'master' into spark-2706
      789ea21 [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      cb53a2c [Zhan Zhang] Merge branch 'master' of https://github.com/apache/spark
      f6a8a40 [Zhan Zhang] revert
      ba14f28 [Zhan Zhang] test
      dbedff3 [Zhan Zhang] Merge remote-tracking branch 'upstream/master'
      70964fe [Zhan Zhang] revert
      fe0f379 [Zhan Zhang] Merge branch 'master' of https://github.com/zhzhan/spark
      70ffd93 [Zhan Zhang] revert
      42585ec [Zhan Zhang] test
      7d5fce2 [Zhan Zhang] test
      7c89a8f0
  27. Oct 05, 2014
  28. Oct 03, 2014
  29. Sep 16, 2014
    • SPARK-3069 [DOCS] Build instructions in README are outdated · 61e21fe7
      Sean Owen authored
      Here's my crack at Bertrand's suggestion. The GitHub `README.md` contains build info that's outdated. It should just point to the current online docs and reflect that Maven is now the primary build.
      
      (Incidentally, the stanza at the end about contributions of original work should go in https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark too. It won't hurt to be crystal clear about the agreement to license, given that ICLAs are not required of anyone here.)
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #2014 from srowen/SPARK-3069 and squashes the following commits:
      
      501507e [Sean Owen] Note that Zinc is for Maven builds too
      db2bd97 [Sean Owen] sbt -> sbt/sbt and add note about zinc
      be82027 [Sean Owen] Fix additional occurrences of building-with-maven -> building-spark
      91c921f [Sean Owen] Move building-with-maven to building-spark and create a redirect. Update doc links to building-spark.html Add jekyll-redirect-from plugin and make associated config changes (including fixing pygments deprecation). Add example of SBT to README.md
      999544e [Sean Owen] Change "Building Spark with Maven" title to "Building Spark"; reinstate tl;dr info about dev/run-tests in README.md; add brief note about building with SBT
      c18d140 [Sean Owen] Optionally, remove the copy of contributing text from main README.md
      8e83934 [Sean Owen] Add CONTRIBUTING.md to trigger notice on new pull request page
      b1c04a1 [Sean Owen] Refer to current online documentation for building, and remove slightly outdated copy in README.md
      61e21fe7
  30. Aug 23, 2014
    • [SPARK-2963] REGRESSION - The description about how to build for using CLI and Thrift JDBC server is absent in the proper document · 323cd92b
      Kousuke Saruta authored
      
      The most important things I mentioned in #1885 are as follows.
      
      * People who build Spark are not always programmers.
      * If a person who builds Spark is not a programmer, he/she won't read the programmer's guide before building.
      
      So, how to build for using the CLI and JDBC server should not live only in the programmer's guide.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #2080 from sarutak/SPARK-2963 and squashes the following commits:
      
      ee07c76 [Kousuke Saruta] Modified regression of the description about building for using Thrift JDBC server and CLI
      ed53329 [Kousuke Saruta] Modified description and notaton of proper noun
      07c59fc [Kousuke Saruta] Added a description about how to build to use HiveServer and CLI for SparkSQL to building-with-maven.md
      6e6645a [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-2963
      c88fa93 [Kousuke Saruta] Added a description about building to use HiveServer and CLI for SparkSQL
      323cd92b
  31. Aug 20, 2014
    • Patrick Wendell's avatar
      SPARK-3092 [SQL]: Always include the thriftserver when -Phive is enabled. · f2f26c2a
      Patrick Wendell authored
      Currently we have a separate profile called hive-thriftserver. I originally suggested this in case users did not want to bundle the thriftserver, but it has ultimately led to a lot of confusion. Since the thriftserver is only a few classes, I don't see a really good reason to isolate it from the rest of Hive. So let's go ahead and just include it in the same profile to simplify things.
      
      This has been suggested in the past by liancheng.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #2006 from pwendell/hiveserver and squashes the following commits:
      
      742ea40 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into hiveserver
      034ad47 [Patrick Wendell] SPARK-3092: Always include the thriftserver when -Phive is enabled.
      f2f26c2a
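      After this change, enabling `-Phive` alone is enough to pull in the thriftserver. A sketch of a typical build invocation as of this commit (flags are illustrative; exact profiles vary by Spark version):

      ```shell
      # Build Spark with Hive support; the thriftserver is now included
      # automatically by -Phive (no separate hive-thriftserver profile needed).
      mvn -Phive -DskipTests clean package
      ```
      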
  32. Aug 13, 2014
  33. Aug 03, 2014
    • Stephen Boesch's avatar
      SPARK-2712 - Add a small note to maven doc that mvn package must happen ... · f8cd143b
      Stephen Boesch authored
      Per request by Reynold, adding a small note to the Maven doc about the proper sequencing of build then test.
      
      Author: Stephen Boesch <javadba@gmail.com>
      
      Closes #1615 from javadba/docs and squashes the following commits:
      
      6c3183e [Stephen Boesch] Moved updated testing blurb per PWendell
      5764757 [Stephen Boesch] SPARK-2712 - Add a small note to maven doc that mvn package must happen before test
      f8cd143b
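      The sequencing the note describes, package before test, might look like this in practice (a sketch; flags are illustrative):

      ```shell
      # Build and package first, so that test runs can resolve the
      # packaged artifacts of sibling modules...
      mvn -DskipTests clean package
      # ...then run the tests.
      mvn test
      ```
      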
  34. May 30, 2014
    • Matei Zaharia's avatar
      [SPARK-1566] consolidate programming guide, and general doc updates · c8bf4131
      Matei Zaharia authored
      This is a fairly large PR to clean up and update the docs for 1.0. The major changes are:
      
      * A unified programming guide for all languages replaces language-specific ones and shows language-specific info in tabs
      * New programming guide sections on key-value pairs, unit testing, input formats beyond text, migrating from 0.9, and passing functions to Spark
      * Spark-submit guide moved to a separate page and expanded slightly
      * Various cleanups of the menu system, security docs, and others
      * Updated look of title bar to differentiate the docs from previous Spark versions
      
      You can find the updated docs at http://people.apache.org/~matei/1.0-docs/_site/ and in particular http://people.apache.org/~matei/1.0-docs/_site/programming-guide.html.
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #896 from mateiz/1.0-docs and squashes the following commits:
      
      03e6853 [Matei Zaharia] Some tweaks to configuration and YARN docs
      0779508 [Matei Zaharia] tweak
      ef671d4 [Matei Zaharia] Keep frames in JavaDoc links, and other small tweaks
      1bf4112 [Matei Zaharia] Review comments
      4414f88 [Matei Zaharia] tweaks
      d04e979 [Matei Zaharia] Fix some old links to Java guide
      a34ed33 [Matei Zaharia] tweak
      541bb3b [Matei Zaharia] miscellaneous changes
      fcefdec [Matei Zaharia] Moved submitting apps to separate doc
      61d72b4 [Matei Zaharia] stuff
      181f217 [Matei Zaharia] migration guide, remove old language guides
      e11a0da [Matei Zaharia] Add more API functions
      6a030a9 [Matei Zaharia] tweaks
      8db0ae3 [Matei Zaharia] Added key-value pairs section
      318d2c9 [Matei Zaharia] tweaks
      1c81477 [Matei Zaharia] New section on basics and function syntax
      e38f559 [Matei Zaharia] Actually added programming guide to Git
      a33d6fe [Matei Zaharia] First pass at updating programming guide to support all languages, plus other tweaks throughout
      3b6a876 [Matei Zaharia] More CSS tweaks
      01ec8bf [Matei Zaharia] More CSS tweaks
      e6d252e [Matei Zaharia] Change color of doc title bar to differentiate from 0.9.0
      c8bf4131
  35. May 12, 2014
    • Andrew Or's avatar
      [SPARK-1753 / 1773 / 1814] Update outdated docs for spark-submit, YARN, standalone etc. · 2ffd1eaf
      Andrew Or authored
      YARN
      - SparkPi was updated to not take in master as an argument; we should update the docs to reflect that.
      - The default YARN build guide should use Maven, not SBT.
      - This PR also adds a paragraph on steps to debug a YARN application.
      
      Standalone
      - Emphasize spark-submit more. Right now it's one small paragraph preceding the legacy way of launching through `org.apache.spark.deploy.Client`.
      - The way we set configurations / environment variables according to the old docs is outdated. This needs to be updated to reflect the Spark configuration changes we made.
      
      In general, this PR also adds a little more documentation on the new spark-shell, spark-submit, spark-defaults.conf etc here and there.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #701 from andrewor14/yarn-docs and squashes the following commits:
      
      e2c2312 [Andrew Or] Merge in changes in #752 (SPARK-1814)
      25cfe7b [Andrew Or] Merge in the warning from SPARK-1753
      a8c39c5 [Andrew Or] Minor changes
      336bbd9 [Andrew Or] Tabs -> spaces
      4d9d8f7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
      041017a [Andrew Or] Abstract Spark submit documentation to cluster-overview.html
      3cc0649 [Andrew Or] Detail how to set configurations + remove legacy instructions
      5b7140a [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
      85a51fc [Andrew Or] Update run-example, spark-shell, configuration etc.
      c10e8c7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
      381fe32 [Andrew Or] Update docs for standalone mode
      757c184 [Andrew Or] Add a note about the requirements for the debugging trick
      f8ca990 [Andrew Or] Merge branch 'master' of github.com:apache/spark into yarn-docs
      924f04c [Andrew Or] Revert addition of --deploy-mode
      d5fe17b [Andrew Or] Update the YARN docs
      2ffd1eaf
  36. May 09, 2014
  37. May 05, 2014
    • Sean Owen's avatar
      SPARK-1556. jets3t dep doesn't update properly with newer Hadoop versions · 73b0cbcc
      Sean Owen authored
      See related discussion at https://github.com/apache/spark/pull/468
      
      This PR may still overstep what you have in mind, but let me put it on the table to start. Besides fixing the issue, it has one substantive change, and that is to manage Hadoop-specific things only in Hadoop-related profiles. This does _not_ remove `yarn.version`.
      
      - Moves the YARN and Hadoop profiles together in pom.xml. Sorry that this makes the diff a little hard to grok but the changes are only as follows.
      - Removes `hadoop.major.version`
      - Introduce `hadoop-2.2` and `hadoop-2.3` profiles to control Hadoop-specific changes:
        - like the protobuf version issue - this was only 'solved' now by enabling YARN for 2.2+, which is really an orthogonal issue
        - like the jets3t version issue now
      - Hadoop profiles set an appropriate default `hadoop.version`, that can be overridden
      - _(YARN profiles in the parent now only exist to add the sub-module)_
      - Fixes the jets3t dependency issue
       - and makes it a runtime dependency
       - and centralizes config of this guy in the parent pom
      - Updates build docs
      - Updates SBT build too
        - and fixes a regex problem along the way
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #629 from srowen/SPARK-1556 and squashes the following commits:
      
      c3fa967 [Sean Owen] Fix hadoop-2.4 profile typo in doc
      a2105fd [Sean Owen] Add hadoop-2.4 profile and don't set hadoop.version in profiles
      274f4f9 [Sean Owen] Make jets3t a runtime dependency, and bring its exclusion up into parent config
      bbed826 [Sean Owen] Use jets3t 0.9.0 for Hadoop 2.3+ (and correct similar regex issue in SBT build)
      f21f356 [Sean Owen] Build changes to set up for jets3t fix
      73b0cbcc
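      With Hadoop-specific settings scoped to profiles, selecting a Hadoop release becomes a matter of activating the right profile and, optionally, overriding the default version it sets. A sketch (profile names are from this commit; the version strings are illustrative):

      ```shell
      # Use the hadoop-2.3 profile with its default hadoop.version:
      mvn -Phadoop-2.3 -DskipTests clean package

      # Or override the default version the profile sets:
      mvn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package
      ```
      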
  38. May 04, 2014
    • witgo's avatar
      The default version of yarn is equal to the hadoop version · fb054322
      witgo authored
      This is a part of [PR 590](https://github.com/apache/spark/pull/590)
      
      Author: witgo <witgo@qq.com>
      
      Closes #626 from witgo/yarn_version and squashes the following commits:
      
      c390631 [witgo] restore  the yarn dependency declarations
      f8a4ad8 [witgo] revert remove the dependency of avro in yarn-alpha
      2df6cf5 [witgo] review commit
      a1d876a [witgo] review commit
      20e7e3e [witgo] review commit
      c76763b [witgo] The default value of yarn.version is equal to hadoop.version
      fb054322
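      Because `yarn.version` now defaults to `hadoop.version`, setting the latter is usually sufficient; `-Dyarn.version` only needs to be passed when the two must diverge. A sketch (version numbers are illustrative):

      ```shell
      # yarn.version follows hadoop.version by default:
      mvn -Pyarn -Dhadoop.version=2.4.0 -DskipTests clean package

      # Override only when the YARN version must differ from Hadoop's:
      mvn -Pyarn -Dhadoop.version=2.4.0 -Dyarn.version=2.4.1 -DskipTests clean package
      ```
      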
  39. Apr 29, 2014
    • witgo's avatar
      Improved build configuration · 030f2c21
      witgo authored
      1. Fix SPARK-1441: compile spark core error with hadoop 0.23.x
      2. Fix SPARK-1491: maven hadoop-provided profile fails to build
      3. Fix org.scala-lang:*, org.apache.avro:* inconsistent versions dependency
      4. Reformatted sql/catalyst/pom.xml, sql/hive/pom.xml, and sql/core/pom.xml (four-space indentation changed to two spaces)
      
      Author: witgo <witgo@qq.com>
      
      Closes #480 from witgo/format_pom and squashes the following commits:
      
      03f652f [witgo] review commit
      b452680 [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      bee920d [witgo] revert fix SPARK-1629: Spark Core missing commons-lang dependence
      7382a07 [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      6902c91 [witgo] fix SPARK-1629: Spark Core missing commons-lang dependence
      0da4bc3 [witgo] merge master
      d1718ed [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      e345919 [witgo] add avro dependency to yarn-alpha
      77fad08 [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      62d0862 [witgo] Fix org.scala-lang: * inconsistent versions dependency
      1a162d7 [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      934f24d [witgo] review commit
      cf46edc [witgo] exclude jruby
      06e7328 [witgo] Merge branch 'SparkBuild' into format_pom
      99464d2 [witgo] fix maven hadoop-provided profile fails to build
      0c6c1fc [witgo] Fix compile spark core error with hadoop 0.23.x
      6851bec [witgo] Maintain consistent SparkBuild.scala, pom.xml
      030f2c21