---
layout: global
title: Building Spark
redirect_from: "building-with-maven.html"
---

* This will become a table of contents (this text will be scraped).
{:toc}
Building Spark using Maven requires Maven 3.3.9 or newer and Java 7+. The Spark build can supply a suitable Maven binary; see below.
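If you plan to use your own `mvn` rather than the bundled `build/mvn` described below, a quick sanity check of the prerequisites (standard commands, nothing Spark-specific):

{% highlight bash %}
# Maven should report version 3.3.9 or newer, and Java should be a 7+ JDK
mvn -version
java -version
{% endhighlight %}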
# Building with `build/mvn`

Spark now comes packaged with a self-contained Maven installation to ease building and deployment of Spark from source located under the `build/` directory. This script will automatically download and set up all necessary build requirements (Maven, Scala, and Zinc) locally within the `build/` directory itself. It honors any `mvn` binary if already present, but will pull down its own copy of Scala and Zinc regardless to ensure the proper version requirements are met. `build/mvn` execution acts as a pass-through to the `mvn` call, allowing an easy transition from previous build methods. As an example, one can build a version of Spark as follows:
{% highlight bash %}
build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
{% endhighlight %}
Other build examples can be found below.
**Note:** When building on an encrypted filesystem (if your home directory is encrypted, for example), the Spark build might fail with a "Filename too long" error. As a workaround, add the following to the configuration args of the `scala-maven-plugin` in the project `pom.xml`:

    <arg>-Xmax-classfile-name</arg>
    <arg>128</arg>

and in `project/SparkBuild.scala` add:

    scalacOptions in Compile ++= Seq("-Xmax-classfile-name", "128"),

to the `sharedSettings` val. See also this PR if you are unsure of where to add these lines.
# Building a Runnable Distribution

To create a Spark distribution like those distributed by the
Spark Downloads page, and that is laid out so as
to be runnable, use `./dev/make-distribution.sh` in the project root directory. It can be configured
with Maven profile settings and so on like the direct Maven build. Example:

    ./dev/make-distribution.sh --name custom-spark --tgz -Psparkr -Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn

For more information on usage, run `./dev/make-distribution.sh --help`
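As a rough sketch of what the `--tgz` flag produces: the archive lands in the project root, named after the Spark version and the `--name` value you passed (the wildcard below is only for illustration):

    # Unpack the generated archive and sanity-check the resulting distribution
    tar -xzf spark-*-bin-custom-spark.tgz
    cd spark-*-bin-custom-spark
    ./bin/spark-submit --version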
# Setting up Maven's Memory Usage

You'll need to configure Maven to use more memory than usual by setting `MAVEN_OPTS`. We recommend the following settings:

{% highlight bash %}
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
{% endhighlight %}
If you don't run this, you may see errors like the following:

    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_BINARY_VERSION}}/classes...
    [ERROR] PermGen space -> [Help 1]

    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_BINARY_VERSION}}/classes...
    [ERROR] Java heap space -> [Help 1]

You can fix this by setting the `MAVEN_OPTS` variable as discussed before.
**Note:**

* For Java 8 and above this step is not required.
* If using `build/mvn` with no `MAVEN_OPTS` set, the script will automate this for you.
# Specifying the Hadoop Version

Because HDFS is not protocol-compatible across versions, if you want to read from HDFS, you'll need to build Spark against the specific HDFS version in your environment. You can do this through the `hadoop.version` property. If unset, Spark will build against Hadoop 2.2.0 by default. Note that certain build profiles are required for particular Hadoop versions:
| Hadoop version | Profile required |
| -------------- | ---------------- |
| 2.2.x | hadoop-2.2 |
| 2.3.x | hadoop-2.3 |
| 2.4.x | hadoop-2.4 |
| 2.6.x | hadoop-2.6 |
| 2.7.x and later 2.x | hadoop-2.7 |
You can enable the `yarn` profile and optionally set the `yarn.version` property if it is different from `hadoop.version`. Spark only supports YARN versions 2.2.0 and later.
Examples:
{% highlight bash %}

# Apache Hadoop 2.2.X
mvn -Pyarn -Phadoop-2.2 -DskipTests clean package

# Apache Hadoop 2.3.X
mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package

# Apache Hadoop 2.4.X or 2.5.X
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=VERSION -DskipTests clean package

# Apache Hadoop 2.6.X
mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package

# Apache Hadoop 2.7.X and later
mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=VERSION -DskipTests clean package

# Different versions of HDFS and YARN
mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -Dyarn.version=2.2.0 -DskipTests clean package

{% endhighlight %}
# Building With Hive and JDBC Support

To enable Hive integration for Spark SQL along with its JDBC server and CLI,
add the `-Phive` and `-Phive-thriftserver` profiles to your existing build options.
By default Spark will build with Hive 1.2.1 bindings.
{% highlight bash %}

# Apache Hadoop 2.4.X with Hive 1.2.1 support
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-thriftserver -DskipTests clean package

{% endhighlight %}
# Building for Scala 2.10

To produce a Spark package compiled with Scala 2.10, use the `-Dscala-2.10` property:

    ./dev/change-scala-version.sh 2.10
    mvn -Pyarn -Phadoop-2.4 -Dscala-2.10 -DskipTests clean package
# Spark Tests in Maven

Tests are run by default via the ScalaTest Maven plugin.

Some of the tests require Spark to be packaged first, so always run `mvn package` with `-DskipTests` the first time. The following is an example of a correct (build, test) sequence:

    mvn -Pyarn -Phadoop-2.3 -DskipTests -Phive -Phive-thriftserver clean package
    mvn -Pyarn -Phadoop-2.3 -Phive -Phive-thriftserver test

The ScalaTest plugin also supports running only a specific test suite as follows:

    mvn -Dhadoop.version=... -DwildcardSuites=org.apache.spark.repl.ReplSuite test
# Building submodules individually

It's possible to build Spark sub-modules using the `mvn -pl` option.

For instance, you can build the Spark Streaming module using:

{% highlight bash %}
mvn -pl :spark-streaming_2.11 clean install
{% endhighlight %}

where `spark-streaming_2.11` is the `artifactId` as defined in the `streaming/pom.xml` file.
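If the Spark modules that `spark-streaming_2.11` depends on are not yet in your local Maven repository, one option is plain Maven's `-am` (`--also-make`) flag, which builds the required upstream modules in the same invocation. This is standard Maven behaviour rather than anything Spark-specific; a minimal sketch:

{% highlight bash %}
# Build spark-streaming together with the Spark modules it depends on
mvn -pl :spark-streaming_2.11 -am clean install
{% endhighlight %}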
# Continuous Compilation

We use the scala-maven-plugin which supports incremental and continuous compilation. E.g.

    mvn scala:cc

should run continuous compilation (i.e. wait for changes). However, this has not been tested extensively. A couple of gotchas to note:

* it only scans the paths `src/main` and `src/test` (see docs), so it will only work from within certain submodules that have that structure.

* you'll typically need to run `mvn install` from the project root for compilation within specific submodules to work; this is because submodules that depend on other submodules do so via the `spark-parent` module.
Thus, the full flow for running continuous-compilation of the `core` submodule may look more like:

    $ mvn install
    $ cd core
    $ mvn scala:cc
# Building Spark with IntelliJ IDEA or Eclipse
For help in setting up IntelliJ IDEA or Eclipse for Spark development, and troubleshooting, refer to the wiki page for IDE setup.
# Running Java 8 Test Suites

Running only Java 8 tests and nothing else.

    mvn install -DskipTests
    mvn -pl :java8-tests_2.11 test

or

    sbt java8-tests/test

Java 8 tests are automatically enabled when a Java 8 JDK is detected. If you have JDK 8 installed but it is not the system default, you can set `JAVA_HOME` to point to JDK 8 before running the tests.
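For example, on a machine where JDK 8 is installed but is not the default (the path below is only an illustration; point `JAVA_HOME` at wherever your JDK 8 actually lives):

    # Use a non-default JDK 8 for this shell session, then run the Java 8 tests
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
    mvn install -DskipTests
    mvn -pl :java8-tests_2.11 test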
# Running Docker based Integration Test Suites

Running only docker based integration tests and nothing else.

    mvn install -DskipTests
    mvn -Pdocker-integration-tests -pl :spark-docker-integration-tests_2.11 test

or

    sbt docker-integration-tests/test
# Packaging without Hadoop Dependencies for YARN

The assembly directory produced by `mvn package` will, by default, include all of Spark's dependencies, including Hadoop and some of its ecosystem projects. On YARN deployments, this causes multiple versions of these to appear on executor classpaths: the version packaged in the Spark assembly and the version on each node, included with `yarn.application.classpath`. The `hadoop-provided` profile builds the assembly without including Hadoop-ecosystem projects, like ZooKeeper and Hadoop itself.
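A hedged example of such a build, combining `hadoop-provided` with the YARN and Hadoop version profiles described above (adjust `hadoop.version` to match your cluster):

    # Package Spark expecting Hadoop and its ecosystem jars to be supplied by the cluster
    mvn -Pyarn -Phadoop-2.7 -Phadoop-provided -Dhadoop.version=2.7.0 -DskipTests clean package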
# Building with SBT

Maven is the official build tool recommended for packaging Spark, and is the build of reference. But SBT is supported for day-to-day development since it can provide much faster iterative compilation. More advanced developers may wish to use SBT.

The SBT build is derived from the Maven POM files, and so the same Maven profiles and variables can be set to control the SBT build. For example:

    build/sbt -Pyarn -Phadoop-2.3 package

To avoid the overhead of launching sbt each time you need to re-compile, you can launch sbt in interactive mode by running `build/sbt`, and then run all build commands at the command prompt. For more recommendations on reducing build time, refer to the wiki page.
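For instance, an interactive session launched with the same profiles as above might look like the following sketch (the commands after the `>` prompt are ordinary sbt commands):

    $ build/sbt -Pyarn -Phadoop-2.3
    > compile
    > package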
# Testing with SBT

Some of the tests require Spark to be packaged first, so always run `build/sbt package` the first time. The following is an example of a correct (build, test) sequence:

    build/sbt -Pyarn -Phadoop-2.3 -Phive -Phive-thriftserver package
    build/sbt -Pyarn -Phadoop-2.3 -Phive -Phive-thriftserver test

To run only a specific test suite:

    build/sbt -Pyarn -Phadoop-2.3 -Phive -Phive-thriftserver "test-only org.apache.spark.repl.ReplSuite"

To run the test suites of a specific sub project:

    build/sbt -Pyarn -Phadoop-2.3 -Phive -Phive-thriftserver core/test
# Speeding up Compilation with Zinc

Zinc is a long-running server version of SBT's incremental compiler. When run locally as a background process, it speeds up builds of Scala-based projects like Spark. Developers who regularly recompile Spark with Maven will be the most interested in Zinc. The project site gives instructions for building and running `zinc`; OS X users can install it using `brew install zinc`.

If using the `build/mvn` script, `zinc` will automatically be downloaded and leveraged for all builds. This process will auto-start after the first time `build/mvn` is called and bind to port 3030 unless the `ZINC_PORT` environment variable is set. The `zinc` process can subsequently be shut down at any time by running `build/zinc-<version>/bin/zinc -shutdown` and will automatically restart whenever `build/mvn` is called.
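For example, to keep two working copies from fighting over the same Zinc server you might pin each checkout to its own port; the port number below is arbitrary, and the shutdown path depends on which Zinc version `build/mvn` downloaded:

    # Run this checkout's Zinc server on a non-default port
    export ZINC_PORT=3031
    build/mvn -DskipTests clean package

    # Shut the Zinc server down when you are done
    build/zinc-<version>/bin/zinc -shutdown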