---
layout: global
title: Building Spark
redirect_from: "building-with-maven.html"
---

* This will become a table of contents (this text will be scraped).
{:toc}
Building Spark using Maven requires Maven 3.0.4 or newer and Java 6+.
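Before proceeding, you can verify that your toolchain meets these requirements (a quick sanity check, not specific to Spark):

{% highlight bash %}
# Should report Maven 3.0.4 or newer.
mvn -version

# Should report Java 1.6 or newer.
java -version
{% endhighlight %}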
# Building with `build/mvn`

Spark now comes packaged with a self-contained Maven installation, located under the `build/` directory, to ease building and deployment of Spark from source. This script will automatically download and set up all necessary build requirements (Maven, Scala, and Zinc) locally within the `build/` directory itself. It honors any `mvn` binary already present, but will pull down its own copy of Scala and Zinc regardless to ensure proper version requirements are met. `build/mvn` execution acts as a pass-through to the `mvn` call, allowing easy transition from previous build methods. As an example, one can build a version of Spark as follows:
{% highlight bash %}
build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
{% endhighlight %}
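Since `build/mvn` passes its arguments straight through to `mvn`, any standard Maven flag or goal works unchanged; for example, you can confirm which Maven the script resolves to:

{% highlight bash %}
# Any mvn invocation works the same way through the wrapper;
# this simply prints the version of the Maven being used.
build/mvn -version
{% endhighlight %}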
Other build examples can be found below.
# Setting up Maven's Memory Usage

You'll need to configure Maven to use more memory than usual by setting `MAVEN_OPTS`. We recommend the following settings:
{% highlight bash %}
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
{% endhighlight %}
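If you'd rather not export this in every new shell, one option is to append it to your shell profile (a sketch assuming bash and `~/.bashrc`):

{% highlight bash %}
# Persist the Maven memory settings across shell sessions.
echo 'export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"' >> ~/.bashrc
{% endhighlight %}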
If you don't run this, you may see errors like the following:
    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_BINARY_VERSION}}/classes...
    [ERROR] PermGen space -> [Help 1]

    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_BINARY_VERSION}}/classes...
    [ERROR] Java heap space -> [Help 1]
You can fix this by setting the `MAVEN_OPTS` variable as discussed before.
**Note:**

* For Java 8 and above this step is not required.
* If using `build/mvn` and `MAVEN_OPTS` was not already set, the script will automate this for you.
# Specifying the Hadoop Version
Because HDFS is not protocol-compatible across versions, if you want to read from HDFS, you'll need to build Spark against the specific HDFS version in your environment. You can do this through the "hadoop.version" property. If unset, Spark will build against Hadoop 1.0.4 by default. Note that certain build profiles are required for particular Hadoop versions:
| Hadoop version | Profile required |
| -------------- | ---------------- |
| 0.23.x         | `hadoop-0.23`    |
| 1.x to 2.1.x   | (none)           |
| 2.2.x          | `hadoop-2.2`     |
| 2.3.x          | `hadoop-2.3`     |
| 2.4.x          | `hadoop-2.4`     |
For Apache Hadoop versions 1.x, Cloudera CDH "mr1" distributions, and other Hadoop versions without YARN, use:
{% highlight bash %}
# Apache Hadoop 1.2.1
mvn -Dhadoop.version=1.2.1 -DskipTests clean package

# Cloudera CDH 4.2.0 with MapReduce v1
mvn -Dhadoop.version=2.0.0-mr1-cdh4.2.0 -DskipTests clean package

# Apache Hadoop 0.23.x
mvn -Phadoop-0.23 -Dhadoop.version=0.23.7 -DskipTests clean package
{% endhighlight %}
You can enable the "yarn" profile and optionally set the "yarn.version" property if it is different from "hadoop.version". Spark only supports YARN versions 2.2.0 and later.
Examples:
{% highlight bash %}
# Apache Hadoop 2.2.X
mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package

# Apache Hadoop 2.3.X
mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package

# Apache Hadoop 2.4.X or 2.5.X
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=VERSION -DskipTests clean package

# Versions of Hadoop after 2.5.X may or may not work with the -Phadoop-2.4
# profile (they were released after this version of Spark).

# Different versions of HDFS and YARN.
mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -Dyarn.version=2.2.0 -DskipTests clean package
{% endhighlight %}
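If you're ever unsure which build profiles a given set of flags activates, Maven's `help:active-profiles` goal will list them (a general Maven feature, not specific to Spark):

{% highlight bash %}
# List the profiles active for this combination of flags.
mvn -Pyarn -Phadoop-2.4 help:active-profiles
{% endhighlight %}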
# Building With Hive and JDBC Support

To enable Hive integration for Spark SQL along with its JDBC server and CLI, add the `-Phive` and `-Phive-thriftserver` profiles to your existing build options. By default Spark will build with Hive 0.13.1 bindings. You can also build for Hive 0.12.0 using the `-Phive-0.12.0` profile.
{% highlight bash %}