Commit e84815dc authored by Sean Owen

[SPARK-7733] [CORE] [BUILD] Update build, code to use Java 7 for 1.5.0+

Update build to use Java 7, and remove some comments and special-case support for Java 6.

Author: Sean Owen <sowen@cloudera.com>

Closes #6265 from srowen/SPARK-7733 and squashes the following commits:

59bda4e [Sean Owen] Update build to use Java 7, and remove some comments and special-case support for Java 6
parent db81b9d8
@@ -58,24 +58,6 @@ fi
 SPARK_ASSEMBLY_JAR="${ASSEMBLY_DIR}/${ASSEMBLY_JARS}"
 
-# Verify that versions of java used to build the jars and run Spark are compatible
-if [ -n "$JAVA_HOME" ]; then
-  JAR_CMD="$JAVA_HOME/bin/jar"
-else
-  JAR_CMD="jar"
-fi
-
-if [ $(command -v "$JAR_CMD") ] ; then
-  jar_error_check=$("$JAR_CMD" -tf "$SPARK_ASSEMBLY_JAR" nonexistent/class/path 2>&1)
-  if [[ "$jar_error_check" =~ "invalid CEN header" ]]; then
-    echo "Loading Spark jar with '$JAR_CMD' failed. " 1>&2
-    echo "This is likely because Spark was compiled with Java 7 and run " 1>&2
-    echo "with Java 6. (see SPARK-1703). Please use Java 7 to run Spark " 1>&2
-    echo "or build Spark with Java 6." 1>&2
-    exit 1
-  fi
-fi
 
 LAUNCH_CLASSPATH="$SPARK_ASSEMBLY_JAR"
 
 # Add the launcher build dir to the classpath if requested.
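The shell check removed above probed the assembly jar with `jar -tf` and matched the "invalid CEN header" error that an older runtime reports for archives it cannot read. As a rough, hypothetical Java analogue (the class and method names are mine, not part of this commit), the same probe amounts to simply trying to open the archive:

```java
import java.io.IOException;
import java.util.zip.ZipFile;

// Hypothetical illustration, not part of this commit: a jar built with Java 7
// may use ZIP features an older runtime cannot parse, which surfaces as an
// IOException (typically a ZipException) when the archive is opened.
public class JarReadabilityCheck {
    public static boolean isReadable(String jarPath) {
        try (ZipFile jar = new ZipFile(jarPath)) { // try-with-resources needs Java 7+
            return jar.entries().hasMoreElements();
        } catch (IOException e) {
            return false; // unreadable or corrupt archive
        }
    }
}
```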
@@ -52,8 +52,8 @@ private[spark] class ChildFirstURLClassLoader(urls: Array[URL], parent: ClassLoader)
   * Used to implement fine-grained class loading locks similar to what is done by Java 7. This
   * prevents deadlock issues when using non-hierarchical class loaders.
   *
-  * Note that due to Java 6 compatibility (and some issues with implementing class loaders in
-  * Scala), Java 7's `ClassLoader.registerAsParallelCapable` method is not called.
+  * Note that due to some issues with implementing class loaders in
+  * Scala, Java 7's `ClassLoader.registerAsParallelCapable` method is not called.
   */
  private val locks = new ConcurrentHashMap[String, Object]()
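For readers unfamiliar with the locking scheme that comment describes, here is a minimal sketch (a hypothetical class, not Spark's implementation): `loadClass` synchronizes on a lock object unique to each class name rather than on the loader itself, so threads loading different classes do not serialize behind a single monitor.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of fine-grained class loading locks; hypothetical, for illustration only.
class FineGrainedLockingLoader extends ClassLoader {
    private final ConcurrentHashMap<String, Object> locks =
        new ConcurrentHashMap<String, Object>();

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        Object newLock = new Object();
        Object lock = locks.putIfAbsent(name, newLock); // returns the existing lock, if any
        if (lock == null) {
            lock = newLock;
        }
        synchronized (lock) { // per-name lock instead of synchronizing on the loader
            return super.loadClass(name, resolve);
        }
    }
}
```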
@@ -1295,8 +1295,7 @@ private[spark] object Utils extends Logging {
     } catch {
       case t: Throwable =>
         if (originalThrowable != null) {
-          // We could do originalThrowable.addSuppressed(t), but it's
-          // not available in JDK 1.6.
+          originalThrowable.addSuppressed(t)
           logWarning(s"Suppressing exception in finally: " + t.getMessage, t)
           throw originalThrowable
         } else {
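`Throwable.addSuppressed` arrived in Java 7 precisely for this situation: a cleanup block fails after the body has already thrown, and the cleanup failure should be recorded rather than allowed to mask the original error. A minimal standalone illustration of the pattern (assumed names, not Spark's code):

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical example of the Java 7 addSuppressed pattern adopted above.
public final class SafeFinallyExample {
    static void readWithSafeClose(InputStream in) throws IOException {
        Throwable original = null;
        try {
            in.read(); // body that may throw
        } catch (Throwable t) {
            original = t;
            throw t; // Java 7 "precise rethrow" keeps the declared IOException
        } finally {
            try {
                in.close(); // cleanup that may also throw
            } catch (Throwable t) {
                if (original != null) {
                    original.addSuppressed(t); // record, don't mask, the cleanup failure
                } else {
                    throw t;
                }
            }
        }
    }
}
```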
@@ -103,9 +103,6 @@ class SorterSuite extends SparkFunSuite {
    * has the keys and values alternating. The basic Java sorts work only on the keys, so the
    * real Java solution is to make Tuple2s to store the keys and values and sort an array of
    * those, while the Sorter approach can work directly on the input data format.
-   *
-   * Note that the Java implementation varies tremendously between Java 6 and Java 7, when
-   * the Java sort changed from merge sort to TimSort.
    */
   ignore("Sorter benchmark for key-value pairs") {
     val numElements = 25000000 // 25 mil
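To make the layout in that comment concrete: the benchmark's input is one flat array holding key0, value0, key1, value1, and so on. Here is a hypothetical sketch (my names, not the suite's code) of the boxing approach the comment calls the "real Java solution", pairing up keys and values so a standard sort can reorder them together:

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch: sort an alternating key/value array by key via boxed pairs.
// A format-aware sorter avoids this copy by swapping two slots at a time in place.
public final class AlternatingKeyValueSort {
    public static void sortByKey(long[] kv) {
        int n = kv.length / 2;
        long[][] pairs = new long[n][];
        for (int i = 0; i < n; i++) {
            pairs[i] = new long[] { kv[2 * i], kv[2 * i + 1] }; // {key, value}
        }
        Arrays.sort(pairs, new Comparator<long[]>() { // TimSort under the hood since Java 7
            @Override
            public int compare(long[] a, long[] b) {
                return Long.compare(a[0], b[0]);
            }
        });
        for (int i = 0; i < n; i++) {
            kv[2 * i] = pairs[i][0];
            kv[2 * i + 1] = pairs[i][1];
        }
    }
}
```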
@@ -7,11 +7,7 @@ redirect_from: "building-with-maven.html"
 * This will become a table of contents (this text will be scraped).
 {:toc}
 
-Building Spark using Maven requires Maven 3.0.4 or newer and Java 6+.
-
-**Note:** Building Spark with Java 7 or later can create JAR files that may not be
-readable with early versions of Java 6, due to the large number of files in the JAR
-archive. Build with Java 6 if this is an issue for your deployment.
+Building Spark using Maven requires Maven 3.0.4 or newer and Java 7+.
 
 # Building with `build/mvn`
@@ -20,7 +20,7 @@ Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). It's easy
 locally on one machine --- all you need is to have `java` installed on your system `PATH`,
 or the `JAVA_HOME` environment variable pointing to a Java installation.
 
-Spark runs on Java 6+, Python 2.6+ and R 3.1+. For the Scala API, Spark {{site.SPARK_VERSION}} uses
+Spark runs on Java 7+, Python 2.6+ and R 3.1+. For the Scala API, Spark {{site.SPARK_VERSION}} uses
 Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a compatible Scala version
 ({{site.SCALA_BINARY_VERSION}}.x).
@@ -54,7 +54,7 @@ import org.apache.spark.SparkConf
 
 <div data-lang="java" markdown="1">
 
-Spark {{site.SPARK_VERSION}} works with Java 6 and higher. If you are using Java 8, Spark supports
+Spark {{site.SPARK_VERSION}} works with Java 7 and higher. If you are using Java 8, Spark supports
 [lambda expressions](http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html)
 for concisely writing functions, otherwise you can use the classes in the
 [org.apache.spark.api.java.function](api/java/index.html?org/apache/spark/api/java/function/package-summary.html) package.
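The two styles that paragraph contrasts look like this, assuming an existing `JavaRDD<String>` named `lines` (the variable and class names here are mine; the `Function` interface and `map` call are Spark's Java API):

```java
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

public class LambdaVsFunctionClass {
    // Java 8: a lambda expression stands in for the Function interface.
    static JavaRDD<Integer> lengthsWithLambda(JavaRDD<String> lines) {
        return lines.map(s -> s.length());
    }

    // Java 7: the same map expressed with an anonymous class from
    // the org.apache.spark.api.java.function package.
    static JavaRDD<Integer> lengthsWithFunctionClass(JavaRDD<String> lines) {
        return lines.map(new Function<String, Integer>() {
            @Override
            public Integer call(String s) {
                return s.length();
            }
        });
    }
}
```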
@@ -141,22 +141,6 @@ SPARK_HIVE=$("$MVN" help:evaluate -Dexpression=project.activeProfiles -pl sql/hive
     # because we use "set -o pipefail"
     echo -n)
 
-JAVA_CMD="$JAVA_HOME"/bin/java
-JAVA_VERSION=$("$JAVA_CMD" -version 2>&1)
-if [[ ! "$JAVA_VERSION" =~ "1.6" && -z "$SKIP_JAVA_TEST" ]]; then
-  echo "***NOTE***: JAVA_HOME is not set to a JDK 6 installation. The resulting"
-  echo "            distribution may not work well with PySpark and will not run"
-  echo "            with Java 6 (See SPARK-1703 and SPARK-1911)."
-  echo "            This test can be disabled by adding --skip-java-test."
-  echo "Output from 'java -version' was:"
-  echo "$JAVA_VERSION"
-  read -p "Would you like to continue anyways? [y,n]: " -r
-  if [[ ! "$REPLY" =~ ^[Yy]$ ]]; then
-    echo "Okay, exiting."
-    exit 1
-  fi
-fi
-
 if [ "$NAME" == "none" ]; then
   NAME=$SPARK_HADOOP_VERSION
 fi
@@ -116,7 +116,7 @@
     <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
     <akka.group>com.typesafe.akka</akka.group>
     <akka.version>2.3.11</akka.version>
-    <java.version>1.6</java.version>
+    <java.version>1.7</java.version>
     <sbt.project.name>spark</sbt.project.name>
     <mesos.version>0.21.1</mesos.version>
     <mesos.classifier>shaded-protobuf</mesos.classifier>
@@ -25,8 +25,7 @@ public final class PlatformDependent {
   /**
    * Facade in front of {@link sun.misc.Unsafe}, used to avoid directly exposing Unsafe outside of
-   * this package. This also lets us aovid accidental use of deprecated methods or methods that
-   * aren't present in Java 6.
+   * this package. This also lets us avoid accidental use of deprecated methods.
    */
   public static final class UNSAFE {
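As a closing illustration of the facade idea in that comment, here is a hypothetical standalone sketch (not Spark's `PlatformDependent`): the raw `sun.misc.Unsafe` instance is obtained once, kept private, and only a small vetted set of operations is exposed.

```java
import java.lang.reflect.Field;

// Hypothetical facade over sun.misc.Unsafe, for illustration only.
final class UnsafeFacadeSketch {
    private static final sun.misc.Unsafe _UNSAFE;

    static {
        try {
            // Unsafe.getUnsafe() rejects untrusted callers, so read the
            // singleton "theUnsafe" field reflectively instead.
            Field f = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            _UNSAFE = (sun.misc.Unsafe) f.get(null);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Only vetted operations cross the facade; the Unsafe instance never escapes.
    public static long getLong(Object obj, long offset) {
        return _UNSAFE.getLong(obj, offset);
    }

    public static void putLong(Object obj, long offset, long value) {
        _UNSAFE.putLong(obj, offset, value);
    }

    private UnsafeFacadeSketch() {}
}
```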