Commit aa6536fa authored by Jongyoul Lee, committed by Sean Owen

[SPARK-3619] Part 2. Upgrade to Mesos 0.21 to work around MESOS-1688

- MESOS_NATIVE_LIBRARY became deprecated
- Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY

Author: Jongyoul Lee <jongyoul@gmail.com>

Closes #4361 from jongyoul/SPARK-3619-1 and squashes the following commits:

f1ea91f [Jongyoul Lee] Merge branch 'SPARK-3619-1' of https://github.com/jongyoul/spark into SPARK-3619-1
a6a00c2 [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - Removed 'Known issues' section
2e15a21 [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - MESOS_NATIVE_LIBRARY became deprecated - Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY
0dace7b [Jongyoul Lee] [SPARK-3619] Upgrade to Mesos 0.21 to work around MESOS-1688 - MESOS_NATIVE_LIBRARY became deprecated - Changed MESOS_NATIVE_LIBRARY to MESOS_NATIVE_JAVA_LIBRARY
parent 62ede538
@@ -15,7 +15,7 @@
 # - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
 # - SPARK_CLASSPATH, default classpath entries to append
 # - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
-# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
+# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos

 # Options read in YARN client mode
 # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
...
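As a quick illustration of the rename above, a driver-side preflight check could fail fast when the new variable is unset. This is a hypothetical sketch, not part of this patch; the object name and messages are invented, and per the commit message the old MESOS_NATIVE_LIBRARY is deprecated rather than removed.

// Hypothetical preflight check (not in this patch); names and messages invented.
object CheckMesosEnv {
  def main(args: Array[String]): Unit = {
    val lib = System.getenv("MESOS_NATIVE_JAVA_LIBRARY")
    if (lib == null) {
      Console.err.println(
        "MESOS_NATIVE_JAVA_LIBRARY is not set; point it at libmesos.so " +
          "(libmesos.dylib on Mac OS X)")
      sys.exit(1)
    }
    println(s"Using Mesos native library at $lib")
  }
}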
@@ -110,7 +110,7 @@ cluster, or `mesos://zk://host:2181` for a multi-master Mesos cluster using ZooKeeper
 The driver also needs some configuration in `spark-env.sh` to interact properly with Mesos:

 1. In `spark-env.sh` set some environment variables:
- * `export MESOS_NATIVE_LIBRARY=<path to libmesos.so>`. This path is typically
+ * `export MESOS_NATIVE_JAVA_LIBRARY=<path to libmesos.so>`. This path is typically
   `<prefix>/lib/libmesos.so` where the prefix is `/usr/local` by default. See Mesos installation
   instructions above. On Mac OS X, the library is called `libmesos.dylib` instead of
   `libmesos.so`.
@@ -167,9 +167,6 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offered
 only makes sense if you run just one application at a time. You can cap the maximum number of cores
 using `conf.set("spark.cores.max", "10")` (for example).

-# Known issues
-- When using the "fine-grained" mode, make sure that your executors always leave 32 MB free on the slaves. Otherwise it can happen that your Spark job does not proceed anymore. Currently, Apache Mesos only offers resources if there are at least 32 MB memory allocatable. But as Spark allocates memory only for the executor and cpu only for tasks, it can happen on high slave memory usage that no new tasks will be started anymore. More details can be found in [MESOS-1688](https://issues.apache.org/jira/browse/MESOS-1688). Alternatively use the "coarse-gained" mode, which is not affected by this issue.
-
 # Running Alongside Hadoop

 You can run Spark and Mesos alongside your existing Hadoop cluster by just launching them as a
...
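To make the documented settings concrete, a minimal driver sketch might look like the following. This is illustrative only and assumes a Spark 1.x application; the master URL, application name, and core cap are placeholder values drawn from the examples in the docs hunk above.

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical driver (placeholder master URL, app name, and core cap).
object MesosCoresExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("mesos://host:5050") // or mesos://zk://host:2181 with ZooKeeper
      .setAppName("MesosCoresExample")
      .set("spark.cores.max", "10")   // cap cores rather than acquiring all offers
    val sc = new SparkContext(conf)
    try {
      // ... application code ...
    } finally {
      sc.stop()
    }
  }
}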
@@ -281,7 +281,7 @@ class ReplSuite extends FunSuite {
     assertDoesNotContain("Exception", output)
   }

-  if (System.getenv("MESOS_NATIVE_LIBRARY") != null) {
+  if (System.getenv("MESOS_NATIVE_JAVA_LIBRARY") != null) {
     test("running on Mesos") {
       val output = runInterpreter("localquiet",
         """
...
@@ -289,7 +289,7 @@ class ReplSuite extends FunSuite {
     assertDoesNotContain("Exception", output)
   }

-  if (System.getenv("MESOS_NATIVE_LIBRARY") != null) {
+  if (System.getenv("MESOS_NATIVE_JAVA_LIBRARY") != null) {
     test("running on Mesos") {
       val output = runInterpreter("localquiet",
         """
...
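The two ReplSuite hunks above use the same guard pattern: the Mesos test is only registered when the native-library variable is present. A minimal sketch of that pattern for any Mesos-dependent test follows; the suite and test names are invented, and it assumes ScalaTest's FunSuite as used by ReplSuite.

import org.scalatest.FunSuite

// Hypothetical suite mirroring the guard in ReplSuite: the Mesos test is
// registered only when the renamed variable is present in the environment.
class MesosGuardedSuite extends FunSuite {
  if (System.getenv("MESOS_NATIVE_JAVA_LIBRARY") != null) {
    test("runs only when a Mesos native library is configured") {
      assert(System.getenv("MESOS_NATIVE_JAVA_LIBRARY").nonEmpty)
    }
  }
}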