Commit 141c22e2 authored by Joseph E. Gonzalez's avatar Joseph E. Gonzalez

merging in master changes

parent 637b67da
ec2-23-20-12-62.compute-1.amazonaws.com
ec2-54-205-173-19.compute-1.amazonaws.com
ec2-54-225-4-124.compute-1.amazonaws.com
ec2-23-22-209-112.compute-1.amazonaws.com
ec2-50-16-69-88.compute-1.amazonaws.com
ec2-54-205-163-126.compute-1.amazonaws.com
ec2-54-242-235-95.compute-1.amazonaws.com
ec2-54-211-169-232.compute-1.amazonaws.com
ec2-54-237-31-30.compute-1.amazonaws.com
ec2-54-235-15-124.compute-1.amazonaws.com
# A Spark Worker will be started on each of the machines listed below.
localhost
#!/usr/bin/env bash
# This file contains environment variables required to run Spark. Copy it as
# spark-env.sh and edit that to configure Spark for your site. At a minimum,
# the following two variables should be set:
# - SCALA_HOME, to point to your Scala installation, or SCALA_LIBRARY_PATH to
# point to the directory for Scala library JARs (if you install Scala as a
# Debian or RPM package, these are in a separate path, often /usr/share/java)
#
# The following variables can be set in this file:
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
# - SPARK_JAVA_OPTS, to set node-specific JVM options for Spark. Note that
# we recommend setting app-wide options in the application's driver program.
# Examples of node-specific options: -Dspark.local.dir, GC options
# Examples of app-wide options: -Dspark.serializer
#
# If using the standalone deploy mode, you can also set variables for it here:
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much memory to use (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT
# - SPARK_WORKER_INSTANCES, to set the number of worker instances/processes
# to be spawned on every slave machine
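Putting the variables above together, a filled-in spark-env.sh might look like the following sketch. Every value here is an illustrative placeholder for one site, not a default shipped with Spark:

```shell
#!/usr/bin/env bash
# Illustrative spark-env.sh; all values below are site-specific placeholders.

export SPARK_LOCAL_IP=10.0.0.5            # IP address Spark binds to on this node
export SPARK_MASTER_IP=master.internal    # standalone master address (hypothetical hostname)
export SPARK_MASTER_PORT=7077             # standalone master port
export SPARK_WORKER_CORES=4               # cores this worker may use
export SPARK_WORKER_MEMORY=2g             # memory this worker may use
export SPARK_WORKER_INSTANCES=1           # worker processes to spawn on this machine
# Node-specific JVM options (app-wide options belong in the driver program):
export SPARK_JAVA_OPTS="-Dspark.local.dir=/mnt/spark"
```

Copy the template to spark-env.sh and adjust these values per node; the file is sourced by the launch scripts, so it must remain valid bash.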