Commit 88d53f0d authored by Evan Chan's avatar Evan Chan

"launch" scripts is more accurate terminology

parent 5a18b854
@@ -3,7 +3,7 @@ layout: global
 title: Spark Standalone Mode
 ---
-In addition to running on the Mesos or YARN cluster managers, Spark also provides a simple standalone deploy mode. You can launch a standalone cluster either manually, by starting a master and workers by hand, or use our provided [deploy scripts](#cluster-launch-scripts). It is also possible to run these daemons on a single machine for testing.
+In addition to running on the Mesos or YARN cluster managers, Spark also provides a simple standalone deploy mode. You can launch a standalone cluster either manually, by starting a master and workers by hand, or use our provided [launch scripts](#cluster-launch-scripts). It is also possible to run these daemons on a single machine for testing.
 # Starting a Cluster Manually
@@ -55,7 +55,7 @@ Finally, the following configuration options can be passed to the master and wor
 # Cluster Launch Scripts
-To launch a Spark standalone cluster with the deploy scripts, you need to create a file called `conf/slaves` in your Spark directory, which should contain the hostnames of all the machines where you would like to start Spark workers, one per line. The master machine must be able to access each of the slave machines via password-less `ssh` (using a private key). For testing, you can just put `localhost` in this file.
+To launch a Spark standalone cluster with the launch scripts, you need to create a file called `conf/slaves` in your Spark directory, which should contain the hostnames of all the machines where you would like to start Spark workers, one per line. The master machine must be able to access each of the slave machines via password-less `ssh` (using a private key). For testing, you can just put `localhost` in this file.
 Once you've set up this file, you can launch or stop your cluster with the following shell scripts, based on Hadoop's deploy scripts, and available in `SPARK_HOME/bin`:
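The setup the changed docs describe can be sketched as a short shell session. This is a minimal illustration, not part of the commit: it assumes a Spark checkout at `$SPARK_HOME` (a hypothetical location), and the actual start/stop commands are left commented out because they require a real Spark build and password-less `ssh` to each worker host.

```shell
# Sketch of the standalone-cluster setup described above.
# $SPARK_HOME is an assumed location of a Spark checkout.
SPARK_HOME="${SPARK_HOME:-$HOME/spark}"
mkdir -p "$SPARK_HOME/conf"

# conf/slaves lists one worker hostname per line;
# localhost is enough for single-machine testing.
echo "localhost" > "$SPARK_HOME/conf/slaves"

# With the file in place, the launch scripts in SPARK_HOME/bin
# would start and stop the whole cluster (commented out here,
# since they need a real Spark build and password-less ssh):
#   "$SPARK_HOME/bin/start-all.sh"
#   "$SPARK_HOME/bin/stop-all.sh"
```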