Commit 236bcd0d, authored 13 years ago by Olivier Grisel
Markdown rendering for the toplevel README.md to improve readability on github
Parent: 21425001
Showing 1 changed file: README → README.md (59 additions, 0 deletions)
# Spark

Lightning-Fast Cluster Computing - <http://www.spark-project.org/>


## Online Documentation

You can find the latest Spark documentation, including a programming
guide, on the project wiki at <http://github.com/mesos/spark/wiki>. This
file only contains basic setup instructions.


## Building
Spark requires Scala 2.8. This version has been tested with 2.8.1.final.
Experimental support for Scala 2.9 is available in the `scala-2.9` branch.
The project is built using Simple Build Tool (SBT), which is packaged with it.
To build Spark and its example programs, run:

    sbt/sbt update compile
To run Spark, you will need to have Scala's bin in your $PATH, or you
will need to set the `SCALA_HOME` environment variable to point to where
you've installed Scala. Scala must be accessible through one of these
methods on Mesos slave nodes as well as on the master.
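For example, a minimal sketch for a shell profile, assuming Scala was
unpacked under a hypothetical `/usr/local/scala-2.8.1.final`:

    # Hypothetical install path; point SCALA_HOME at your actual Scala directory
    export SCALA_HOME=/usr/local/scala-2.8.1.final
    export PATH=$SCALA_HOME/bin:$PATH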
To run one of the examples, use `./run <class> <params>`. For example:

    ./run spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.
All of the Spark samples take a `<host>` parameter that is the Mesos master
to connect to. This can be a Mesos URL, or "local" to run locally with one
thread, or "local[N]" to run locally with N threads.
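For instance, a sketch of the three forms using the example above (the
Mesos URL is left as a placeholder for your own master's address):

    ./run spark.examples.SparkLR local          # run locally with one thread
    ./run spark.examples.SparkLR local[4]       # run locally with four threads
    ./run spark.examples.SparkLR <mesos-url>    # connect to a running Mesos master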
## Configuration

Spark can be configured through two files: `conf/java-opts` and
`conf/spark-env.sh`.
In `java-opts`, you can add flags to be passed to the JVM when running Spark.
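For instance, a one-line `conf/java-opts` sketch (the flags here are
arbitrary standard JVM options, not recommended settings):

    -verbose:gc -XX:+PrintGCDetails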
In `spark-env.sh`, you can set any environment variables you wish to be
available when running Spark programs, such as `PATH`, `SCALA_HOME`, etc. There
are also several Spark-specific variables you can set:
- `SPARK_CLASSPATH`: Extra entries to be added to the classpath, separated by ":".
- `SPARK_MEM`: Memory for Spark to use, in the format used by Java's `-Xmx`
  option (for example, `200m` means 200 MB, `1g` means 1 GB, etc).
- `SPARK_LIBRARY_PATH`: Extra entries to add to `java.library.path` for locating
  shared libraries.
- `SPARK_JAVA_OPTS`: Extra options to pass to the JVM.
Note that `spark-env.sh` must be a shell script (it must be executable and start
with a `#!` header to specify the shell to use).
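Putting this together, a minimal `conf/spark-env.sh` sketch might look like
the following (every path and value below is a hypothetical placeholder to
adapt to your own setup):

    #!/usr/bin/env bash
    # All values are illustrative; adjust them for your installation
    export SCALA_HOME=/usr/local/scala-2.8.1.final
    export SPARK_MEM=1g                            # 1 GB heap, passed to the JVM as -Xmx1g
    export SPARK_CLASSPATH=/opt/extra/extra.jar    # extra classpath entries, ":"-separated
    export SPARK_JAVA_OPTS="-verbose:gc"           # extra options passed to the JVM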