Commit 61c4762d, authored 11 years ago by Patrick Wendell
Parent: e653a9d8

    Changes based on feedback

1 changed file: docs/cdh-hdp.md (24 additions, 12 deletions)

The updated `docs/cdh-hdp.md`:
---
layout: global
title: Running with Cloudera and HortonWorks Distributions
---

Spark can run against all versions of Cloudera's Distribution Including Hadoop (CDH) and
the Hortonworks Data Platform (HDP). There are a few things to keep in mind when using Spark with
these distributions:
# Compile-time Hadoop Version

When compiling Spark, you'll need to
[set the HADOOP_VERSION flag](http://localhost:4000/index.html#a-note-about-hadoop-versions):

    HADOOP_VERSION=1.0.4 sbt/sbt assembly

The table below lists the corresponding HADOOP_VERSION for each CDH/HDP release. Note that
some Hadoop releases are binary compatible across client versions. This means the pre-built Spark
distribution may "just work" without you needing to compile. That said, we recommend compiling with
the _exact_ Hadoop version you are running to avoid any compatibility errors.
...

Spark can run in a variety of deployment modes:

* ... cores dedicated to Spark on each node.
* Run Spark alongside Hadoop using a cluster resource manager, such as YARN or Mesos.
These options are identical for those using CDH and HDP. Note that if you have a YARN cluster,
but still prefer to run Spark on a dedicated set of nodes rather than scheduling through YARN,
use the `mr1` versions of HADOOP_VERSION when compiling.
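As an illustrative sketch of such a build, where the `2.0.0-mr1-cdh4.2.0` version string is an assumed example and the exact value for your release comes from the table above:

```shell
# Illustrative only: build against an MR1-style CDH artifact rather than a
# YARN one. Substitute the HADOOP_VERSION listed for your CDH/HDP release.
HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly
```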
# Inheriting Cluster Configuration

If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that
should be included on Spark's classpath:

* `hdfs-site.xml`, which provides default behaviors for the HDFS client.
* `core-site.xml`, which sets the default filesystem name.

The location of these configuration files varies across CDH and HDP versions, but
a common location is inside of `/etc/hadoop/conf`. Some tools, such as Cloudera Manager, create
configurations on-the-fly, but offer a mechanism to download copies of them.
There are a few ways to make these files visible to Spark:

* You can copy these files into `$SPARK_HOME/conf` and they will be included in Spark's
classpath automatically.
* If you are running Spark on the same nodes as Hadoop _and_ your distribution includes both
`hdfs-site.xml` and `core-site.xml` in the same directory, you can set `HADOOP_CONF_DIR` in
`$SPARK_HOME/spark-env.sh` to that directory.
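The two options above can be sketched as shell commands; `/etc/hadoop/conf` is assumed here as the configuration directory, per the common location noted earlier:

```shell
# Option 1: copy the Hadoop client configs into Spark's conf directory,
# where they are picked up on Spark's classpath automatically.
cp /etc/hadoop/conf/hdfs-site.xml /etc/hadoop/conf/core-site.xml "$SPARK_HOME/conf/"

# Option 2: instead, point Spark at the directory that already holds both
# files by adding this line to $SPARK_HOME/spark-env.sh:
export HADOOP_CONF_DIR=/etc/hadoop/conf
```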