Commit 22b982d2 authored by Patrick Wendell's avatar Patrick Wendell

File rename

parent 61c4762d
@@ -98,7 +98,7 @@
 <ul class="dropdown-menu">
 <li><a href="configuration.html">Configuration</a></li>
 <li><a href="tuning.html">Tuning Guide</a></li>
-<li><a href="cdh-hdp.html">Running with CDH/HDP</a></li>
+<li><a href="hadoop-third-party-distributions.html">Running with CDH/HDP</a></li>
 <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
 <li><a href="building-with-maven.html">Building Spark with Maven</a></li>
 <li><a href="contributing-to-spark.html">Contributing to Spark</a></li>
@@ -54,9 +54,7 @@ Spark can run in a variety of deployment modes:
 cores dedicated to Spark on each node.
 * Run Spark alongside Hadoop using a cluster resource manager, such as YARN or Mesos.
-These options are identical for those using CDH and HDP. Note that if you have a YARN cluster,
-but still prefer to run Spark on a dedicated set of nodes rather than scheduling through YARN,
-use `mr1` versions of HADOOP_HOME when compiling.
+These options are identical for those using CDH and HDP.
 # Inheriting Cluster Configuration
 If you plan to read and write from HDFS using Spark, there are two Hadoop configuration files that
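The second hunk's trailing context cuts off mid-sentence at the "Inheriting Cluster Configuration" section. In the Spark docs this section describes pointing Spark at the Hadoop configuration directory so HDFS settings are picked up. As a sketch only (not part of this commit; the `/etc/hadoop/conf` path is an assumption that varies by distribution), the setup typically looks like:

```shell
# Sketch, not part of this commit: Spark inherits the cluster's HDFS
# settings when HADOOP_CONF_DIR points at the directory containing
# hdfs-site.xml and core-site.xml. The path below is an assumed
# CDH/HDP-style layout; adjust it for your installation.
export HADOOP_CONF_DIR=/etc/hadoop/conf   # e.g. in conf/spark-env.sh
```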