From 1e2c9311871968426e019164b129652fd6d0037f Mon Sep 17 00:00:00 2001
From: WeichenXu <WeichenXu123@outlook.com>
Date: Tue, 7 Jun 2016 13:29:27 +0100
Subject: [PATCH] [MINOR] Fix typos in documents

## What changes were proposed in this pull request?

I used spell-check tools to find typos in the Spark documentation and fixed them.

## How was this patch tested?

N/A

Author: WeichenXu <WeichenXu123@outlook.com>

Closes #13538 from WeichenXu123/fix_doc_typo.
---
 docs/graphx-programming-guide.md    | 2 +-
 docs/hardware-provisioning.md       | 2 +-
 docs/streaming-programming-guide.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/graphx-programming-guide.md b/docs/graphx-programming-guide.md
index 9dea9b5904..81cf17475f 100644
--- a/docs/graphx-programming-guide.md
+++ b/docs/graphx-programming-guide.md
@@ -132,7 +132,7 @@ var graph: Graph[VertexProperty, String] = null
 
 Like RDDs, property graphs are immutable, distributed, and fault-tolerant.  Changes to the values or
 structure of the graph are accomplished by producing a new graph with the desired changes.  Note
-that substantial parts of the original graph (i.e., unaffected structure, attributes, and indicies)
+that substantial parts of the original graph (i.e., unaffected structure, attributes, and indices)
 are reused in the new graph reducing the cost of this inherently functional data structure.  The
 graph is partitioned across the executors using a range of vertex partitioning heuristics.  As with
 RDDs, each partition of the graph can be recreated on a different machine in the event of a failure.
diff --git a/docs/hardware-provisioning.md b/docs/hardware-provisioning.md
index 60ecb4f483..bb6f616b18 100644
--- a/docs/hardware-provisioning.md
+++ b/docs/hardware-provisioning.md
@@ -22,7 +22,7 @@ Hadoop and Spark on a common cluster manager like [Mesos](running-on-mesos.html)
 
 * If this is not possible, run Spark on different nodes in the same local-area network as HDFS.
 
-* For low-latency data stores like HBase, it may be preferrable to run computing jobs on different
+* For low-latency data stores like HBase, it may be preferable to run computing jobs on different
 nodes than the storage system to avoid interference.
 
 # Local Disks
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 78ae6a7407..0a6a0397d9 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -1259,7 +1259,7 @@ dstream.foreachRDD(sendRecord)
 </div>
 
 This is incorrect as this requires the connection object to be serialized and sent from the
-driver to the worker. Such connection objects are rarely transferrable across machines. This
+driver to the worker. Such connection objects are rarely transferable across machines. This
 error may manifest as serialization errors (connection object not serializable), initialization
 errors (connection object needs to be initialized at the workers), etc. The correct solution is
 to create the connection object at the worker.
-- 
GitLab