From b1d59e60dee2a41f8eff8ef29b3bcac69111e2f0 Mon Sep 17 00:00:00 2001
From: Sean Owen <sowen@cloudera.com>
Date: Tue, 1 Aug 2017 19:05:55 +0100
Subject: [PATCH] [SPARK-21593][DOCS] Fix 2 rendering errors on configuration
 page

## What changes were proposed in this pull request?

Fix 2 rendering errors on the configuration doc page: a missing `</tr><tr>` between the `spark.reducer.maxBlocksInFlightPerAddress` and `spark.reducer.maxReqSizeShuffleToMem` table entries (from SPARK-21243), and an unclosed `<code>` tag in the `spark.storage.replication.proactive` entry (from SPARK-15355).

## How was this patch tested?

Manually built and viewed the docs with Jekyll.
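
For reference, a quick way to preview the rendered page locally (assuming Jekyll is installed; `SKIP_API=1` tells the Spark docs build to skip the slow API doc generation):

```
# From the Spark source root: build and serve the docs,
# then browse http://localhost:4000/configuration.html
cd docs
SKIP_API=1 jekyll serve --watch
```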

Author: Sean Owen <sowen@cloudera.com>

Closes #18793 from srowen/SPARK-21593.
---
 docs/configuration.md | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/docs/configuration.md b/docs/configuration.md
index 500f980455..011d583d6e 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -536,15 +536,17 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-    <td><code>spark.reducer.maxBlocksInFlightPerAddress</code></td>
-    <td>Int.MaxValue</td>
-    <td>
-      This configuration limits the number of remote blocks being fetched per reduce task from a
-      given host port. When a large number of blocks are being requested from a given address in a
-      single fetch or simultaneously, this could crash the serving executor or Node Manager. This
-      is especially useful to reduce the load on the Node Manager when external shuffle is enabled.
-      You can mitigate this issue by setting it to a lower value.
-    </td>
+  <td><code>spark.reducer.maxBlocksInFlightPerAddress</code></td>
+  <td>Int.MaxValue</td>
+  <td>
+    This configuration limits the number of remote blocks being fetched per reduce task from a
+    given host port. When a large number of blocks are being requested from a given address in a
+    single fetch or simultaneously, this could crash the serving executor or Node Manager. This
+    is especially useful to reduce the load on the Node Manager when external shuffle is enabled.
+    You can mitigate this issue by setting it to a lower value.
+  </td>
+</tr>
+<tr>
   <td><code>spark.reducer.maxReqSizeShuffleToMem</code></td>
   <td>Long.MaxValue</td>
   <td>
@@ -1081,7 +1083,7 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td><code>spark.storage.replication.proactive<code></td>
+  <td><code>spark.storage.replication.proactive</code></td>
   <td>false</td>
   <td>
     Enables proactive block replication for RDD blocks. Cached RDD block replicas lost due to
-- 
GitLab
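
As context for the entries fixed above: both properties are ordinary Spark configuration settings. A minimal, illustrative sketch of overriding them at submit time (the values, application class, and jar name below are placeholders; the actual defaults are `Int.MaxValue` and `false` respectively):

```
# Illustrative values only; com.example.App and example.jar are placeholders.
./bin/spark-submit \
  --conf spark.reducer.maxBlocksInFlightPerAddress=100 \
  --conf spark.storage.replication.proactive=true \
  --class com.example.App example.jar
```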