diff --git a/docs/configuration.md b/docs/configuration.md
index 8136bd62ab6af2215893f9b952b33c875a9ce9cc..c8336b39133de958805d6dc0c06740764c631270 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -562,7 +562,7 @@ Apart from these, the following properties are also available, and may be useful
     </td>
 </tr>
 <tr>
-  <td>spark.hadoop.validateOutputSpecs</td>
+  <td><code>spark.hadoop.validateOutputSpecs</code></td>
   <td>true</td>
   <td>If set to true, validates the output specification (e.g. checking if the output directory already exists)
     used in saveAsHadoopFile and other variants. This can be disabled to silence exceptions due to pre-existing
@@ -570,7 +570,7 @@ Apart from these, the following properties are also available, and may be useful
     previous versions of Spark. Simply use Hadoop's FileSystem API to delete output directories by hand.</td>
 </tr>
 <tr>
-  <td>spark.executor.heartbeatInterval</td>
+  <td><code>spark.executor.heartbeatInterval</code></td>
   <td>10000</td>
   <td>Interval (milliseconds) between each executor's heartbeats to the driver. Heartbeats let the driver know
     that the executor is still alive and update it with metrics for in-progress