diff --git a/docs/configuration.md b/docs/configuration.md
index f292bfbb7dcd65c4ea9799d34e29499e0928e92c..673cdb371a5127b6ada54626bc30a300a011c2d9 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -1228,7 +1228,7 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td><code>spark.streaming.receiver.writeAheadLogs.enable</code></td>
+  <td><code>spark.streaming.receiver.writeAheadLog.enable</code></td>
   <td>false</td>
   <td>
     Enable write ahead logs for receivers. All the input data received through receivers
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 01450efe35e553538450594ae8816df302910435..e37a2bb37b9a48e70afbe2ff1eed7bc4b06d3a45 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -1574,7 +1574,7 @@ To run a Spark Streaming applications, you need to have the following.
   recovery, thus ensuring zero data loss (discussed in detail in the
   [Fault-tolerance Semantics](#fault-tolerance-semantics) section). This can be enabled by setting
   the [configuration parameter](configuration.html#spark-streaming)
-  `spark.streaming.receiver.writeAheadLog.enable` to `true`. However, these stronger semantics may
+  `spark.streaming.receiver.writeAheadLog.enable` to `true`. However, these stronger semantics may
   come at the cost of the receiving throughput of individual receivers. This can be corrected by
   running [more receivers in parallel](#level-of-parallelism-in-data-receiving) to increase aggregate
   throughput. Additionally, it is recommended that the replication of the
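
For reference, a minimal sketch of how the corrected key (singular `writeAheadLog`, not `writeAheadLogs`) would be set by a user. The file path and app invocation below are placeholders, not part of this patch; the key and its `true` value come from the docs being fixed:

```
# spark-defaults.conf (hypothetical deployment config)
spark.streaming.receiver.writeAheadLog.enable   true
```

Equivalently, it can be passed per-application on the command line, e.g. `spark-submit --conf spark.streaming.receiver.writeAheadLog.enable=true ...`. As the streaming guide hunk notes, enabling this trades some per-receiver throughput for zero-data-loss recovery.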