diff --git a/docs/monitoring.md b/docs/monitoring.md
index 37ede476c187dce91004c3d4be86b1d39066fa6e..6816671ffbf461ff004eb131ff699b499f3e8c1b 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -173,6 +173,8 @@ follows:
 Note that in all of these UIs, the tables are sortable by clicking their headers,
 making it easy to identify slow tasks, data skew, etc.
 
+Note that the history server only displays completed Spark jobs. One way to signal the completion of a Spark job is to stop the Spark Context explicitly (`sc.stop()`); in Python, you can instead use the `with SparkContext() as sc:` construct, which handles the Spark Context setup and tear down for you and still shows the job history in the UI.
+
 # Metrics
 
 Spark has a configurable metrics system based on the
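A minimal sketch of the two completion patterns the added note describes: stopping the Spark Context explicitly with `sc.stop()`, and (in Python) using the `with SparkContext() as sc:` construct. The app name and the sample job are illustrative only, and the snippet assumes the master is supplied by `spark-submit` or the default configuration.

```python
from pyspark import SparkConf, SparkContext

# Pattern 1: stop the context explicitly so the application is marked
# complete and appears in the history server.
conf = SparkConf().setAppName("history-example")  # app name is illustrative
sc = SparkContext(conf=conf)
try:
    print(sc.parallelize(range(100)).sum())  # stand-in for the real job
finally:
    sc.stop()

# Pattern 2 (Python only): the with-statement tears the context down on
# exit, even if the job raises, so the run is still recorded as complete.
with SparkContext(conf=SparkConf().setAppName("history-example")) as sc:
    print(sc.parallelize(range(100)).sum())
```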