[SPARK-5847][CORE] Allow for configuring MetricsSystem's use of app ID to namespace all metrics
Mark Grover authored
    ## What changes were proposed in this pull request?
Adding a new property to SparkConf called spark.metrics.namespace that allows users to
set a custom namespace for executor and driver metrics in the metrics system.

By default, the root namespace used for driver or executor metrics is
the value of `spark.app.id`. However, users often want to be able to track these metrics
across apps, which is hard to do with the application ID
(i.e. `spark.app.id`) since it changes with every invocation of the app. For such use cases,
users can set the `spark.metrics.namespace` property to another Spark configuration key, such as
`spark.app.name`, which is then used to populate the root namespace of the metrics system
(with the app name in our example). The `spark.metrics.namespace` property can be set to any
arbitrary Spark property key, whose value is used as the root namespace of the
metrics system. Metrics that do not belong to drivers or executors are never prefixed with
`spark.app.id`, nor does the `spark.metrics.namespace` property have any effect on them.
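As a hedged sketch of how this could look at submission time (the application class, JAR path, and job name below are placeholders, and the exact value syntax accepted by the property may differ in practice):

```bash
# Hypothetical sketch: make the metrics root namespace follow another Spark
# configuration key (spark.app.name), as described above, instead of the
# default spark.app.id. Class, app name, and JAR path are placeholders.
./bin/spark-submit \
  --class com.example.MyApp \
  --conf spark.app.name=nightly_etl \
  --conf spark.metrics.namespace=spark.app.name \
  path/to/my-app.jar
```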
    
    ## How was this patch tested?
    Added new unit tests, modified existing unit tests.
    
    Author: Mark Grover <mark@apache.org>
    
    Closes #14270 from markgrover/spark-5847.
monitoring.md
---
layout: global
title: Monitoring and Instrumentation
description: Monitoring, metrics, and instrumentation guide for Spark SPARK_VERSION_SHORT
---

There are several ways to monitor Spark applications: web UIs, metrics, and external instrumentation.

Web Interfaces

Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:

  • A list of scheduler stages and tasks
  • A summary of RDD sizes and memory usage
  • Environmental information
  • Information about the running executors

You can access this interface by opening http://<driver-node>:4040 in a web browser. If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc.).

Note that this information is only available for the duration of the application by default. To view the web UI after the fact, set spark.eventLog.enabled to true before starting the application. This configures Spark to log Spark events that encode the information displayed in the UI to persisted storage.

Viewing After the Fact

If Spark is run on Mesos or YARN, it is still possible to construct the UI of an application through Spark's history server, provided that the application's event logs exist. You can start the history server by executing:

./sbin/start-history-server.sh

This creates a web interface at http://<server-url>:18080 by default, listing incomplete and completed applications and attempts.

When using the file-system provider class (see spark.history.provider below), the base logging directory must be supplied in the spark.history.fs.logDirectory configuration option, and should contain sub-directories, each of which represents an application's event logs.

The Spark jobs themselves must be configured to log events, and to log them to the same shared, writable directory. For example, if the server was configured with a log directory of hdfs://namenode/shared/spark-logs, then the client-side options would be:

spark.eventLog.enabled true
spark.eventLog.dir hdfs://namenode/shared/spark-logs
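The same options can also be supplied per submission on the spark-submit command line rather than in a properties file; a minimal sketch (the JAR path is a placeholder):

```bash
# Sketch: enable event logging for a single submission; the log directory
# must match the one the history server reads from.
./bin/spark-submit \
  --conf spark.eventLog.enabled=true \
  --conf spark.eventLog.dir=hdfs://namenode/shared/spark-logs \
  path/to/my-app.jar
```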

The history server can be configured as follows:

Environment Variables

| Environment Variable | Meaning |
| --- | --- |
| SPARK_DAEMON_MEMORY | Memory to allocate to the history server (default: 1g). |
| SPARK_DAEMON_JAVA_OPTS | JVM options for the history server (default: none). |
| SPARK_PUBLIC_DNS | The public address for the history server. If this is not set, links to application history may use the internal address of the server, resulting in broken links (default: none). |
| SPARK_HISTORY_OPTS | spark.history.* configuration options for the history server (default: none). |

Spark configuration options

| Property Name | Default | Meaning |
| --- | --- | --- |
| spark.history.provider | org.apache.spark.deploy.history.FsHistoryProvider | Name of the class implementing the application history backend. Currently there is only one implementation, provided by Spark, which looks for application logs stored in the file system. |
| spark.history.fs.logDirectory | file:/tmp/spark-events | For the filesystem history provider, the URL to the directory containing application event logs to load. This can be a local file:// path, an HDFS path hdfs://namenode/shared/spark-logs, or that of an alternative filesystem supported by the Hadoop APIs. |
| spark.history.fs.update.interval | 10s | The period at which the filesystem history provider checks for new or updated logs in the log directory. A shorter interval detects new applications faster, at the expense of more server load re-reading updated applications. As soon as an update has completed, listings of the completed and incomplete applications will reflect the changes. |
| spark.history.retainedApplications | 50 | The number of application UIs to retain. If this cap is exceeded, then the oldest applications will be removed. |
| spark.history.ui.port | 18080 | The port to which the web interface of the history server binds. |
| spark.history.kerberos.enabled | false | Indicates whether the history server should use kerberos to log in. This is required if the history server is accessing HDFS files on a secure Hadoop cluster. If this is true, it uses the configs spark.history.kerberos.principal and spark.history.kerberos.keytab. |
| spark.history.kerberos.principal | (none) | Kerberos principal name for the History Server. |
| spark.history.kerberos.keytab | (none) | Location of the kerberos keytab file for the History Server. |
| spark.history.ui.acls.enable | false | Specifies whether ACLs should be checked to authorize users viewing the applications. If enabled, access control checks are made regardless of what the individual application had set for spark.ui.acls.enable when the application was run. The application owner will always have authorization to view their own application, and any users specified via spark.ui.view.acls and groups specified via spark.ui.view.acls.groups when the application was run will also have authorization to view that application. If disabled, no access control checks are made. |
| spark.history.fs.cleaner.enabled | false | Specifies whether the History Server should periodically clean up event logs from storage. |
| spark.history.fs.cleaner.interval | 1d | How often the filesystem job history cleaner checks for files to delete. Files are only deleted if they are older than spark.history.fs.cleaner.maxAge. |
| spark.history.fs.cleaner.maxAge | 7d | Job history files older than this will be deleted when the filesystem history cleaner runs. |
| spark.history.fs.numReplayThreads | 25% of available cores | Number of threads that will be used by the history server to process event logs. |
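As a hedged illustration of how the environment variables and the spark.history.* options above fit together, something like the following could go in conf/spark-env.sh before starting the history server (the directory and values are placeholders, not recommendations):

```bash
# Sketch for conf/spark-env.sh: pass spark.history.* options to the history
# server as JVM system properties via SPARK_HISTORY_OPTS.
export SPARK_DAEMON_MEMORY=2g
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://namenode/shared/spark-logs \
  -Dspark.history.fs.cleaner.enabled=true \
  -Dspark.history.ui.port=18080"

# Then start the server as shown earlier:
./sbin/start-history-server.sh
```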

Note that in all of these UIs, the tables are sortable by clicking their headers, making it easy to identify slow tasks, data skew, etc.

Note

  1. The history server displays both completed and incomplete Spark jobs. If an application makes multiple attempts after failures, the failed attempts will be displayed, as well as any ongoing incomplete attempt or the final successful attempt.

  2. Incomplete applications are only updated intermittently. The time between updates is defined by the interval between checks for changed files (spark.history.fs.update.interval). On larger clusters the update interval may be set to large values. The way to view a running application is actually to view its own web UI.

  3. Applications which exited without registering themselves as completed will be listed as incomplete, even though they are no longer running. This can happen if an application crashes.

  4. One way to signal the completion of a Spark job is to stop the Spark Context explicitly (sc.stop()), or in Python using the with SparkContext() as sc: construct to handle the Spark Context setup and tear down.

REST API

In addition to being viewable in the UI, the metrics are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available both for running applications and in the history server. The endpoints are mounted at /api/v1. For example, for the history server they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application at http://localhost:4040/api/v1.

In the API, an application is referenced by its application ID, [app-id]. When running on YARN, each application may have multiple attempts, but there are attempt IDs only for applications in cluster mode, not applications in client mode. Applications in YARN cluster mode can be identified by their [attempt-id]. In the API listed below, when running in YARN cluster mode, [app-id] will actually be [base-app-id]/[attempt-id], where [base-app-id] is the YARN application ID.
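For example (the host, port, and application/attempt IDs below are placeholders):

```bash
# List applications known to a history server (placeholder host/port).
curl http://localhost:18080/api/v1/applications

# Jobs for one attempt of an application run in YARN cluster mode, where
# [base-app-id]/[attempt-id] stands in for [app-id] as described above.
curl http://localhost:18080/api/v1/applications/application_1234567890123_0001/1/jobs

# The same endpoints are served by a running application's own UI (port 4040 by default).
curl http://localhost:4040/api/v1/applications
```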

| Endpoint | Meaning |
| --- | --- |
| /applications | A list of all applications. |
| ?status=[completed\|running] | List only applications in the chosen state. |
| ?minDate=[date] | Earliest date/time to list. Examples: ?minDate=2015-02-10, ?minDate=2015-02-03T16:42:40.000GMT |
| ?maxDate=[date] | Latest date/time to list; uses the same format as minDate. |
| /applications/[app-id]/jobs | A list of all jobs for a given application. |
| ?status=[complete\|succeeded\|failed] | List only jobs in the specified state. |
| /applications/[app-id]/jobs/[job-id] | Details for the given job. |
| /applications/[app-id]/stages | A list of all stages for a given application. |
| /applications/[app-id]/stages/[stage-id] | A list of all attempts for the given stage. |
| ?status=[active\|complete\|pending\|failed] | List only stages in the given state. |
| /applications/[app-id]/stages/[stage-id]/[stage-attempt-id] | Details for the given stage attempt. |
| /applications/[app-id]/stages/[stage-id]/[stage-attempt-id]/taskSummary | Summary metrics of all tasks in the given stage attempt. |
| ?quantiles | Summarize the metrics with the given quantiles. Example: ?quantiles=0.01,0.5,0.99 |
| /applications/[app-id]/stages/[stage-id]/[stage-attempt-id]/taskList | A list of all tasks for the given stage attempt. |
| ?offset=[offset]&length=[len] | List tasks in the given range. |
| ?sortBy=[runtime\|-runtime] | Sort the tasks. Example: ?offset=10&length=50&sortBy=runtime |
| /applications/[app-id]/executors | A list of all active executors for the given application. |
| /applications/[app-id]/allexecutors | A list of all (active and dead) executors for the given application. |
| /applications/[app-id]/storage/rdd | A list of stored RDDs for the given application. |
| /applications/[app-id]/storage/rdd/[rdd-id] | Details for the storage status of a given RDD. |
| /applications/[base-app-id]/logs | Download the event logs for all attempts of the given application as files within a zip file. |
| /applications/[base-app-id]/[attempt-id]/logs | Download the event logs for a specific application attempt as a zip file. |
The number of jobs and stages which can be retrieved is constrained by the same retention mechanism as the standalone Spark UI; spark.ui.retainedJobs defines the threshold value triggering garbage collection on jobs, and spark.ui.retainedStages that for stages. Note that the garbage collection takes place on playback: it is possible to retrieve more entries by increasing these values and restarting the history server.
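Following that note, a hedged sketch of raising these thresholds on the history server and restarting it (the values are arbitrary placeholders, and SPARK_HISTORY_OPTS is just one way to pass them):

```bash
# Sketch: allow more jobs and stages to be kept when event logs are replayed.
export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS \
  -Dspark.ui.retainedJobs=5000 \
  -Dspark.ui.retainedStages=5000"

./sbin/stop-history-server.sh
./sbin/start-history-server.sh
```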

API Versioning Policy

These endpoints have been strongly versioned to make it easier to develop applications on top. In particular, Spark guarantees:

  • Endpoints will never be removed from one version
  • Individual fields will never be removed for any given endpoint
  • New endpoints may be added
  • New fields may be added to existing endpoints
  • New versions of the API may be added in the future at a separate endpoint (e.g., api/v2). New versions are not required to be backwards compatible.
  • API versions may be dropped, but only after at least one minor release of co-existing with a new API version.