Spark's Metrics System
The configuration template is located at ~/spark-1.1.0-bin-hadoop2.4/conf/metrics.properties.template.
This file mainly configures the monitoring of Spark's internal components, and one or more sinks can be configured in it. The basic concepts of the metrics system are as follows:
"instance" specifies "who" (which role) uses the metrics system. A configured instance can be "master", "worker", "executor", "driver", or "applications". Each of these roles creates its own metrics system for monitoring, so an instance corresponds to a role.
"source" specifies "where" metric data is collected from. There are two kinds of sources in the metrics system:
1. Spark internal sources, such as MasterSource and WorkerSource, which collect the internal state of Spark components. These sources are added automatically once the metrics system for a given instance has been created.
2. Common sources, such as JvmSource, which collect low-level state. They are enabled through configuration options and loaded via reflection.
"sink" specifies "where" (the destination) metric data is output to. Multiple sinks can coexist, and metrics are flushed to all of them.
A metrics property takes the following form:
[instance].[sink|source].[name].[options] = xxxx
[instance] can be "master", "worker", "executor", "driver", or "applications", meaning only the named instance gets the property. The wildcard "*" can be used in place of an instance name, meaning every instance gets the property.
[sink|source] indicates whether the property belongs to a source or a sink; this field can only be "source" or "sink".
[name] specifies the name of the sink or source; it is user-defined.
[options] are the properties specific to that source or sink.
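As a sketch of how these pieces map onto a concrete key, here is an entry (taken from the template's examples) that enables a ConsoleSink on every instance:

```properties
# [instance]    = *        -> applies to every instance (wildcard)
# [sink|source] = sink
# [name]        = console  -> arbitrary, user-chosen name for this sink
# [options]     = class    -> the option being set
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
```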
Configuration notes:
1. Adding a new sink requires setting its "class" option to a fully qualified class name.
2. A sink's polling period must be at least one second.
3. A property that names a specific instance overrides the wildcard form; for example, master.sink.console.period takes precedence over *.sink.console.period.
4. The metrics configuration file is specified via spark.metrics.conf; e.g. spark.metrics.conf=${SPARK_HOME}/conf/metrics.properties must be passed as a Java option such as -Dspark.metrics.conf=xxx. If the file is placed under ${SPARK_HOME}/conf, it is loaded automatically.
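One way to pass that Java option to both the driver and the executors is through spark-defaults.conf, using Spark's standard extraJavaOptions properties. This is a sketch; the file path is a placeholder you would substitute:

```properties
# conf/spark-defaults.conf -- point each JVM at the metrics config file
spark.driver.extraJavaOptions    -Dspark.metrics.conf=/path/to/metrics.properties
spark.executor.extraJavaOptions  -Dspark.metrics.conf=/path/to/metrics.properties
```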
5. A MetricsServlet sink is added by default in the master, worker, and client driver. Requesting "/metrics/json" returns the registered metrics in JSON format. For the master, "/metrics/master/json" and "/metrics/applications/json" return master-instance and application information respectively.
Metrics configuration examples (from metrics.properties.template):
# org.apache.spark.metrics.sink.ConsoleSink
#   Name:     Default:   Description:
#   period    10         Poll period
#   unit      seconds    Units of poll period

# org.apache.spark.metrics.sink.CsvSink
#   Name:      Default:   Description:
#   period     10         Poll period
#   unit       seconds    Units of poll period
#   directory  /tmp       Where to store CSV files

# org.apache.spark.metrics.sink.GangliaSink
#   Name:     Default:   Description:
#   host      NONE       Hostname or multicast group of Ganglia server
#   port      NONE       Port of Ganglia server(s)
#   period    10         Poll period
#   unit      seconds    Units of poll period
#   ttl       1          TTL of messages sent by Ganglia
#   mode      multicast  Ganglia network mode ('unicast' or 'multicast')

# org.apache.spark.metrics.sink.JmxSink

# org.apache.spark.metrics.sink.MetricsServlet
#   Name:     Default:   Description:
#   path      VARIES*    Path prefix from the web server root
#   sample    false      Whether to show entire set of samples for histograms ('false' or 'true')
#
# * Default path is /metrics/json for all instances except the master. The master has two paths:
#     /metrics/applications/json  # App information
#     /metrics/master/json        # Master information

# org.apache.spark.metrics.sink.GraphiteSink
#   Name:     Default:      Description:
#   host      NONE          Hostname of Graphite server
#   port      NONE          Port of Graphite server
#   period    10            Poll period
#   unit      seconds       Units of poll period
#   prefix    EMPTY STRING  Prefix to prepend to metric name
## Examples
# Enable JmxSink for all instances by class name
#*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink

# Enable ConsoleSink for all instances by class name
#*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink

# Polling period for ConsoleSink
#*.sink.console.period=10
#*.sink.console.unit=seconds

# Master instance overlap polling period
#master.sink.console.period=15
#master.sink.console.unit=seconds

# Enable CsvSink for all instances
#*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink

# Polling period for CsvSink
#*.sink.csv.period=1
#*.sink.csv.unit=minutes

# Polling directory for CsvSink
#*.sink.csv.directory=/tmp/

# Worker instance overlap polling period
#worker.sink.csv.period=10
#worker.sink.csv.unit=minutes

# Enable jvm source for instance master, worker, driver and executor
#master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
#worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
#driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
#executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
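Putting the pieces together, a minimal working metrics.properties might look like the sketch below. It combines entries shown in the examples above; the 20-second period is an arbitrary choice, and using the "*" wildcard for the jvm source (rather than one line per instance, as the template does) is an assumption:

```properties
# Minimal metrics.properties sketch: collect JVM metrics from every
# instance and print all metrics to the console every 20 seconds.
*.source.jvm.class=org.apache.spark.metrics.source.JvmSource
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
*.sink.console.period=20
*.sink.console.unit=seconds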
MetricsSystem
The three methods of interest are:
initialize, which loads the default configuration items.
registerSources, which enters registerSource for each configured source and, from there, calls register.
MetricRegistryListener is a listener interface; its callbacks include notifications when metrics are added or removed.
registerSinks, which registers the configured sinks.
Source
ApplicationSource
ApplicationSource registers status, runtime_ms, and cores.
BlockManagerSource
BlockManagerSource registers maxMem_MB, remainingMem_MB, memUsed_MB, and diskSpaceUsed_MB.
DAGSchedulerSource
DAGSchedulerSource registers failedStages, runningStages, waitingStages, allJobs, and activeJobs.
ExecutorSource
ExecutorSource registers activeTasks, completeTasks, currentPool_size, maxPool_size, and others.
MasterSource
MasterSource registers workers, apps, and waitingApps.