Logstash

This integration collects logs and metrics from Logstash instances.

You can find additional information about monitoring Logstash with the Logstash integration in the Logstash Reference: {{ url "logstash-monitoring-ea" "Monitoring Logstash with Elastic Agent" }}.

Compatibility

The Logstash package works with Logstash 8.5.0 and later.

Metrics Collection

Metrics for the Logstash integration can be collected via Elastic Agent (preferred) or with Stack Monitoring. Elastic Agent queries additional monitoring APIs and powers additional dashboards, giving the best view into your Logstash deployment and pipeline execution.

Elastic Agent based metrics collection is not compatible with the Stack Monitoring UI inside Kibana, so select only Metrics (Elastic Agent). Users who prefer the Stack Monitoring UI should uncheck Metrics (Elastic Agent) and continue to use Metrics (Stack Monitoring).
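For reference, a minimal sketch of polling the Logstash monitoring API directly is shown below; this is the same node stats data that Elastic Agent based collection builds on. It assumes a local Logstash instance on the default API port (9600) and uses only the Python standard library; adjust the URL for your deployment.

```python
# Minimal sketch: poll the Logstash node stats API that metrics collection relies on.
# Assumes a local instance on the default monitoring port (9600); adjust LOGSTASH_URL
# for your deployment.
import json
from urllib.request import urlopen

LOGSTASH_URL = "http://localhost:9600"

with urlopen(f"{LOGSTASH_URL}/_node/stats") as resp:
    stats = json.load(resp)

# A few of the counters that back the node stats dashboards.
print("events in:      ", stats["events"]["in"])
print("events filtered:", stats["events"]["filtered"])
print("events out:     ", stats["events"]["out"])
print("heap used bytes:", stats["jvm"]["mem"]["heap_used_in_bytes"])
```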

Fields and Sample Events

Health Report

The health report API is available starting with Logstash 8.16.0. It provides the health_report dataset, which drives the Node health and Pipeline health dashboards.
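As an illustration, the sketch below queries the health report endpoint and prints the overall status together with each indicator. The /_health_report path and the response layout (a top-level status plus an indicators map) are assumptions based on the monitoring API conventions; verify them against your Logstash version.

```python
# Minimal sketch: query the Logstash health report API (Logstash 8.16.0+).
# The /_health_report path and response fields used here are assumptions;
# check them against your Logstash version.
import json
from urllib.request import urlopen

LOGSTASH_URL = "http://localhost:9600"

with urlopen(f"{LOGSTASH_URL}/_health_report") as resp:
    report = json.load(resp)

# Overall status plus per-indicator detail, the data behind the
# Node health and Pipeline health dashboards.
print("status:", report.get("status"))
for name, indicator in report.get("indicators", {}).items():
    print(f"  {name}: {indicator.get('status')}")
```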

Example

An example event for 'health_report' looks as follows:

{{fields "health_report"}}

{{event "health_report"}}

Node

This is the node dataset, which drives the Node dashboard pages.

Example

{{fields "node_cel"}}

{{event "node_cel"}}

Pipeline

This is the pipeline dataset, which drives the Pipeline dashboard pages.
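As an illustration, the sketch below reads per-pipeline event counters from the node stats API, which is the kind of data this dataset surfaces. It assumes a local Logstash instance on the default API port (9600).

```python
# Minimal sketch: list per-pipeline event counters from the node stats API.
# Assumes a local instance on the default monitoring port (9600).
import json
from urllib.request import urlopen

with urlopen("http://localhost:9600/_node/stats/pipelines") as resp:
    pipelines = json.load(resp)["pipelines"]

for pipeline_id, stats in pipelines.items():
    events = stats.get("events", {})
    print(pipeline_id, "in:", events.get("in"), "out:", events.get("out"))
```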

Example

{{fields "pipeline"}}

{{event "pipeline"}}

Plugin

This is the plugin dataset, which drives the Pipeline detail dashboard pages. Note that this dataset may produce many documents for Logstash instances that use a large number of pipelines and/or plugins within those pipelines. For such instances, we recommend reviewing the pipeline collection period and setting it to an appropriate value.

Example

{{fields "plugins"}}

{{event "plugins"}}

Logs

The Logstash package supports the plain text and JSON log formats. Two types of logs can be activated with the Logstash package:

  • log collects and parses the logs that Logstash writes to disk.
  • slowlog parses the Logstash slowlog (make sure to configure the Logstash slowlog option).
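For the JSON format, a minimal parsing sketch is shown below. The sample line and its field names (level, loggerName, timeMillis, logEvent.message) assume a log4j2-style JSON layout and are illustrative only.

```python
# Minimal sketch: parse one Logstash JSON-format log line into the pieces the
# log data stream extracts. Field names assume a log4j2-style JSON layout.
import json

line = (
    '{"level":"INFO","loggerName":"logstash.agent","timeMillis":1700000000000,'
    '"thread":"Agent thread","logEvent":{"message":"Pipelines running"}}'
)

record = json.loads(line)
print(record["level"], record["loggerName"], record["logEvent"]["message"])
```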

Known issues

When using the log data stream to parse plaintext logs, the fileset may not parse a multiline plaintext log event correctly if it contains an embedded JSON object that starts on a new line.

Metrics

Logstash metric-related data streams work with Logstash 7.3.0 and later.

Node Stats

{{event "node_stats"}}

Exported fields

| Field | Description | Type |
|---|---|---|
| @timestamp | Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events. | date |
| data_stream.dataset | Data stream dataset. | constant_keyword |
| data_stream.namespace | Data stream namespace. | constant_keyword |
| data_stream.type | Data stream type. | constant_keyword |
| host.hostname | Hostname of the host. It normally contains what the hostname command returns on the host machine. | keyword |
| logstash.node.jvm.version | Version | keyword |
| logstash.node.state.pipeline.hash |  | keyword |
| logstash.node.state.pipeline.id |  | keyword |
| logstash.node.stats.events.duration_in_millis |  | long |
| logstash.node.stats.events.filtered | Filtered events counter. | long |
| logstash.node.stats.events.in | Incoming events counter. | long |
| logstash.node.stats.events.out | Outgoing events counter. | long |
| logstash.node.stats.jvm.mem.heap_max_in_bytes |  | long |
| logstash.node.stats.jvm.mem.heap_used_in_bytes |  | long |
| logstash.node.stats.jvm.uptime_in_millis |  | long |
| logstash.node.stats.logstash.uuid |  | keyword |
| logstash.node.stats.logstash.version |  | keyword |
| logstash.node.stats.os.cgroup.cpu.stat.number_of_elapsed_periods |  | long |
| logstash.node.stats.os.cgroup.cpu.stat.number_of_times_throttled |  | long |
| logstash.node.stats.os.cgroup.cpu.stat.time_throttled_nanos |  | long |
| logstash.node.stats.os.cgroup.cpuacct.usage_nanos |  | long |
| logstash.node.stats.os.cpu.load_average.15m |  | long |
| logstash.node.stats.os.cpu.load_average.1m |  | long |
| logstash.node.stats.os.cpu.load_average.5m |  | long |
| logstash.node.stats.pipelines.events.duration_in_millis |  | long |
| logstash.node.stats.pipelines.events.out |  | long |
| logstash.node.stats.pipelines.hash |  | keyword |
| logstash.node.stats.pipelines.id |  | keyword |
| logstash.node.stats.pipelines.queue.events_count |  | long |
| logstash.node.stats.pipelines.queue.max_queue_size_in_bytes |  | long |
| logstash.node.stats.pipelines.queue.queue_size_in_bytes |  | long |
| logstash.node.stats.pipelines.queue.type |  | keyword |
| logstash.node.stats.pipelines.vertices.duration_in_millis |  | long |
| logstash.node.stats.pipelines.vertices.events_in |  | long |
| logstash.node.stats.pipelines.vertices.events_out | events_out | long |
| logstash.node.stats.pipelines.vertices.id | id | keyword |
| logstash.node.stats.pipelines.vertices.pipeline_ephemeral_id | pipeline_ephemeral_id | keyword |
| logstash.node.stats.pipelines.vertices.queue_push_duration_in_millis | queue_push_duration_in_millis | float |
| logstash.node.stats.process.cpu.percent |  | double |
| logstash.node.stats.queue.events_count |  | long |
| logstash_stats.pipelines |  | nested |
| process.pid | Process id. | long |
| service.version | Version of the service the data was collected from. This allows to look at a data set only for a specific version of a service. | keyword |

Node

{{event "node"}}