Zabbix Preprocessing

Plugin: scripts.d.plugin Module: zabbix

Overview

This module runs Zabbix-style data collection jobs natively inside Netdata.

It supports Zabbix's master-item + dependent-pipeline pattern: a single collection step (command, HTTP, or SNMP) produces raw data, and one or more preprocessing pipelines extract individual metrics using Zabbix-compatible preprocessing steps (JSONPath, regex, JavaScript, SNMP walk, Prometheus parsing, CSV, XPath, and more).

For each configured job it collects:

  • User-defined metrics: Each dependent pipeline produces a charted metric with configurable context, unit, family, chart type, and dimension algorithm.
  • Job state: OK / WARNING / ERROR / UNKNOWN state tracking per job and per discovered instance.
  • Low-level discovery (LLD): Optional discovery pipelines that dynamically create instances from JSON arrays, similar to Zabbix LLD.

Collection supports three modes:

  • Command: Runs an external script/binary via nd-run and captures stdout.
  • HTTP: Performs an HTTP request and captures the response body.
  • SNMP: Queries an SNMP agent (GET or WALK) and captures the result.

The raw output is then processed through Zabbix-compatible preprocessing steps to extract metrics. Zabbix macros ({HOST.NAME}, {HOST.IP}, {#MACRO}, etc.) are expanded before execution.
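As an illustration, a collection definition can reference host macros directly (hedged sketch; `check_host.sh` is a hypothetical script used only for demonstration):

```yaml
collection:
  type: command
  # {HOST.NAME} is expanded to the node's (or vnode's) hostname before execution
  command: /usr/local/bin/check_host.sh {HOST.NAME}
```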

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

Command-mode plugins run as the netdata user via nd-run.

Default Behavior

Auto-Detection

No auto-detection. Each job must be explicitly configured with a collection definition and one or more dependent_pipelines.

Limits

The default configuration for this integration does not impose any limits on data collection.

Performance Impact

Command-mode jobs spawn a subprocess per execution. HTTP and SNMP modes use in-process clients.

Metrics

Metrics grouped by scope.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

Virtual Node Label Conventions

When a job references a vnode, the module reads Zabbix host macros from the virtual node's labels using prefix conventions:

| Label key | Zabbix macro | Description |
|-----------|--------------|-------------|
| `_address` | `{HOST.IP}`, `{HOST.CONN}` | IP address or DNS name of the host |
| `_alias` | `{HOST.ALIAS}` | Human-readable host alias |
| Other `_`-prefixed | N/A | Reserved for future use |

The {HOST.NAME}, {HOST.HOST}, and {HOST.DNS} macros are derived from the vnode hostname.
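For example, a virtual node whose labels follow these conventions might be defined like this (illustrative sketch only; the exact virtual node configuration format is documented separately in Netdata's vnodes configuration):

```yaml
# vnodes configuration (sketch)
- hostname: my-switch          # -> {HOST.NAME}, {HOST.HOST}, {HOST.DNS}
  labels:
    _address: 192.0.2.10       # -> {HOST.IP}, {HOST.CONN}
    _alias: "Core switch"      # -> {HOST.ALIAS}
```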

Per pipeline

Metrics produced by each dependent preprocessing pipeline.

Labels:

| Label | Description |
|-------|-------------|
| zabbix_job | Job name. |
| zabbix_pipeline | Pipeline name. |

Metrics:

| Metric | Dimensions | Unit |
|--------|------------|------|
| zabbix.{context} | {dimension} | varies |

Per job

Per-job instance state tracking.

Labels:

| Label | Description |
|-------|-------------|
| zabbix_job | Job name. |

Metrics:

| Metric | Dimensions | Unit |
|--------|------------|------|
| zabbix.{job}.state | ok, collect_failure, lld_failure, extraction_failure, dimension_failure | state |

Alerts

There are no alerts configured by default for this integration.

Setup

Prerequisites

No action required.

Configuration

Options

Each job defines a collection source and one or more dependent preprocessing pipelines.

Config options

| Group | Option | Description | Default | Required |
|-------|--------|-------------|---------|----------|
| Collection | `collection.type` | Collection mode (`command`, `http`, or `snmp`). | | yes |
| | `collection.command` | Command to execute (for `command` type). | | no |
| | `collection.http.url` | URL to fetch (for `http` type). | | no |
| | `collection.snmp.target` | SNMP target host (for `snmp` type). | | no |
| | `collection.snmp.oid` | SNMP OID to query. | | no |
| Pipelines | `dependent_pipelines[].name` | Pipeline name (used for chart identification). | | yes |
| | `dependent_pipelines[].context` | Netdata chart context. | | yes |
| | `dependent_pipelines[].dimension` | Dimension name within the chart. | | yes |
| | `dependent_pipelines[].unit` | Metric unit string. | | yes |
| | `dependent_pipelines[].steps` | Array of Zabbix preprocessing steps applied to the raw collection output. | `[]` | yes |
| General | `vnode` | Virtual node name for host macro resolution. | | no |
| Discovery | `lld` | Low-level discovery configuration for dynamic instance creation. | | no |
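To illustrate the master-item + dependent-pipeline pattern, a single collection can feed multiple dependent pipelines, each extracting a different metric from the same raw output (hedged sketch reusing the schema from the examples below; `get_stats.sh` and the JSON field names are hypothetical):

```yaml
jobs:
  - name: app_stats
    collection:
      type: command
      command: /usr/local/bin/get_stats.sh   # hypothetical script emitting JSON
    dependent_pipelines:
      - name: requests
        context: myapp.requests
        dimension: total
        unit: requests
        steps:
          - type: jsonpath
            params: "$.requests"
      - name: errors
        context: myapp.errors
        dimension: total
        unit: errors
        steps:
          - type: jsonpath
            params: "$.errors"
```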

via File

The configuration file name for this integration is `scripts.d/zabbix.conf`.

You can edit the configuration file using the edit-config script from the Netdata config directory.

```sh
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config scripts.d/zabbix.conf
```
Examples
Command collection with JSONPath extraction

Run a script and extract a metric using JSONPath.

Config
```yaml
jobs:
  - name: api_latency
    collection:
      type: command
      command: /usr/local/bin/get_api_stats.sh
    dependent_pipelines:
      - name: latency
        context: myapp.api.latency
        dimension: p99
        unit: milliseconds
        steps:
          - type: jsonpath
            params: "$.latency.p99"
```

SNMP collection

Query an SNMP OID and chart the result.

Config
```yaml
jobs:
  - name: disk_usage
    vnode: my-switch
    collection:
      type: snmp
      snmp:
        target: "{HOST.IP}"
        oid: ".1.3.6.1.2.1.25.2.3.1.6"
        version: v2c
        community: public
    dependent_pipelines:
      - name: used
        context: zabbix.disk.used
        dimension: used
        unit: bytes
        steps:
          - type: snmp_get_value
```
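HTTP collection

The third collection mode, HTTP, follows the same shape. The sketch below is illustrative only: the URL and JSON field are hypothetical, and the `jsonpath` step is the same type used in the command example above.

```yaml
jobs:
  - name: web_stats
    collection:
      type: http
      http:
        url: http://127.0.0.1:8080/status   # hypothetical status endpoint
    dependent_pipelines:
      - name: connections
        context: myapp.web.connections
        dimension: active
        unit: connections
        steps:
          - type: jsonpath
            params: "$.connections.active"
```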

