Proxmox VE Monitoring
Plugin: guides Module: proxmox
Overview
This guide describes how Netdata monitors Proxmox VE hypervisors. Netdata provides comprehensive, zero-configuration monitoring of Proxmox hosts, including per-VM and per-container resource utilization, host system metrics, storage health, and cluster components.
When installed on a Proxmox host, Netdata automatically discovers and monitors all KVM/QEMU virtual machines and LXC containers through Linux cgroups, resolving friendly names for each VM and container.
Netdata uses multiple collectors working together to provide full Proxmox visibility:
- cgroups.plugin monitors per-VM and per-container CPU, memory, disk I/O, and network via Linux cgroups. It automatically resolves VM names from `/etc/pve/qemu-server/<VMID>.conf` and container hostnames from `/etc/pve/lxc/<CTID>.conf`.
- proc.plugin monitors host-level system metrics (CPU, memory, network interfaces, disk I/O).
- apps.plugin monitors Proxmox-specific process groups (`proxmox-ve`, `libvirt`, `qemu-guest-agent`).
- go.d/zfspool monitors ZFS pool health, space utilization, and fragmentation (ZFS is common on Proxmox).
- go.d/ceph monitors Ceph cluster health and performance (for Proxmox clusters using Ceph storage).
- go.d/smartctl monitors physical disk SMART health data.
- go.d/sensors monitors hardware temperature, fan speed, and voltage.
- ebpf.plugin provides kernel-level visibility into VM/container syscalls, file I/O, and network activity.
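The name resolution mentioned above relies on the `name:` key that Proxmox writes into each guest's config file. A minimal sketch of that kind of lookup, using a temporary file as a stand-in for a real VM config (illustrative only; Netdata's `cgroup-name.sh` handles the real logic):

```shell
# Sketch of extracting a friendly VM name from a Proxmox-style config.
# A real QEMU VM config lives at /etc/pve/qemu-server/<VMID>.conf and
# carries a "name:" key; we fake one here so the example is self-contained.
conf=$(mktemp)                              # stand-in for a real VM config file
printf 'name: webserver\ncores: 4\n' > "$conf"
vm_name=$(awk -F': ' '/^name:/ {print $2}' "$conf")
echo "$vm_name"                             # prints: webserver
rm -f "$conf"
```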
This collector is only supported on the following platforms:
- Linux
This collector only supports collecting metrics from a single instance of this integration.
Proxmox VE hosts can be monitored further using the following related integrations:
- Proxmox VMs and Containers
- Systemd Services
- Applications
- ZFS Pools
- Ceph
- S.M.A.R.T.
- Linux Sensors
- Network interfaces
- Disk Statistics
- System statistics
- Memory Usage
- ZFS Adaptive Replacement Cache
Default Behavior
Auto-Detection
When Netdata is installed on a Proxmox VE host, it automatically detects and monitors:
- All running KVM/QEMU virtual machines
- All running LXC containers
- Host system resources (CPU, memory, network, disks)
- Systemd services (pveproxy, pvedaemon, pvestatd, corosync, etc.)
- ZFS pools (if ZFS is used)
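To cross-check what auto-detection should find, you can look at the cgroup tree Netdata reads. The paths below are an assumption, typical for a cgroup v2 Proxmox host; older cgroup v1 setups lay things out differently:

```shell
# Typical cgroup v2 locations for Proxmox guests (paths are assumptions;
# adjust for your host's cgroup layout).
for d in /sys/fs/cgroup/qemu.slice /sys/fs/cgroup/lxc; do
    if [ -d "$d" ]; then
        echo "found: $d"
        ls "$d"
    else
        echo "not present: $d (no running guests, or a different cgroup layout)"
    fi
done
```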
Limits
The default configuration for this integration does not impose any limits on data collection.
Performance Impact
The default configuration for this integration is not expected to impose a significant performance impact on the system.
Metrics
This guide does not collect metrics directly. Metrics are collected by the related integrations listed above. See each integration's page for detailed metric documentation.
Alerts
There are no alerts configured by default for this integration.
Setup
Prerequisites
Install Netdata on the Proxmox host
Netdata must be installed directly on the Proxmox VE host (not inside a VM or container) to access cgroups for all VMs and containers.
wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh && sh /tmp/netdata-kickstart.sh
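After the kickstart script finishes, a quick way to confirm the agent is up is to probe its default dashboard port (19999):

```shell
# Probe the local Netdata API; 19999 is the default listen port.
if curl -fsS http://localhost:19999/api/v1/info >/dev/null 2>&1; then
    echo "netdata is up on :19999"
else
    echo "netdata not reachable on :19999 (check 'systemctl status netdata')"
fi
```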
Configuration
Options
There are no configuration options.
via File
There is no configuration file.
Examples
There are no configuration examples.
Troubleshooting
VM or container names not resolved
If VMs or containers show raw cgroup paths instead of friendly names, verify that:
- Netdata is installed on the Proxmox host (not inside a VM)
- The `/etc/pve/` directory is accessible to the netdata user
- The `cgroup-name.sh` script can read VM/container configuration files
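As a starting point for the second check, you can inspect the permissions on `/etc/pve`. On a Proxmox host this is a FUSE mount (pmxcfs), typically owned by root with the `www-data` group; whether the netdata user can read it depends on your setup:

```shell
# Show owner, group, and mode of the Proxmox config tree; on a Proxmox
# host /etc/pve is a pmxcfs FUSE mount, typically owned root:www-data.
stat -c '%A %U:%G %n' /etc/pve 2>/dev/null \
    || echo "/etc/pve not present (is this a Proxmox VE host?)"
```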
Missing ZFS metrics
If ZFS pool metrics are not showing, ensure the zfspool collector is enabled and the zpool command is available to the netdata user.
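A quick check of both conditions at once, assuming the netdata user's PATH matches yours:

```shell
# Verify the zpool binary the collector relies on is available, and show
# pool name, health, and fragmentation if any pools exist.
if command -v zpool >/dev/null 2>&1; then
    zpool list -H -o name,health,frag
else
    echo "zpool not found in PATH; install the ZFS userland tools"
fi
```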
Missing Ceph metrics
Ceph metrics require the Ceph collector to be configured with the Ceph REST API endpoint. See the Ceph integration page for details.
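For orientation, a go.d ceph job is typically pointed at the Ceph manager REST API. The fragment below is a hypothetical `/etc/netdata/go.d/ceph.conf` with placeholder endpoint and credentials; verify the exact option names against the Ceph integration page for your Netdata version:

```yaml
# Illustrative only -- endpoint and credentials are placeholders, and
# option names may differ by Netdata version.
jobs:
  - name: local
    url: https://127.0.0.1:8443   # Ceph manager REST API endpoint
    username: monitoring
    password: changeme
```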