
PostgreSQL

Plugin: go.d.plugin Module: postgres

Overview

This collector monitors the activity and performance of Postgres servers. It collects replication statistics, metrics for each database, table, and index, and more.

It establishes a connection to the Postgres instance via a TCP or UNIX socket. To collect metrics for database tables and indexes, it establishes an additional connection for each discovered database.

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

Default Behavior

Auto-Detection

By default, it detects instances running on localhost by trying to connect as the root and netdata users to known PostgreSQL TCP and UNIX sockets:

  • 127.0.0.1:5432
  • /var/run/postgresql/
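
As an illustration only (the exact stock jobs may differ), connecting to these targets as the netdata user corresponds to DSNs such as:

    postgresql://netdata@127.0.0.1:5432/postgres
    host=/var/run/postgresql dbname=postgres user=netdata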

Limits

Table and index metrics are not collected for databases with more than 50 tables or 250 indexes. These limits can be changed in the configuration file.
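
If these defaults don't suit your databases, you can raise (or disable) the limits per job in go.d/postgres.conf using the documented max_db_tables and max_db_indexes options. A minimal sketch (job name and DSN are illustrative):

jobs:
  - name: local
    dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'
    max_db_tables: 100   # 0 disables the limit
    max_db_indexes: 500  # 0 disables the limit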

Performance Impact

The default configuration for this integration is not expected to impose a significant performance impact on the system.

Metrics

Metrics grouped by scope.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

Per PostgreSQL instance

These metrics refer to the entire monitored application.

This scope has no labels.

Metrics:

Metric | Dimensions | Unit
postgres.connections_utilization | used | percentage
postgres.connections_usage | available, used | connections
postgres.connections_state_count | active, idle, idle_in_transaction, idle_in_transaction_aborted, disabled | connections
postgres.transactions_duration | a dimension per bucket | transactions/s
postgres.queries_duration | a dimension per bucket | queries/s
postgres.locks_utilization | used | percentage
postgres.checkpoints_rate | scheduled, requested | checkpoints/s
postgres.checkpoints_time | write, sync | milliseconds
postgres.bgwriter_halts_rate | maxwritten | events/s
postgres.buffers_io_rate | checkpoint, backend, bgwriter | B/s
postgres.buffers_backend_fsync_rate | fsync | calls/s
postgres.buffers_allocated_rate | allocated | B/s
postgres.wal_io_rate | write | B/s
postgres.wal_files_count | written, recycled | files
postgres.wal_archiving_files_count | ready, done | files/s
postgres.autovacuum_workers_count | analyze, vacuum_analyze, vacuum, vacuum_freeze, brin_summarize | workers
postgres.txid_exhaustion_towards_autovacuum_perc | emergency_autovacuum | percentage
postgres.txid_exhaustion_perc | txid_exhaustion | percentage
postgres.txid_exhaustion_oldest_txid_num | xid | xid
postgres.catalog_relations_count | ordinary_table, index, sequence, toast_table, view, materialized_view, composite_type, foreign_table, partitioned_table, partitioned_index | relations
postgres.catalog_relations_size | ordinary_table, index, sequence, toast_table, view, materialized_view, composite_type, foreign_table, partitioned_table, partitioned_index | B
postgres.uptime | uptime | seconds
postgres.databases_count | databases | databases

Per repl application

These metrics refer to the replication application.

Labels:

Label | Description
application | application name

Metrics:

Metric | Dimensions | Unit
postgres.replication_app_wal_lag_size | sent_lag, write_lag, flush_lag, replay_lag | B
postgres.replication_app_wal_lag_time | write_lag, flush_lag, replay_lag | seconds

Per repl slot

These metrics refer to the replication slot.

Labels:

Label | Description
slot | replication slot name

Metrics:

Metric | Dimensions | Unit
postgres.replication_slot_files_count | wal_keep, pg_replslot_files | files

Per database

These metrics refer to the database.

Labels:

Label | Description
database | database name

Metrics:

Metric | Dimensions | Unit
postgres.db_transactions_ratio | committed, rollback | percentage
postgres.db_transactions_rate | committed, rollback | transactions/s
postgres.db_connections_utilization | used | percentage
postgres.db_connections_count | connections | connections
postgres.db_cache_io_ratio | miss | percentage
postgres.db_io_rate | memory, disk | B/s
postgres.db_ops_fetched_rows_ratio | fetched | percentage
postgres.db_ops_read_rows_rate | returned, fetched | rows/s
postgres.db_ops_write_rows_rate | inserted, deleted, updated | rows/s
postgres.db_conflicts_rate | conflicts | queries/s
postgres.db_conflicts_reason_rate | tablespace, lock, snapshot, bufferpin, deadlock | queries/s
postgres.db_deadlocks_rate | deadlocks | deadlocks/s
postgres.db_locks_held_count | access_share, row_share, row_exclusive, share_update, share, share_row_exclusive, exclusive, access_exclusive | locks
postgres.db_locks_awaited_count | access_share, row_share, row_exclusive, share_update, share, share_row_exclusive, exclusive, access_exclusive | locks
postgres.db_temp_files_created_rate | created | files/s
postgres.db_temp_files_io_rate | written | B/s
postgres.db_size | size | B

Per table

These metrics refer to the database table.

Labels:

Label | Description
database | database name
schema | schema name
table | table name
parent_table | parent table name

Metrics:

Metric | Dimensions | Unit
postgres.table_rows_dead_ratio | dead | percentage
postgres.table_rows_count | live, dead | rows
postgres.table_ops_rows_rate | inserted, deleted, updated | rows/s
postgres.table_ops_rows_hot_ratio | hot | percentage
postgres.table_ops_rows_hot_rate | hot | rows/s
postgres.table_cache_io_ratio | miss | percentage
postgres.table_io_rate | memory, disk | B/s
postgres.table_index_cache_io_ratio | miss | percentage
postgres.table_index_io_rate | memory, disk | B/s
postgres.table_toast_cache_io_ratio | miss | percentage
postgres.table_toast_io_rate | memory, disk | B/s
postgres.table_toast_index_cache_io_ratio | miss | percentage
postgres.table_toast_index_io_rate | memory, disk | B/s
postgres.table_scans_rate | index, sequential | scans/s
postgres.table_scans_rows_rate | index, sequential | rows/s
postgres.table_autovacuum_since_time | time | seconds
postgres.table_vacuum_since_time | time | seconds
postgres.table_autoanalyze_since_time | time | seconds
postgres.table_analyze_since_time | time | seconds
postgres.table_null_columns | null | columns
postgres.table_size | size | B
postgres.table_bloat_size_perc | bloat | percentage
postgres.table_bloat_size | bloat | B

Per index

These metrics refer to the table index.

Labels:

Label | Description
database | database name
schema | schema name
table | table name
parent_table | parent table name
index | index name

Metrics:

Metric | Dimensions | Unit
postgres.index_size | size | B
postgres.index_bloat_size_perc | bloat | percentage
postgres.index_bloat_size | bloat | B
postgres.index_usage_status | used, unused | status

Functions

This collector exposes real-time functions for interactive troubleshooting in the Top tab.

Top Queries

Retrieves aggregated SQL query performance metrics from the PostgreSQL pg_stat_statements extension.

This function queries pg_stat_statements, which tracks execution statistics for all SQL statements. Statistics include execution counts, timing metrics, I/O operations, and resource consumption. Columns are dynamically detected based on your PostgreSQL version.

Use cases:

  • Identify slow queries consuming the most total execution time
  • Find queries with high shared block reads for I/O optimization
  • Analyze temp block usage to detect queries needing memory tuning

Query text is truncated at 4096 characters for display purposes.
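
The data returned is similar to what you would see by querying pg_stat_statements directly. As a rough sketch only (not the collector's exact query; column names assume PostgreSQL 13+, where total_exec_time and mean_exec_time replaced total_time and mean_time):

-- Top 10 query patterns by cumulative execution time
SELECT queryid,
       left(query, 100) AS query_sample,
       calls,
       total_exec_time,
       mean_exec_time,
       rows,
       shared_blks_hit,
       shared_blks_read,
       temp_blks_written
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;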

Aspect | Description
Name | Postgres:top-queries
Require Cloud | yes
Performance | Queries pg_stat_statements, which maintains statistics in shared memory. On busy servers with many unique queries, the extension may consume significant memory. The default limit of 500 rows balances usefulness with performance.
Security | Query text may contain unmasked literal values, including potentially sensitive data such as personal information in WHERE clauses or INSERT values, business data, and internal identifiers. Access should be restricted to authorized personnel only.
Availability | Available when the pg_stat_statements extension is installed in the database and the collector has successfully connected to PostgreSQL. Returns HTTP 503 if the extension is not installed (with instructions to install it), HTTP 500 if the query fails, and HTTP 504 if the query times out.

Prerequisites

Enable pg_stat_statements

The pg_stat_statements extension must be installed and configured.

  1. Add to postgresql.conf:

    shared_preload_libraries = 'pg_stat_statements'
  2. Restart PostgreSQL, then create the extension:

    CREATE EXTENSION pg_stat_statements;
  3. Verify the extension is working:

    SELECT COUNT(*) FROM pg_stat_statements;
Info:
  • pg_stat_statements requires a server restart to load the shared library
  • Statistics can be reset with SELECT pg_stat_statements_reset()
  • The pg_stat_statements.max parameter controls maximum tracked statements (default 5000)
  • Enable track_io_timing for block read/write timing metrics (may add slight overhead)
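
For example, assuming superuser access, track_io_timing can be enabled cluster-wide without a restart, and the accumulated statistics can be discarded:

-- Enable I/O timing; takes effect on configuration reload (may add slight overhead)
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();

-- Reset all statistics collected so far by pg_stat_statements
SELECT pg_stat_statements_reset();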

Parameters

Parameter | Type | Description | Required | Default
Filter By | select | Select the primary sort column. Options include total time, mean time, calls, rows, shared blocks hit/read, and temp blocks written. Defaults to total time to focus on the most resource-intensive queries. | yes | totalTime

Returns

Aggregated query statistics from pg_stat_statements. Each row represents a unique query pattern with cumulative metrics across all executions.

Column | Type | Unit | Visibility | Description
Query ID | string | - | hidden | Internal hash identifier for the normalized query. Can be used to track queries across statistics resets.
Query | string | - | - | Normalized SQL query text with literals replaced by parameter placeholders. Truncated to 4096 characters.
Database | string | - | - | Database name where the query was executed.
User | string | - | - | PostgreSQL user who executed the query.
Calls | integer | - | - | Total number of times this query pattern has been executed. High values indicate frequently run queries.
Total Time | duration | milliseconds | - | Cumulative execution time across all executions. High values indicate queries consuming significant database resources.
Mean Time | duration | milliseconds | - | Average execution time per call. Use this to compare typical performance across different query patterns.
Min Time | duration | milliseconds | hidden | Minimum execution time observed for a single execution.
Max Time | duration | milliseconds | hidden | Maximum execution time observed for a single execution. Large gaps between min and max may indicate performance variability.
Stddev Time | duration | milliseconds | hidden | Standard deviation of execution time. High values indicate inconsistent query performance.
Plans | integer | - | hidden | Number of times the query was planned. Available in PostgreSQL 13+.
Total Plan Time | duration | milliseconds | hidden | Cumulative time spent planning the query. Available in PostgreSQL 13+.
Mean Plan Time | duration | milliseconds | hidden | Average time spent planning per execution. Available in PostgreSQL 13+.
Min Plan Time | duration | milliseconds | hidden | Minimum planning time observed. Available in PostgreSQL 13+.
Max Plan Time | duration | milliseconds | hidden | Maximum planning time observed. Available in PostgreSQL 13+.
Stddev Plan Time | duration | milliseconds | hidden | Standard deviation of planning time. Available in PostgreSQL 13+.
Rows | integer | - | - | Total number of rows retrieved or affected across all executions.
Shared Blocks Hit | integer | - | - | Total shared buffer cache hits. High values indicate good cache utilization.
Shared Blocks Read | integer | - | - | Total shared blocks read from disk. High values indicate queries that bypass the cache and may benefit from more shared_buffers.
Shared Blocks Dirtied | integer | - | hidden | Total shared blocks dirtied by the query.
Shared Blocks Written | integer | - | hidden | Total shared blocks written by the query.
Local Blocks Hit | integer | - | hidden | Total local buffer cache hits (temporary tables).
Local Blocks Read | integer | - | hidden | Total local blocks read from disk.
Local Blocks Dirtied | integer | - | hidden | Total local blocks dirtied.
Local Blocks Written | integer | - | hidden | Total local blocks written.
Temp Blocks Read | integer | - | - | Total temp blocks read. Non-zero values indicate queries spilling to disk due to insufficient work_mem.
Temp Blocks Written | integer | - | - | Total temp blocks written. High values suggest increasing work_mem may improve performance.
Block Read Time | duration | milliseconds | - | Time spent reading blocks from disk. Requires track_io_timing to be enabled.
Block Write Time | duration | milliseconds | - | Time spent writing blocks to disk. Requires track_io_timing to be enabled.
WAL Records | integer | - | hidden | Total number of WAL records generated. Available in PostgreSQL 13+.
WAL Full Page Images | integer | - | hidden | Total number of WAL full page images generated. Available in PostgreSQL 13+.
WAL Bytes | integer | - | hidden | Total bytes of WAL generated. Available in PostgreSQL 13+.
JIT Functions | integer | - | hidden | Total number of functions JIT-compiled. Available in PostgreSQL 15+.
JIT Generation Time | duration | milliseconds | hidden | Time spent generating JIT code. Available in PostgreSQL 15+.
JIT Inlining Count | integer | - | hidden | Number of times JIT inlining was performed. Available in PostgreSQL 15+.
JIT Inlining Time | duration | milliseconds | hidden | Time spent on JIT inlining. Available in PostgreSQL 15+.
JIT Optimization Count | integer | - | hidden | Number of times JIT optimization was performed. Available in PostgreSQL 15+.
JIT Optimization Time | duration | milliseconds | hidden | Time spent on JIT optimization. Available in PostgreSQL 15+.
JIT Emission Count | integer | - | hidden | Number of times JIT code was emitted. Available in PostgreSQL 15+.
JIT Emission Time | duration | milliseconds | hidden | Time spent emitting JIT code. Available in PostgreSQL 15+.
Temp Block Read Time | duration | milliseconds | hidden | Time spent reading temp blocks. Available in PostgreSQL 15+. Requires track_io_timing.
Temp Block Write Time | duration | milliseconds | hidden | Time spent writing temp blocks. Available in PostgreSQL 15+. Requires track_io_timing.

Alerts

The following alerts are available:

Alert name | On metric | Description
postgres_total_connection_utilization | postgres.connections_utilization | average total connection utilization over the last minute
postgres_acquired_locks_utilization | postgres.locks_utilization | average acquired locks utilization over the last minute
postgres_txid_exhaustion_perc | postgres.txid_exhaustion_perc | percent towards TXID wraparound
postgres_db_cache_io_ratio | postgres.db_cache_io_ratio | average cache hit ratio in db ${label:database} over the last minute
postgres_db_transactions_rollback_ratio | postgres.db_transactions_ratio | average aborted transactions percentage in db ${label:database} over the last five minutes
postgres_db_deadlocks_rate | postgres.db_deadlocks_rate | number of deadlocks detected in db ${label:database} in the last minute
postgres_table_cache_io_ratio | postgres.table_cache_io_ratio | average cache hit ratio in db ${label:database} table ${label:table} over the last minute
postgres_table_index_cache_io_ratio | postgres.table_index_cache_io_ratio | average index cache hit ratio in db ${label:database} table ${label:table} over the last minute
postgres_table_toast_cache_io_ratio | postgres.table_toast_cache_io_ratio | average TOAST hit ratio in db ${label:database} table ${label:table} over the last minute
postgres_table_toast_index_cache_io_ratio | postgres.table_toast_index_cache_io_ratio | average index TOAST hit ratio in db ${label:database} table ${label:table} over the last minute
postgres_table_bloat_size_perc | postgres.table_bloat_size_perc | bloat size percentage in db ${label:database} table ${label:table}
postgres_table_last_autovacuum_time | postgres.table_autovacuum_since_time | time elapsed since db ${label:database} table ${label:table} was vacuumed by the autovacuum daemon
postgres_table_last_autoanalyze_time | postgres.table_autoanalyze_since_time | time elapsed since db ${label:database} table ${label:table} was analyzed by the autovacuum daemon
postgres_index_bloat_size_perc | postgres.index_bloat_size_perc | bloat size percentage in db ${label:database} table ${label:table} index ${label:index}

Setup

You can configure the postgres collector in two ways:

Method | Best for | How to
UI | Fast setup without editing files | Go to Nodes → Configure this node → Collectors → Jobs, search for postgres, then click + to add a job.
File | If you prefer configuring via file, or need to automate deployments (e.g., with Ansible) | Edit go.d/postgres.conf and add a job.
Important: UI configuration requires a paid Netdata Cloud plan.

Prerequisites

Create netdata user

Create a user and grant it the pg_monitor or pg_read_all_stats built-in role.

To create the netdata user with these permissions, execute the following in a psql session as a user with CREATEROLE privileges:

CREATE USER netdata;
GRANT pg_monitor TO netdata;

After creating the new user, restart the Netdata Agent with sudo systemctl restart netdata, or the appropriate method for your system.
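
If the server requires password authentication, you can instead create the user with a password and reference it in the DSN. A sketch (the password shown is a placeholder you must replace):

CREATE USER netdata WITH PASSWORD 'replace-with-a-strong-password';
GRANT pg_monitor TO netdata;

The matching DSN would then look like postgresql://netdata:replace-with-a-strong-password@127.0.0.1:5432/postgres.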

Configuration

Options

The following options can be defined globally: update_every, autodetection_retry.

Config options
Group | Option | Description | Default | Required
Collection | update_every | Data collection interval (seconds). | 1 | no
Collection | autodetection_retry | Autodetection retry interval (seconds). Set 0 to disable. | 0 | no
Target | dsn | Postgres connection string (DSN). See DSN syntax. | postgres://postgres:postgres@127.0.0.1:5432/postgres | yes
Target | timeout | Query timeout (seconds). | 2 | no
Filters | collect_databases_matching | Database selector. Controls which databases are included. Uses simple patterns. | | no
Limits | max_db_tables | Maximum number of tables per database to collect metrics for (0 = no limit). | 50 | no
Limits | max_db_indexes | Maximum number of indexes per database to collect metrics for (0 = no limit). | 250 | no
Virtual Node | vnode | Associates this data collection job with a Virtual Node. | | no
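
For example, a job that raises the query timeout and limits collection to selected databases might look like this (job name, DSN, and database patterns are illustrative):

jobs:
  - name: local
    dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'
    timeout: 5
    collect_databases_matching: 'production_* staging'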

via UI

Configure the postgres collector from the Netdata web interface:

  1. Go to Nodes.
  2. Select the node where you want the postgres data-collection job to run and click Configure this node. That node will run the data collection.
  3. The Collectors → Jobs view opens by default.
  4. In the Search box, type postgres (or scroll the list) to locate the postgres collector.
  5. Click the + next to the postgres collector to add a new job.
  6. Fill in the job fields, then click Test to verify the configuration and Submit to save.
    • Test runs the job with the provided settings and shows whether data can be collected.
    • If it fails, an error message appears with details (for example, connection refused, timeout, or command execution errors), so you can adjust and retest.

via File

The configuration file name for this integration is go.d/postgres.conf.

The file format is YAML. Generally, the structure is:

update_every: 1
autodetection_retry: 0
jobs:
  - name: some_name1
  - name: some_name2

You can edit the configuration file using the edit-config script from the Netdata config directory.

cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/postgres.conf
Examples
TCP socket

An example configuration.

jobs:
  - name: local
    dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'

Unix socket

An example configuration.

jobs:
  - name: local
    dsn: 'host=/var/run/postgresql dbname=postgres user=netdata'

Unix socket (custom port)

Connect to PostgreSQL using a Unix socket with a non-default port (5433).

jobs:
  - name: local
    dsn: 'host=/var/run/postgresql port=5433 dbname=postgres user=netdata'

Multi-instance

Note: When you define multiple jobs, their names must be unique.

Local and remote instances.

jobs:
  - name: local
    dsn: 'postgresql://netdata@127.0.0.1:5432/postgres'

  - name: remote
    dsn: 'postgresql://netdata@203.0.113.0:5432/postgres'

Troubleshooting

Debug Mode

Important: Debug mode is not supported for data collection jobs created via the UI using the Dyncfg feature.

To troubleshoot issues with the postgres collector, run the go.d.plugin with the debug option enabled. The output should give you clues as to why the collector isn't working.

  • Navigate to the plugins.d directory, usually at /usr/libexec/netdata/plugins.d/. If that's not the case on your system, open netdata.conf and look for the plugins setting under [directories].

    cd /usr/libexec/netdata/plugins.d/
  • Switch to the netdata user.

    sudo -u netdata -s
  • Run the go.d.plugin to debug the collector:

    ./go.d.plugin -d -m postgres

    To debug a specific job:

    ./go.d.plugin -d -m postgres -j jobName

Getting Logs

If you're encountering problems with the postgres collector, follow these steps to retrieve logs and identify potential issues:

  • Run the command specific to your system (systemd, non-systemd, or Docker container).
  • Examine the output for any warnings or error messages that might indicate issues. These messages should provide clues about the root cause of the problem.

System with systemd

Use the following command to view logs generated since the last Netdata service restart:

journalctl _SYSTEMD_INVOCATION_ID="$(systemctl show --value --property=InvocationID netdata)" --namespace=netdata --grep postgres

System without systemd

Locate the collector log file, typically at /var/log/netdata/collector.log, and use grep to filter for the collector's name:

grep postgres /var/log/netdata/collector.log

Note: This method shows logs from all restarts. Focus on the latest entries for troubleshooting current issues.

Docker Container

If your Netdata runs in a Docker container named "netdata" (replace if different), use this command:

docker logs netdata 2>&1 | grep postgres
