collectd - System information collection daemon
collectd is a small daemon which collects system information periodically
and provides mechanisms to store and monitor the values in a variety of
ways.
* collectd is able to collect the following data:
Apache server utilization: Number of bytes transferred, number of
requests handled and detailed scoreboard statistics
APC UPS Daemon: UPS charge, load, input/output/battery voltage, etc.
Sensors in Macs running Mac OS X / Darwin: Temperature, fan speed, etc.
Various sensors in the Aquaero 5 water cooling board made by Aquacomputer.
Statistics about Ascent, a free server for the game `World of Warcraft'.
Reads absolute barometric pressure, air pressure reduced to sea level and
temperature. Supported sensors are MPL115A2 and MPL3115 from Freescale
and BMP085 from Bosch.
Battery charge, current and voltage of ACPI and PMU based laptop batteries.
Name server and resolver statistics from the `statistics-channel'
interface of BIND 9.5, 9.6 and later.
Statistics from the Ceph distributed storage system.
CPU accounting information for process groups under Linux.
Chrony daemon statistics: Local clock drift, offset to peers, etc.
Number of nf_conntrack entries.
Number of context switches done by the operating system.
CPU utilization: Time spent in the system, user, nice, idle, and related
states.
CPU frequency (For laptops with speed step or a similar technology)
CPU sleep: Time spent in suspend (For mobile devices which enter suspend automatically)
Parse statistics from websites using regular expressions.
Retrieves JSON data via cURL and parses it according to user configuration.
Retrieves XML data via cURL and parses it according to user configuration.
Executes SQL statements on various databases and interprets the returned
values.
Mountpoint usage (Basically the values `df(1)' delivers)
Disk utilization: Sectors read/written, number of read/write actions,
average time an I/O operation took to complete.
DNS traffic: Query types, response codes, opcodes and traffic/octets
Collect DPDK interface statistics.
See docs/BUILD.dpdkstat.md for detailed build instructions.
Collect individual drbd resource statistics.
Email statistics: Count, traffic, spam scores and checks.
Amount of entropy available to the system.
Network interface card statistics.
Values gathered by a custom program or script.
File handles statistics.
Count the number of files in directories.
Linux file-system based caching framework statistics.
Receive multicast traffic from Ganglia instances.
Monitor GPS-related data through gpsd.
Hard disk temperatures using hddtempd.
Report the number of used and free hugepages. More info on
hugepages can be found here:
The intel_pmu plugin reads performance counters provided by the Linux
kernel perf interface. The plugin uses the jevents library to resolve named
events to perf events and to access the perf interface.
The intel_rdt plugin collects information provided by monitoring features
of Intel Resource Director Technology (Intel(R) RDT), like Cache Monitoring
Technology (CMT) and Memory Bandwidth Monitoring (MBM). These features
provide information about utilization of shared resources like last level
cache occupancy, local memory bandwidth usage, remote memory bandwidth
usage, instructions per clock.
Interface traffic: Number of octets, packets and errors for each
interface.
IPC counters: semaphores used, number of allocated segments in shared
memory and more.
IPMI (Intelligent Platform Management Interface) sensors information.
Iptables' counters: Number of bytes that were matched by a certain
iptables rule.
IPVS connection statistics (number of connections, octets and packets
for each service and destination).
IRQ counters: Frequency in which certain interrupts occur.
Integrates a `Java Virtual Machine' (JVM) to execute plugins in Java
See docs/BUILD.java.md for detailed build instructions.
System load average over the last 1, 5 and 15 minutes.
Detailed CPU statistics of the “Logical Partitions” virtualization
technique built into IBM's POWER processors.
The Lua plugin embeds a Lua interpreter in collectd. This
makes it possible to write plugins in Lua which are executed by
collectd without the need to start a heavy interpreter every interval.
See collectd-lua(5) for details.
Size of “Logical Volumes” (LV) and “Volume Groups” (VG) of Linux'
“Logical Volume Manager” (LVM).
Queries very detailed usage statistics from wireless LAN adapters and
interfaces that use the Atheros chipset and the MadWifi driver.
Motherboard sensors: temperature, fan speed and voltage information.
Monitor machine check exceptions (hardware errors detected by hardware
and reported to software) reported by mcelog and generate appropriate
notifications when machine check exceptions are detected.
Linux software-RAID device information (number of active, failed, spare
and missing disks).
Query and parse data from a memcache daemon (memcached).
Statistics of the memcached distributed caching system.
Memory utilization: Memory occupied by running processes, page cache,
buffer cache and free.
Collects CPU usage, memory usage, temperatures and power consumption from
Intel Many Integrated Core (MIC) CPUs.
Reads values from Modbus/TCP enabled devices. Supports reading values
from multiple "slaves" so gateway devices can be used.
Information provided by serial multimeters, such as the `Metex
MySQL server statistics: Commands issued, handlers triggered, thread
usage, query cache utilization and traffic/octets sent and received.
Plugin to query performance values from a NetApp storage system using the
“Manage ONTAP” SDK provided by NetApp.
Very detailed Linux network interface and routing statistics. You can get
(detailed) information on interfaces, qdiscs, classes, and, if you can
make use of it, filters.
Receive values that were collected by other hosts. Large setups will
want to collect the data on one dedicated machine, and this is the
plugin of choice for that.
NFS procedures: Which NFS commands were called how often.
Collects statistics from `nginx' (speak: engine X), an HTTP and mail
server/proxy.
NTP daemon statistics: Local clock drift, offset to peers, etc.
Information about Non-Uniform Memory Access (NUMA).
Network UPS tools: UPS current, voltage, power, charge, utilisation,
temperature, etc. See upsd(8).
Queries routing information from the “Optimized Link State Routing”
daemon (olsrd).
- onewire (EXPERIMENTAL!)
Read onewire sensors using the owcapi library of the owfs project.
Please read in collectd.conf(5) why this plugin is experimental.
Read monitoring information from OpenLDAP's cn=Monitor subtree.
RX and TX of each client in openvpn-status.log (status-version 2).
Query data from an Oracle database.
The plugin monitors the link status of Open vSwitch (OVS) connected
interfaces, dispatches the values to collectd and sends a notification
whenever a link state change occurs in the OVS database. It requires the
YAJL library to be installed.
For detailed instructions on installing and setting up Open vSwitch, see
the Open vSwitch documentation.
The plugin collects the statistics of OVS connected bridges and
interfaces. It requires YAJL library to be installed.
For detailed instructions on installing and setting up Open vSwitch, see
the Open vSwitch documentation.
The perl plugin embeds a Perl interpreter in collectd. You can
write your own plugins in Perl and return arbitrary values using this
API. See collectd-perl(5).
Query statistics from BSD's packet filter "pf".
Receive and dispatch timing values from Pinba, a profiling extension for
PHP.
Network latency: Time to reach the default gateway or another given
host.
PostgreSQL database statistics: active server connections, transaction
numbers, block IO, table row manipulations.
PowerDNS name server statistics.
Process counts: Number of running, sleeping, zombie, ... processes.
Counts various aspects of network protocols such as IP, TCP, UDP, etc.
The python plugin embeds a Python interpreter in collectd. This
makes it possible to write plugins in Python which are executed by
collectd without the need to start a heavy interpreter every interval.
See collectd-python(5) for details.
The redis plugin gathers information from a Redis server, including:
uptime, used memory, total connections etc.
Query interface and wireless registration statistics from RouterOS.
RRDtool caching daemon (RRDcacheD) statistics.
System sensors, accessed using lm_sensors: Voltages, temperatures and
fan rotation speeds.
RX and TX of serial interfaces. Linux only; needs root privileges.
Uses libsigrok as a backend, allowing any sigrok-supported device
to have its measurements fed to collectd. This includes multimeters,
sound level meters, thermometers, and much more.
Collect SMART statistics, notably load cycle count, temperature
and bad sectors.
Read values from SNMP (Simple Network Management Protocol) enabled
network devices such as switches, routers, thermometers, rack monitoring
servers, etc. See collectd-snmp(5).
Acts as a StatsD server, reading values sent over the network from StatsD
clients and calculating rates and other aggregates out of these values.
Pages swapped out onto hard disk or whatever is called `swap' by the OS.
Parse table-like structured files.
Follows (tails) log files, parses them by lines and submits matched
values.
Follows (tails) files in CSV format, parses each line and submits the
values.
Bytes and operations read and written on tape devices. Solaris only.
Number of TCP connections to specific local and remote ports.
TeamSpeak2 server statistics.
Plugin to read values from `The Energy Detective' (TED).
Linux ACPI thermal zone information.
Reads the number of records and file size from a running Tokyo Tyrant
server.
Reads CPU frequency and C-state residency on modern Intel processors.
System uptime statistics.
Users currently logged in.
Various statistics from Varnish, an HTTP accelerator.
CPU, memory, disk and network I/O statistics from virtual machines.
Virtual memory statistics, e.g. the number of page-ins/-outs or the
number of pagefaults.
System resources used by Linux VServers.
Link quality of wireless cards. Linux only.
XEN Hypervisor CPU stats.
Bitrate and frequency of music played with XMMS.
Statistics for ZFS' “Adaptive Replacement Cache” (ARC).
Measures the percentage of cpu load per container (zone) under Solaris 10
Read data from Zookeeper's MNTR command.
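The plugin that gathers values from a custom program or script (the exec
plugin) expects the child process to print PUTVAL lines on standard
output. A minimal sketch in Python; the identifier
`<host>/exec-example/gauge-custom' and the value are illustrative, see
collectd-exec(5) for the authoritative protocol description:

```python
import os

def putval_line(host, interval, value):
    # Format one value in the PUTVAL protocol understood by the exec
    # plugin. "N" tells the daemon to substitute the current time.
    return 'PUTVAL "%s/exec-example/gauge-custom" interval=%d N:%f' % (
        host, interval, value)

if __name__ == "__main__":
    # The exec plugin exports these environment variables to its children.
    host = os.environ.get("COLLECTD_HOSTNAME", "localhost")
    interval = int(float(os.environ.get("COLLECTD_INTERVAL", "10")))
    print(putval_line(host, interval, 42.0), flush=True)
```

A real exec script would loop, sleeping for `interval' seconds between
PUTVAL lines; a single line is enough to show the format.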
* Output can be written or sent to various destinations by the following
plugins:
Sends JSON-encoded data to an Advanced Message Queuing Protocol (AMQP)
server, such as RabbitMQ.
Write to comma separated values (CSV) files. This needs lots of
disk space but is extremely portable and can be analysed with almost
every program that can analyse anything. Even Microsoft's Excel.
Send and receive values over the network using the gRPC framework.
It's possible to implement write plugins in Lua using the Lua
plugin. See collectd-lua(5) for details.
Publishes and subscribes to MQTT topics.
Send the data to a remote host to save the data somehow. This is useful
for large setups where the data should be saved by a dedicated machine.
Of course the values are propagated to plugins written in Perl, too, so
you can easily do weird stuff with the plugins we didn't dare think of
;) See collectd-perl(5).
It's possible to implement write plugins in Python using the python
plugin. See collectd-python(5) for details.
Output to round-robin-database (RRD) files using the RRDtool caching
daemon (RRDcacheD) - see rrdcached(1). That daemon provides a general
implementation of the caching done by the `rrdtool' plugin.
Output to round-robin-database (RRD) files using librrd. See rrdtool(1).
This is likely the most popular destination for such values. Since
updates to RRD-files are somewhat expensive this plugin can cache
updates to the files and write a bunch of updates at once, which lessens
system load a lot.
Receives and handles queries from an SNMP master agent and returns the data
collected by read plugins. Handles requests only for OIDs specified in the
configuration file. To handle SNMP queries the plugin gets data from
collectd and translates requested values from collectd's internal format
to SNMP format.
One can query the values from the unixsock plugin whenever they're
needed. Please read collectd-unixsock(5) for a description of how that's
done.
Sends data to Carbon, the storage layer of Graphite, using TCP or UDP. It
can be configured to avoid logging send errors (especially useful when
using UDP).
Sends the values collected by collectd to a web-server using HTTP POST
requests. The transmitted data is either in a form understood by the
Exec plugin or formatted in JSON.
Sends data to Apache Kafka, a distributed queue.
Writes data to the log.
Sends data to MongoDB, a NoSQL database.
Publish values using an embedded HTTP server, in a format compatible
with Prometheus' collectd_exporter.
Sends the values to a Redis key-value database server.
Sends data to Riemann, a stream processing and monitoring system.
Sends data to Sensu, a stream processing and monitoring system, via the
Sensu client local TCP socket.
Sends data to OpenTSDB, a scalable, master-less, shared-nothing time series
database.
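The unixsock plugin mentioned above speaks a simple line-based protocol:
a command such as GETVAL is answered with a status line (`<num> <message>',
where a non-negative <num> is the number of data lines that follow) and
then that many `name=value' lines. A sketch of a response parser in
Python; the sample response text is illustrative, see collectd-unixsock(5)
for the authoritative description:

```python
def parse_getval(response):
    # Parse a GETVAL response into a dict of data-source names to floats.
    # The first line is "<status> <message>"; a non-negative status is the
    # number of "name=value" lines that follow, a negative one an error.
    lines = response.splitlines()
    status, _, message = lines[0].partition(" ")
    count = int(status)
    if count < 0:
        raise RuntimeError("unixsock error: %s" % message)
    result = {}
    for line in lines[1:1 + count]:
        name, _, value = line.partition("=")
        result[name] = float(value)
    return result

# Illustrative response for `GETVAL "myhost/load/load"':
sample = "3 Values found\nshortterm=0.25\nmidterm=0.30\nlongterm=0.35\n"
```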
* Logging is, as everything in collectd, provided by plugins. The following
plugins keep us informed about what's going on:
Writes log messages to a file or STDOUT/STDERR.
Log messages are propagated to plugins written in Perl as well.
It's possible to implement log plugins in Python using the python plugin.
See collectd-python(5) for details.
Logs to the standard UNIX logging mechanism, syslog.
Writes log messages formatted as logstash JSON events.
* Notifications can be handled by the following plugins:
Send a desktop notification to a notification daemon, as defined in
the Desktop Notification Specification. To actually display the
notifications, notification-daemon is required.
Send an E-mail with the notification message to the configured
recipients.
Submit notifications as passive check results to a local Nagios instance.
Execute a program or script to handle the notification.
Writes the notification message to a file or STDOUT/STDERR.
Send the notification to a remote host to handle it somehow.
Notifications are propagated to plugins written in Perl as well.
It's possible to implement notification plugins in Python using the
python plugin. See collectd-python(5) for details.
* Value processing can be controlled using the "filter chain" infrastructure
and "matches" and "targets". The following plugins are available:
Match counter values which are currently zero.
Match values using a hash function of the hostname.
Match values by their identifier based on regular expressions.
Match values with an invalid timestamp.
Select values by their data sources' values.
Create and dispatch a notification.
Replace parts of an identifier using regular expressions.
Scale (multiply) values by an arbitrary value.
Set (overwrite) entire parts of an identifier.
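The matches and targets above are combined into rules inside a chain in
collectd.conf. A sketch, assuming the `match_regex' and `target_scale'
plugins; the chain and rule names are arbitrary, see collectd.conf(5) for
the authoritative syntax:

```
LoadPlugin match_regex
LoadPlugin target_scale

PostCacheChain "PostCache"
<Chain "PostCache">
  # Scale values from the cpu plugin by 0.01 before they are written.
  <Rule "scale_cpu">
    <Match "regex">
      Plugin "^cpu$"
    </Match>
    <Target "scale">
      Factor 0.01
    </Target>
  </Rule>
  # Everything else is passed to the write plugins unchanged.
  Target "write"
</Chain>
```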
* Miscellaneous plugins:
Selects multiple value lists based on patterns or regular expressions
and creates new aggregated values lists from those.
Checks values against configured thresholds and creates notifications if
values are out of bounds. See collectd-threshold(5) for details.
Sets the hostname to a unique identifier. This is meant for setups
where each client may migrate to another physical host, possibly going
through one or more name changes in the process.
* Performance: Since collectd is running as a daemon it doesn't spend much
time starting up again and again. With the exception of the exec plugin, no
processes are forked. Caching in output plugins, such as the rrdtool and
network plugins, makes sure your resources are used efficiently. Also,
since collectd is multithreaded, it benefits from hyper-threading
and multicore processors and makes sure that the daemon isn't idle if only
one plugin waits for an I/O operation to complete.
* Once set up, hardly any maintenance is necessary. Setup is kept as easy
as possible and the default values should be okay for most users.
* collectd's configuration file can be found at `sysconfdir'/collectd.conf.
Run `collectd -h' for a list of built-in defaults. See `collectd.conf(5)'
for a list of options and a syntax description.
* When the `csv' or `rrdtool' plugins are loaded they'll write the values to
files. The usual place for these files is beneath `/var/lib/collectd'.
* When using some of the plugins, collectd needs to run as user root, since
only root can do certain things, such as craft ICMP packets needed to ping
other hosts. collectd should NOT be installed setuid root since it can be
used to overwrite valuable files!
* Sample scripts to generate graphs reside in `contrib/' in the source
package or somewhere near `/usr/share/doc/collectd' in most distributions.
Please be aware that those scripts are meant as a starting point for your
own experiments. Some of them require the `RRDs' Perl module
(`librrds-perl' on Debian). If you have written a more sophisticated
solution please share it with us.
* The RRAs of the automatically created RRD files depend on the `step'
and `heartbeat' settings given. If you change these settings you may need to
re-create the files, losing all data. Please be aware of that when changing
the values and read the rrdtool(1) manpage thoroughly.
collectd and chkrootkit
If you are using the `dns' plugin chkrootkit(1) will report collectd as a
packet sniffer (": PACKET SNIFFER(/usr/sbin/collectd)"). The
plugin captures all UDP packets on port 53 to analyze the DNS traffic. In
this case, collectd is a legitimate sniffer and the report should be
considered to be a false positive. However, you might want to check that
this really is collectd and not some other, illegitimate sniffer.
To compile collectd from source you will need:
* Usual suspects: C compiler, linker, preprocessor, make, ...
collectd makes use of some common C99 features, e.g. compound literals and
mixed declarations, and therefore requires a C99 compatible compiler.
On Debian and Ubuntu, the "build-essential" package should pull in
everything that's necessary.
* A POSIX-threads (pthread) implementation.
Since gathering some statistics is slow (network connections, slow devices,
etc.), collectd is parallelized. The POSIX threads interface is being
used and should be found in various implementations for hopefully all
platforms.
* When building from the Git repository, flex (tokenizer) and bison (parser
generator) are required. Release tarballs include the generated files – you
don't need these packages in that case.
* aerotools-ng (optional)
Used by the `aquaero' plugin. Currently, the `libaquaero5' library, which
is used by the `aerotools-ng' toolkit, is not compiled as a shared object
nor does it feature an installation routine. Therefore, you need to point
collectd's configure script at the source directory of the `aerotools-ng'
toolkit.
* CoreFoundation.framework and IOKit.framework (optional)
For compiling on Darwin in general and the `apple_sensors' plugin in
particular.
* libatasmart (optional)
Used by the `smart' plugin.
* libcap (optional)
The `turbostat' plugin can optionally build Linux Capabilities support,
which avoids the full-privileges requirement (i.e. running as root) to read
the counters.
* libclntsh (optional)
Used by the `oracle' plugin.
* libhiredis (optional)
Used by the redis plugin. Please note that you require version 0.10.0 or
higher.
* libcurl (optional)
If you want to use the `apache', `ascent', `bind', `curl', `curl_json',
`curl_xml', `nginx', or `write_http' plugin.
* libdbi (optional)
Used by the `dbi' plugin to connect to various databases.
* libesmtp (optional)
For the `notify_email' plugin.
* libganglia (optional)
Used by the `gmond' plugin to process data received from Ganglia.
* libgrpc (optional)
Used by the `grpc' plugin. gRPC requires a C++ compiler supporting the
C++11 standard.
* libgcrypt (optional)
Used by the `network' plugin for encryption and authentication.
* libgps (optional)
Used by the `gps' plugin.
* libi2c-dev (optional)
Used for the plugin `barometer', provides just the i2c-dev.h header file
for user space i2c development.
* libiptc (optional)
For querying iptables counters.
* libjevents (optional)
The jevents library is used by the `intel_pmu' plugin to access the Linux
kernel perf interface.
Note: the library should be built with the -fPIC flag to be linked with the
intel_pmu shared object correctly.
* libjvm (optional)
Library that encapsulates the `Java Virtual Machine' (JVM). This library is
used by the `java' plugin to execute Java bytecode.
See docs/BUILD.java.md for detailed build instructions.
* libldap (optional)
Used by the `openldap' plugin.
* liblua (optional)
Used by the `lua' plugin. Currently, Lua 5.1 and later are supported.
* liblvm2 (optional)
Used by the `lvm' plugin.
* libmemcached (optional)
Used by the `memcachec' plugin to connect to a memcache daemon.
* libmicrohttpd (optional)
Used by the write_prometheus plugin to run an http daemon.
* libmnl (optional)
Used by the `netlink' plugin.
* libmodbus (optional)
Used by the `modbus' plugin to communicate with Modbus/TCP devices. The
`modbus' plugin works with version 2.0.3 of the library – due to frequent
API changes other versions may or may not compile cleanly.
* libmysqlclient (optional)
Unsurprisingly used by the `mysql' plugin.
* libnetapp (optional)
Required for the `netapp' plugin.
This library is part of the “Manage ONTAP SDK” published by NetApp.
* libnetsnmp (optional)
For the `snmp' and `snmp_agent' plugins.
* libnetsnmpagent (optional)
Required for the `snmp_agent' plugin.
* libnotify (optional)
For the `notify_desktop' plugin.
* libopenipmi (optional)
Used by the `ipmi' plugin to probe IPMI devices.
* liboping (optional)
Used by the `ping' plugin to send and receive ICMP packets.
* libowcapi (optional)
Used by the `onewire' plugin to read values from onewire sensors (or the
`owserver' daemon).
* libpcap (optional)
Used to capture packets by the `dns' plugin.
* libperfstat (optional)
Used by various plugins to gather statistics under AIX.
* libperl (optional)
Obviously used by the `perl' plugin. The library has to be compiled with
ithread support (introduced in Perl 5.6.0).
* libpq (optional)
The PostgreSQL C client library used by the `postgresql' plugin.
* libpqos (optional)
The PQoS library for Intel(R) Resource Director Technology used by the
`intel_rdt' plugin.
* libprotobuf, protoc 3.0+ (optional)
Used by the `grpc' plugin to generate service stubs and code to handle
network packets of collectd's protobuf-based network protocol.
* libprotobuf-c, protoc-c (optional)
Used by the `pinba' plugin to generate a parser for the network packets
sent by the Pinba PHP extension.
* libpython (optional)
Used by the `python' plugin. Currently, Python 2.6 and later and Python 3
are supported.
* librabbitmq (optional; also called “rabbitmq-c”)
Used by the `amqp' plugin for AMQP connections, for example to RabbitMQ.
* librdkafka (optional; also called “rdkafka”)
Used by the `write_kafka' plugin for producing messages and sending them
to a Kafka broker.
* librouteros (optional)
Used by the `routeros' plugin to connect to a device running `RouterOS'.
* librrd (optional)
Used by the `rrdtool' and `rrdcached' plugins. The latter requires RRDtool
client support which was added after version 1.3 of RRDtool. Versions 1.0,
1.2 and 1.3 are known to work with the `rrdtool' plugin.
* librt, libsocket, libkstat, libdevinfo (optional)
Various standard Solaris libraries which provide system functions.
* libsensors (optional)
To read from `lm_sensors', see the `sensors' plugin.
* libsigrok (optional)
Used by the `sigrok' plugin. In addition, libsigrok depends on glib,
libzip, and optionally (depending on which drivers are enabled) on
libusb, libftdi and libudev.
* libstatgrab (optional)
Used by various plugins to collect statistics on systems other than Linux
* libtokyotyrant (optional)
Used by the `tokyotyrant' plugin.
* libupsclient/nut (optional)
For the `nut' plugin which queries nut's `upsd'.
* libvirt (optional)
Collect statistics from virtual machines.
* libxml2 (optional)
Parse XML data. This is needed for the `ascent', `bind', `curl_xml' and
`virt' plugins.
* libxen (optional)
Used by the `xencpu' plugin.
* libxmms (optional)
For the `xmms' plugin.
* libyajl (optional)
Parse JSON data. This is needed for the `ceph', `curl_json', `ovs_events',
`ovs_stats' and `log_logstash' plugins.
* libvarnish (optional)
Fetches statistics from a Varnish instance. This is needed for the
`varnish' plugin.
* riemann-c-client (optional)
For the `write_riemann' plugin.
Configuring / Compiling / Installing
To configure, build and install collectd with the default settings, run
`./configure && make && make install'. For detailed, generic instructions
see INSTALL. For a complete list of configure options and their description,
run `./configure --help'.
By default, the configure script will check for all build dependencies and
disable all plugins whose requirements cannot be fulfilled (any other plugin
will be enabled). To enable a plugin, install missing dependencies (see
section `Prerequisites' above) and rerun `configure'. If you specify the
`--enable-<plugin>' configure option, the script will fail if the
dependencies for the specified plugin are not met. In that case you can
force the plugin to be built using the `--enable-<plugin>=force' configure
option. This will most likely fail though unless you're working in a very
unusual setup and you really know what you're doing. If you specify the
`--disable-<plugin>' configure option, the plugin will not be built. If you
specify the `--enable-all-plugins' or `--disable-all-plugins' configure
options, all plugins will be enabled or disabled respectively by default.
Explicitly enabling or disabling a plugin overwrites the default for the
specified plugin. These options are meant for package maintainers and should
not be used in everyday situations.
By default, collectd will be installed into `/opt/collectd'. You can adjust
this setting by specifying the `--prefix' configure option - see INSTALL for
details. If you pass DESTDIR=<dir> to `make install', <dir> will be
prefixed to all installation directories. This might be useful when creating
packages for collectd.
Generating the configure script
Collectd ships with a `build.sh' script to generate the `configure'
script shipped with releases.
To generate the `configure' script, you'll need the following dependencies:
The `build.sh' script takes no arguments.
To compile correctly collectd needs to be able to initialize static
variables to NAN (Not A Number). Some C libraries, especially the GNU
libc, have a problem with that.
Luckily, with GCC it's possible to work around that problem: One can define
NAN as being (0.0 / 0.0) and `isnan' as `f != f'. However, to test this
``implementation'' the configure script needs to compile and run a short
test program. Obviously running a test program when doing a cross-
compilation is, well, challenging.
If you run into this problem, you can use the `--with-nan-emulation'
configure option to force the use of this implementation. We can't promise
that the compiled binary actually behaves as it should, but since NANs
are likely never passed to the libm you have a good chance to be lucky.
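The `f != f' trick works because a NaN is defined to compare unequal to
everything, including itself. A quick illustration in Python of the same
property the configure test checks for in C:

```python
# Arithmetic that yields NaN; Python raises on a literal 0.0 / 0.0,
# so use inf - inf instead of the (0.0 / 0.0) definition used in C.
nan = float("inf") - float("inf")

# A NaN never compares equal to itself, so "f != f" works as isnan.
assert nan != nan
assert not (nan == nan)
```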
Likewise, collectd needs to know the layout of doubles in memory, in order
to craft uniform network packets over different architectures. For this, it
needs to know how to convert doubles into the memory layout used by x86. The
configure script tries to figure this out by compiling and running a few
small test programs. This is of course not possible when cross-compiling.
You can use the `--with-fp-layout' option to tell the configure script which
conversion method to assume. Valid arguments are:
* `nothing' (12345678 -> 12345678)
* `endianflip' (12345678 -> 87654321)
* `intswap' (12345678 -> 56781234)
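Interpreting the digits 1-8 as the eight bytes of a double, the three
conversion methods map bytes as follows. A sketch in Python:

```python
def nothing(b):
    # Identity: 12345678 -> 12345678
    return b

def endianflip(b):
    # Full byte reversal: 12345678 -> 87654321
    return b[::-1]

def intswap(b):
    # Swap the two 32-bit halves: 12345678 -> 56781234
    return b[4:] + b[:4]

d = bytes([1, 2, 3, 4, 5, 6, 7, 8])
assert nothing(d) == d
assert endianflip(d) == bytes([8, 7, 6, 5, 4, 3, 2, 1])
assert intswap(d) == bytes([5, 6, 7, 8, 1, 2, 3, 4])
```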
Please use GitHub to report bugs and submit pull requests:
See CONTRIBUTING.md for details.
For questions, development information and basically all other concerns please
send an email to collectd's mailing list at
For live discussion and more personal contact visit us in IRC, we're in
channel #collectd on freenode.
Florian octo Forster ,
Sebastian tokkee Harl ,
and many contributors (see `AUTHORS').