
The Netdata URL schema exposes every metric Netdata measures as a JSON-exportable REST URL.


Querying Netdata API with Python

Obtaining the data that Netdata collects is then a simple matter of making a URL request and parsing the result as JSON. This is a breeze with the Python 3 requests library:

import requests, json
my_url = 'http://10.6.0.1:19999/api/v1/allmetrics?format=json&help=yes'
r = requests.get(url=my_url)

# dump resulting json
with open('output.json','w') as f:
    json.dump( r.json(), f, indent=4 )

# print resulting json
print(r.json())

This displays a huge dictionary full of key-value pairs: all the quantities Netdata is monitoring.

At this point, the data can be inserted into a database, or it can be parsed to extract particular quantities of interest. Each key has a timestamp associated with it, in Unix epoch format (e.g., 1518321718).

$ head -n30 output.json
{
    "ipv4.tcpofo": {
        "name": "ipv4.tcpofo",
        "context": "ipv4.tcpofo",
        "units": "packets/s",
        "last_updated": 1518321718,
        "dimensions": {
            "TCPOFOQueue": {
                "name": "inqueue",
                "value": 0.0
            },
            "TCPOFODrop": {
                "name": "dropped",
                "value": 0.0
            },
            "TCPOFOMerge": {
                "name": "merged",
                "value": 0.0
            },
            "OfoPruned": {
                "name": "pruned",
                "value": 0.0
            }
        }
    },
    "cgroup_happy_mongo.merged_ops": {
        "name": "cgroup_happy_mongo.merged_ops",
        "context": "cgroup.merged_ops",
        "units": "operations/s",
        "last_updated": 1518321718,

Parsing Netdata Output

Example Netdata Output

The result of a single API call to Netdata is an extremely large JSON object full of measurements. In this example there are 248 keys, each containing a dictionary of its own. Here are the key names:

In [1]: import requests, json

In [2]: my_url = 'http://10.6.0.1:19999/api/v1/allmetrics?format=json&help=yes'
   ...:

In [3]: r = requests.get(url=my_url)

In [4]: with open('output.json','w') as f:
   ...:         json.dump( r.json(), f, indent=4 )
   ...:

In [5]: d = r.json()

In [6]: print(len(d.keys()))
248

In [7]: print(d.keys())
dict_keys(['ipv4.tcpofo', 'cgroup_happy_mongo.merged_ops', 'cgroup_mex.merged_ops', 'cgroup_mex.throttle_serviced_ops', 'cgroup_mex.throttle_io', 'cgroup_mex.net_packets_eth0', 'cgroup_mex.serviced_ops', 'cgroup_mex.net_eth0', 'cgroup_mex.io', 'cgroup_mex.mem_usage', 'cgroup_mex.pgfaults', 'cgroup_mex.mem_activity', 'cgroup_mex.writeback', 'cgroup_mex.mem', 'cgroup_mex.cpu_per_core', 'cgroup_mex.cpu', 'cgroup_happy_mongo.queued_ops', 'cgroup_happy_mongo.throttle_serviced_ops', 'cgroup_happy_mongo.throttle_io', 'cgroup_happy_mongo.serviced_ops', 'cgroup_happy_mongo.io', 'cgroup_happy_mongo.mem_usage', 'cgroup_happy_mongo.pgfaults', 'cgroup_happy_mongo.mem_activity', 'cgroup_happy_mongo.net_packets_eth0', 'cgroup_happy_mongo.writeback', 'cgroup_happy_mongo.net_eth0', 'cgroup_happy_mongo.mem', 'cgroup_happy_mongo.cpu_per_core', 'cgroup_happy_mongo.cpu', 'ipv4.sockstat_tcp_mem', 'net_packets.docker0', 'net.docker0', 'sensors.coretemp-isa-0000_temperature', 'cpu.cpu1_cpuidle', 'cpu.cpu0_cpuidle', 'cpu.cpufreq', 'netdata.runtime_sensors', 'netdata.runtime_cpuidle', 'netdata.runtime_cpufreq', 'disk_svctm.dm-1', 'disk_avgsz.dm-1', 'disk_await.dm-1', 'disk_svctm.dm-0', 'disk_avgsz.dm-0', 'disk_await.dm-0', 'disk_svctm.sda', 'disk_avgsz.sda', 'disk_await.sda', 'groups.pipes', 'groups.sockets', 'groups.files', 'netdata.compression_ratio', 'netdata.response_time', 'groups.lwrites', 'groups.lreads', 'netdata.net', 'groups.pwrites', 'netdata.requests', 'netdata.clients', 'netdata.server_cpu', 'netdata.plugin_proc_cpu', 'groups.preads', 'groups.minor_faults', 'netdata.plugin_proc_modules', 'system.ipc_semaphore_arrays', 'system.ipc_semaphores', 'groups.major_faults', 'system.io', 'disk_iotime.dm-1', 'groups.cpu_system', 'disk_util.dm-1', 'groups.cpu_user', 'disk_backlog.dm-1', 'disk_ops.dm-1', 'disk.dm-1', 'disk_iotime.dm-0', 'groups.processes', 'disk_util.dm-0', 'disk_backlog.dm-0', 'groups.threads', 'disk_qops.dm-0', 'disk_ops.dm-0', 'disk.dm-0', 'disk_iotime.sda', 'groups.vmem', 'disk_mops.sda', 'disk_util.sda', 'groups.mem', 'disk_backlog.sda', 'groups.cpu', 'disk_qops.sda', 'disk_ops.sda', 'disk.sda', 'netfilter.conntrack_sockets', 'cpu.cpu1_softnet_stat', 'users.pipes', 'cpu.cpu0_softnet_stat', 'system.softnet_stat', 'users.sockets', 'ipv6.ect', 'users.files', 'ipv6.icmptypes', 'ipv6.icmpmldv2', 'ipv6.icmpneighbor', 'ipv6.icmprouter', 'users.lwrites', 'users.lreads', 'ipv6.icmperrors', 'ipv6.icmp', 'users.pwrites', 'ipv6.mcastpkts', 'ipv6.mcast', 'users.preads', 'users.minor_faults', 'users.major_faults', 'ipv6.udperrors', 'ipv6.udppackets', 'users.cpu_system', 'ipv6.packets', 'system.ipv6', 'users.cpu_user', 'ipv4.udplite_errors', 'ipv4.udplite', 'users.processes', 'users.threads', 'ipv4.udperrors', 'users.vmem', 'ipv4.udppackets', 'ipv4.tcphandshake', 'users.mem', 'ipv4.tcpopens', 'users.cpu', 'ipv4.tcperrors', 'ipv4.tcppackets', 'ipv4.tcpsock', 'apps.pipes', 'apps.sockets', 'ipv4.icmpmsg', 'ipv4.icmp_errors', 'ipv4.icmp', 'ipv4.errors', 'ipv4.fragsin', 'ipv4.fragsout', 'apps.files', 'ipv4.packets', 'ipv4.ecnpkts', 'ipv4.bcastpkts', 'ipv4.mcastpkts', 'apps.lwrites', 'ipv4.bcast', 'ipv4.mcast', 'system.ipv4', 'ipv6.sockstat6_raw_sockets', 'ipv6.sockstat6_udp_sockets', 'ipv6.sockstat6_tcp_sockets', 'ipv4.sockstat_udp_mem', 'ipv4.sockstat_udp_sockets', 'ipv4.sockstat_tcp_sockets', 'apps.lreads', 'ipv4.sockstat_sockets', 'system.net', 'net_packets.wlx7cdd906c3ef0', 'net.wlx7cdd906c3ef0', 'net_packets.master', 'net.master', 'mem.slab', 'mem.kernel', 'apps.pwrites', 'mem.writeback', 'mem.committed', 
'system.swap', 'apps.preads', 'mem.available', 'system.ram', 'mem.pgfaults', 'system.pgpgio', 'apps.minor_faults', 'cpu.cpu1_softirqs', 'cpu.cpu0_softirqs', 'apps.major_faults', 'system.softirqs', 'apps.cpu_system', 'cpu.cpu1_interrupts', 'apps.cpu_user', 'apps.processes', 'cpu.cpu0_interrupts', 'apps.threads', 'services.merged_io_ops_write', 'services.merged_io_ops_read', 'services.queued_io_ops_write', 'services.queued_io_ops_read', 'services.throttle_io_ops_write', 'services.throttle_io_ops_read', 'services.throttle_io_write', 'services.throttle_io_read', 'services.io_ops_write', 'services.io_ops_read', 'services.io_write', 'services.io_read', 'services.mem_usage', 'services.cpu', 'netdata.plugin_diskspace_dt', 'netdata.plugin_diskspace', 'system.interrupts', 'apps.vmem', 'system.entropy', 'disk_inodes._boot', 'system.active_processes', 'disk_space._boot', 'system.load', 'disk_inodes._run_lock', 'system.uptime', 'disk_space._run_lock', 'apps.mem', 'cpu.core_throttling', 'disk_inodes._dev_shm', 'system.processes', 'system.forks', 'system.ctxt', 'disk_space._dev_shm', 'system.intr', 'netdata.private_charts', 'disk_inodes._', 'apps.cpu', 'netdata.tcp_connected', 'disk_space._', 'netdata.apps_children_fix', 'cpu.cpu1', 'netdata.tcp_connects', 'disk_inodes._run', 'netdata.plugin_tc_time', 'netdata.apps_fix', 'netdata.statsd_packets', 'disk_space._run', 'cpu.cpu0', 'netdata.plugin_tc_cpu', 'netdata.statsd_bytes', 'netdata.apps_sizes', 'disk_inodes._dev', 'netdata.statsd_reads', 'netdata.apps_cpu', 'disk_space._dev', 'system.cpu', 'netdata.plugin_cgroups_cpu', 'netdata.statsd_events', 'netdata.statsd_metrics', 'system.idlejitter'])
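
To get a rough sense of how these keys are organized, a short sketch like the following (continuing from the IPython session above, where d holds the parsed JSON) tallies the keys by prefix, the portion before the first dot:

from collections import Counter

# count how many of the 248 keys fall under each prefix
prefixes = Counter(key.split('.')[0] for key in d.keys())
for prefix, count in prefixes.most_common():
    print(prefix, count)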

Netdata Storage Schema

There are a couple of approaches to storing Netdata data in a database.

The first option is to create a single monolithic Netdata table, with 248 columns corresponding to the 248 measurement groups, and each cell of the table containing a large nested dictionary:

In [8]: d['cpu.cpu1']
Out[8]:
{'context': 'cpu.cpu',
 'dimensions': {'guest': {'name': 'guest', 'value': 0.0},
  'guest_nice': {'name': 'guest_nice', 'value': 0.0},
  'idle': {'name': 'idle', 'value': 98.989899},
  'iowait': {'name': 'iowait', 'value': 0.0},
  'irq': {'name': 'irq', 'value': 0.0},
  'nice': {'name': 'nice', 'value': 0.0},
  'softirq': {'name': 'softirq', 'value': 0.0},
  'steal': {'name': 'steal', 'value': 0.0},
  'system': {'name': 'system', 'value': 1.010101},
  'user': {'name': 'user', 'value': 0.0}},
 'last_updated': 1518323931,
 'name': 'cpu.cpu1',
 'units': 'percentage'}

Another approach is to create a table for each key (resulting in 248 tables). In this case, each key in the sub-dictionary would become a column in the table.

The approach we used is a combination of the two.

Notice that the "dimensions" key always maps to a sub-dictionary containing the actual numerical values. We can throw out everything except this dimensions dictionary and extract each value in it to create a time series (a short sketch of this flattening follows the list below).

For example, in the dictionary above we have several time series that would result:

  • cpu.cpu1.guest
  • cpu.cpu1.guest_nice
  • cpu.cpu1.idle
  • cpu.cpu1.iowait
  • cpu.cpu1.irq
  • etc.
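
Here is a minimal sketch of that flattening step, continuing from the session above (d is the parsed JSON):

# flatten one measurement group into "<group>.<dimension>" series values
group = d['cpu.cpu1']
series = {}
for dim in group['dimensions'].values():
    series[group['name'] + '.' + dim['name']] = dim['value']

print(series)
# e.g. {'cpu.cpu1.guest': 0.0, 'cpu.cpu1.idle': 98.989899, ...}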

Here is another example:

In [9]: d['ipv4.sockstat_tcp_sockets']
Out[9]:
{'context': 'ipv4.sockstat_tcp_sockets',
 'dimensions': {'alloc': {'name': 'alloc', 'value': 26.0},
  'inuse': {'name': 'inuse', 'value': 11.0},
  'orphan': {'name': 'orphan', 'value': 0.0},
  'timewait': {'name': 'timewait', 'value': 0.0}},
 'last_updated': 1518323931,
 'name': 'ipv4.sockstat_tcp_sockets',
 'units': 'sockets'}

Recipe

The recipe to implement this flattened schema is as follows:

create mongodb database connection
create mongodb collection

while true (forever loop):

    get netdata data in json format from url

    for key in json keys:
        get timestamp
        get dimensions sub-dictionary
        for each entry in sub-dictionary:
            create flattened key, value pair

    collection.insert( all key value pairs, plus timestamp )

    sleep for N seconds

Recipe Code

Link: https://git.charlesreid1.com/data/netdata/src/master/netdata_mongo.py


"""
Netdata Mongo

This script requests data from the Netdata API in JSON format,
parses the result, and stores it in a MongoDB database.
"""

from datetime import datetime
import pymongo, time, requests, json

db_name = 'netdata'
collection_name = 'jupiter'

# MongoDB
client = pymongo.MongoClient('10.6.0.1', 27017)
db = client[db_name]

while True:

    # Netdata
    my_url = 'http://10.6.0.1:19999/api/v1/allmetrics?format=json'
    r = requests.get(url=my_url)
    d = r.json()

    collection = db[collection_name]

    # Use each key in the original Netdata bundle
    # as a "key prefix"
    to_insert = {}
    for key in d.keys():
        
        data = d[key]

        prefix = data['name']
        timestamp = datetime.fromtimestamp(data['last_updated'])

        # values_data is a list of dictionaries, each carrying a
        # 'name' (label) and a 'value' (the numerical measurement)
        values_data = list(data['dimensions'].values())

        # Assemble final key-value pair going into MongoDB
        for v in values_data:
            this_label = prefix + "." + v['name']
            this_value = v['value']
            to_insert[this_label] = this_value
        # each group carries its own last_updated timestamp; the last
        # one seen wins (all groups update within the same second here)
        to_insert['timestamp'] = timestamp

    # Insert into MongoDB
    try:
        # insert_one replaces the deprecated insert(); the dotted field
        # names (e.g. 'cpu.cpu1.idle') require a server/driver that
        # permits '.' in field names (MongoDB 3.6+, PyMongo 4+)
        collection.insert_one(to_insert)
    except pymongo.errors.OperationFailure as e:
        print(e.code)
        print(e.details)

    print("Inserted document into collection.")
    time.sleep(10)
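
Once documents are accumulating, the flattened series can be queried back out of MongoDB. Here is a minimal, hypothetical sketch, assuming the same database and collection names used above and the cpu.cpu1.idle series from the earlier example:

import pymongo

client = pymongo.MongoClient('10.6.0.1', 27017)
collection = client['netdata']['jupiter']

# fetch the ten most recent documents and print one flattened series
for doc in collection.find().sort('timestamp', pymongo.DESCENDING).limit(10):
    print(doc['timestamp'], doc.get('cpu.cpu1.idle'))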
