Date created: Sunday, October 17, 2021 5:44:02 PM. Last modified: Sunday, December 3, 2023 4:22:47 PM

Prometheus + InfluxDB + Grafana (Docker on ARM64)

Running Prometheus (and exporters) and Grafana in Docker on Ubuntu 20.04 on ARM64 (ODROID-N2).

Docker Install and Test

# Install Docker
$ wget https://get.docker.com -O docker-install.sh && chmod a+x docker-install.sh
$ ./docker-install.sh

$ sudo systemctl enable docker

$ sudo usermod -aG docker bensley

# Run the test container (this will download the hello-world container from Docker hub)
$ sudo docker run hello-world

# With IPv6 only connectivity, if the hello-world container is missing (it will be on a fresh install),
# then docker will fail to download it, because the Docker hub is IPv4 only.
# IPv6 isn't natively supported on Docker hub yet (still in beta), so one must:
# Register for a free account at: https://hub.docker.com
# Run: $ sudo docker login registry.ipv6.docker.com
# Login with the new free account details
# Manually pull any images whilst specifying the IPv6 hub address:
# e.g. $ sudo docker pull registry.ipv6.docker.com/library/hello-world:latest
# or $ sudo docker pull registry.ipv6.docker.com/docker/foobar:latest
# Note that the /library URL is for official Docker images
# Then run the container using:
# $ sudo docker run registry.ipv6.docker.com/library/hello-world
# In any Dockerfile, instead of using: FROM ubuntu:20.04
# One must now use: FROM registry.ipv6.docker.com/library/ubuntu:20.04
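The image-name rewrite above can be sketched as a small helper (the function name is hypothetical; the registry prefix is the one given above, and only applies to official "library" images):

```shell
# Hypothetical helper: prefix an official ("library") image name with the
# IPv6-reachable Docker Hub registry mentioned above.
ipv6_image() {
    printf 'registry.ipv6.docker.com/library/%s\n' "$1"
}

ipv6_image "ubuntu:20.04"
```

Usage would then be e.g. `sudo docker pull "$(ipv6_image hello-world:latest)"`.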

$ sudo docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6553123d598 hello-world "/hello" 12 minutes ago Exited (0) 12 minutes ago wizardly_varahamihira

$ sudo docker container rm f6553123d598
f6553123d598

$ sudo docker image rm 18e5af790473
Untagged: hello-world:latest
Untagged: hello-world@sha256:37a0b92b08d4919615c3ee023f7ddb068d12b8387475d64c622ac30f45c29c51
Deleted: sha256:18e5af7904737ba5ef7fbbd7d59de5ebe6c4437907bd7fc436bf9b3ef3149ea9
Deleted: sha256:64df1d35ad6c0c754bb2fb894dc41c8d22497dec795ee030971903774ad1c00d

# Try an ARM64 Ubuntu container
$ sudo docker image pull ubuntu:21.10

$ sudo docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu 21.10 f99cc3098237 38 hours ago 69.9MB

$ sudo docker run -it --rm ubuntu:21.10

# Ensure docker and containerd are set to start on system boot:
$ sudo systemctl enable docker.service
$ sudo systemctl enable containerd.service

 

Install Prometheus Container

# Pull Prometheus container
$ sudo docker pull prom/prometheus

# Create a directory which will store the Prometheus config and create a base config.
$ sudo mkdir /opt/prometheus
$ sudo chown bensley:bensley /opt/prometheus
$ cd /opt/prometheus
$ vi prometheus.yml

# Example starter config which allows Prometheus to monitor itself:
global:
  scrape_interval: 15s # Default: 1m

scrape_configs:
  # This is the local Prometheus instance...
  - job_name: 'prometheus'
    #metrics_path: "/metrics"
    #scheme: "http"
    scrape_interval: 5s # Override global default
    static_configs:
      - targets: ['localhost:9090']

# Start the container in interactive mode with self-deletion after exit, and bind-mount this config file to test that it's working
$ docker run -it --rm -p 9090:9090 -v /opt/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
# Prometheus will now have a primitive web interface running on port 9090; it exposes its own stats at http://localhost:9090/metrics

# To start in daemon mode with a name use:
$ sudo docker run -d -p 9090:9090 --restart always -v /opt/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml --name prometheus prom/prometheus

 

Updating Prometheus Config

When updates are made to prometheus.yml, Prometheus can be asked to reload its config in one of two ways:

# 1. Send a SIGHUP signal to the Prometheus container
$ docker ps | grep prometheus
71cd0226d49b prom/prometheus "/bin/prometheus --c…" 2 days ago Up 42 hours 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp prometheus

$ sudo docker kill --signal="SIGHUP" 71cd0226d49b


# 2. Post to the reload API. Note: This must have been enabled when the docker container was first started (--web.enable-lifecycle) - it is disabled by default for security reasons.
# When adding a new CLI arg to the docker run command, all the default args are wiped out and need to be specified too:

$ sudo docker run -d -p 9090:9090 --restart always -v /opt/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml --name prometheus prom/prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus --web.console.libraries=/usr/share/prometheus/console_libraries --web.console.templates=/usr/share/prometheus/consoles --web.enable-lifecycle

# Then one can POST to the reload API endpoint using curl;
$ curl -X POST http://localhost:9090/-/reload

 

Install Grafana Container

# Pull Grafana container
$ sudo docker pull grafana/grafana
# Test it in interactive mode with self-deletion after exit
$ sudo docker run -it --rm -p 3000:3000 grafana/grafana
# The main Grafana web interface will now be available at http://localhost:3000

# To start in daemon mode with a name use:
$ sudo docker run -d -p 3000:3000 --restart always --name grafana grafana/grafana

# Then one can start/stop by name
$ sudo docker stop grafana
$ sudo docker start grafana

 

Install and Config InfluxDB as Container

# UPDATE: InfluxDB v2.x no longer works with Prometheus due to Influx API changes.
# The latest v1.x InfluxDB must be used instead (v1.8 at the time of writing).

##################################################
# v1.8 SETUP
# Grab latest v1.x version
docker pull influxdb:1.8

# Create mount point for persistent database storage and to store config file on host
$ sudo mkdir /opt/influxdb
$ sudo chown bensley:bensley /opt/influxdb/
$ cd $_

# Build the default config file:
$ docker run --rm influxdb:1.8 influxd config > influxdb.conf

# Start the container bind-mounting the config file and volume-mounting the persistent storage, on the host
# Influx v1.8 listens on two TCP ports:
# 8086 is for client-server communication using the InfluxDB API.
# 8088 is for the RPC service to perform back up and restore operations.

# Run a test:
$ docker run -it --rm -p 8086:8086 -p 8088:8088 --name influxdb --volume /opt/influxdb/:/var/lib/influxdb -v /opt/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf influxdb:1.8

# If all is OK, then run as a daemon:
$ docker run -d -p 8086:8086 -p 8088:8088 --restart always --name influxdb --volume /opt/influxdb/:/var/lib/influxdb -v /opt/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf influxdb:1.8

# Connect to the InfluxDB instance and create a new database for Prometheus.
# To do this - start bash in the container then use the influx CLI tool (which connects to localhost:8086 by default).
$ docker exec -it influxdb bash
$ influx -precision rfc3339
# Only the default _internal database should exist initially:
> show databases;
# Create new DB called "prometheus" and check
> CREATE DATABASE prometheus;
> show databases;

# Optionally, one can create a retention policy because the default policy has an infinite retention period:
> use prometheus
> show RETENTION POLICIES
name duration shardGroupDuration replicaN default
---- -------- ------------------ -------- -------
autogen 0s 168h0m0s 1 true
> ALTER RETENTION POLICY "autogen" on "prometheus" DURATION 26w
> show RETENTION POLICIES
name duration shardGroupDuration replicaN default
---- -------- ------------------ -------- -------
autogen 4368h0m0s 168h0m0s 1 true
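The 26w duration is reported back as 4368h because Influx normalises durations to hours: 26 weeks x 7 days x 24 hours = 4368 hours. A quick shell check:

```shell
# 26 weeks expressed in hours, matching the retention policy output above
echo $((26 * 7 * 24))
```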

# Add the InfluxDB API endpoints to the Prometheus config file
$ vi /opt/prometheus/prometheus.yml

remote_write:
  - url: "http://192.168.58.6:8086/api/v1/prom/write?db=prometheus"

remote_read:
  - url: "http://192.168.58.6:8086/api/v1/prom/read?db=prometheus"

# Signal Prometheus to reload its config
$ docker kill --signal="SIGHUP" prometheus
##################################################

##################################################
# v2.0.9 SETUP - Doesn't work with Prometheus!!!!!
# Get latest docker image (v2.0.9 at the time of writing)
# $ docker pull influxdb:latest

# Create mount point for persistent data storage and to store config file on host
$ sudo mkdir /opt/influxdb
$ sudo chown bensley:bensley /opt/influxdb/
$ cd $_

# Build the default config file:
$ docker run --rm influxdb:latest influxd print-config > config.yml

# Start the container bind-mounting the config file, volume-mounting persistent data storage on the host, and using the Influx CLI ARG --reporting-disabled to stop Influx from sending telemetry back to Influx Corp.

$ docker run -d -p 8086:8086 -p 8088:8088 --restart always --name influxdb --volume /opt/influxdb/:/var/lib/influxdb2 -v /opt/influxdb/config.yml:/etc/influxdb2/config.yml influxdb:latest --reporting-disabled

# The web GUI for initial setup is now available at http://localhost:8086
# The influx CLI isn't needed for InfluxDB 2.x
# In the Influx web GUI, create an API token for Prometheus.
# Test with:
$ curl --get "http://localhost:8086/api/v2" \
--header "Authorization: Token YOUR_API_TOKEN" \
--header 'Content-type: application/json' \
--data-urlencode "db=mydb" \
--data-urlencode "q=SELECT * FROM cpu_usage"
##################################################

 

Install and Configure snmp_exporter as Container

# Ubuntu 20.04...
$ sudo apt-get install golang git make unzip build-essential libsnmp-dev p7zip-full

# This system only has go 1.12 in the repo - need to manually install go1.14 to meet snmp_exporter minimum requirements
$ wget https://golang.org/dl/go1.14.15.linux-arm64.tar.gz
$ tar -xvf go1.14.15.linux-arm64.tar.gz
$ sudo mv go /opt/go1.14.15
$ PATH=/opt/go1.14.15/bin/:$PATH

# Normal install resumes here..
$ git clone https://github.com/prometheus/snmp_exporter.git
$ cd snmp_exporter
$ make

# Comment out the line in the SNMP config which prevents the loading of MIBs.
# Commenting this out will enable the SNMP libraries to load MIBs so that we can get the OID names:
sudo vi /etc/snmp/snmp.conf

# Use the generator to compile SNMP MIBs into a config file for snmp_exporter
cd generator/
mkdir ~/.snmp/mibs/
./generator generate
./generator parse_errors
cp ./snmp.yml ../
cd ../

# Now start snmp_exporter using this generated config file
$ ./snmp_exporter

# A HTTP endpoint is now available http://localhost:9116/snmp
# When combined with the "module" and "target" arguments, snmp_exporter will
# poll the specific OIDs under "module" in the config file, against "target"
# http://localhost:9116/snmp?module=cisco_ios&target=192.0.2.1
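Building that probe URL from a module and target can be sketched with a small helper (the function name is hypothetical; host and port as above):

```shell
# Hypothetical helper: build an snmp_exporter probe URL for a given
# module (set of OIDs in snmp.yml) and target (device to poll).
snmp_probe_url() {
    printf 'http://localhost:9116/snmp?module=%s&target=%s\n' "$1" "$2"
}

snmp_probe_url "cisco_ios" "192.0.2.1"
```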

# Set up and run snmp_exporter as a docker container
$ vi Dockerfile
# Change the Dockerfile ARCH to be "arm64"
# Docker wouldn't build until the COPY line for the snmp_exporter binary was changed to:
# COPY ./snmp_exporter /bin/snmp_exporter
$ docker build -t snmp_exporter-0.20.0 .

$ sudo mkdir /opt/snmp_exporter
$ sudo chown bensley:bensley /opt/snmp_exporter
$ cp snmp.yml /opt/snmp_exporter/

# Quick test:
$ docker run -it --rm -p 9116:9116 -v /opt/snmp_exporter/snmp.yml:/etc/snmp_exporter/snmp.yml --name snmp_exporter snmp_exporter-0.20.0
$ curl "http://localhost:9116/snmp?module=cisco_ios&target=192.0.2.1"
# Now run a daemon
$ docker run -d -p 9116:9116 --restart always -v /opt/snmp_exporter/snmp.yml:/etc/snmp_exporter/snmp.yml --name snmp_exporter snmp_exporter-0.20.0

# Add snmp_exporter to the Prometheus config
sudo vi /opt/prometheus/prometheus.yml
# Add the following:
  - job_name: 'ios_cpe'
    static_configs:
      - targets:
          - 192.168.58.1 # Cisco CPE
    metrics_path: /snmp
    params:
      module: [cisco_ios]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 192.168.58.6:9116 # The SNMP exporter's real hostname:port.

# Reload Prometheus config
$ curl -X POST http://localhost:9090/-/reload
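The relabel_configs block above effectively redirects the scrape: the listed target becomes a URL parameter and the instance label, while the address Prometheus actually scrapes becomes the exporter. A rough shell sketch of the outcome (illustrative only, not how Prometheus implements it):

```shell
# Simulate the effect of the relabel rules on one static target
target="192.168.58.1"         # the original __address__ from static_configs
param_target="$target"        # __address__ -> __param_target
instance="$param_target"      # __param_target -> instance label
address="192.168.58.6:9116"   # __address__ replaced with the exporter

# The scrape Prometheus actually performs:
echo "http://${address}/snmp?module=cisco_ios&target=${param_target} (instance=${instance})"
```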

 

Install and Configure blackbox_exporter as Container

# Pre-req: One needs to enable IPv6 for Docker if it isn't already (assuming IPv6 probes are wanted)!
# Example (create this file if it doesn't exist then restart docker):
#$ cat /etc/docker/daemon.json
#{
# "default-address-pools": [
# {"base":"172.16.0.0/16","size":24}
# ],
# "ipv6": true,
# "fixed-cidr-v6": "2001:db8:1::/64"
#}
# This creates a routed subnet with the ::1 address on the docker0 bridge interface.
# Alternatively, bind the docker container to the host's IPv4/IPv6 address(es) using:
# docker --network host


# Install latest version at the time of writing:
wget https://github.com/prometheus/blackbox_exporter/releases/download/v0.19.0/blackbox_exporter-0.19.0.linux-arm64.tar.gz
tar -xvf blackbox_exporter-0.19.0.linux-arm64.tar.gz
cd blackbox_exporter-0.19.0.linux-arm64

# All IPv4 ICMP tests will fail by default - a special capability is required to use ICMP.
# We can add it to the binary for testing:
$ sudo setcap cap_net_raw+ep blackbox_exporter

# Create a config, example below:
vi blackbox.yml

modules:
  icmpv4:
    timeout: 3s
    prober: icmp
    icmp:
      preferred_ip_protocol: "ip4"
      ip_protocol_fallback: false
      dont_fragment: true
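Assuming IPv6 probes are wanted (per the pre-req above), a matching icmpv6 module might look like this (a sketch following the same pattern; dont_fragment is omitted because the DF bit is IPv4-only):

```
  icmpv6:
    timeout: 3s
    prober: icmp
    icmp:
      preferred_ip_protocol: "ip6"
      ip_protocol_fallback: false
```

It would then be probed with e.g. http://localhost:9115/probe?target=2001:db8::1&module=icmpv6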

# Check config and test blackbox_exporter:
$ ./blackbox_exporter --config.check
$ ./blackbox_exporter
# Web GUI is now available at http://localhost:9115
# Test a ping using:
$ curl http://localhost:9115/probe?target=192.0.2.1\&module=icmpv4

# Now build a docker. We can add the ICMP capability to the docker container too using:
# --cap-add CAP_NET_RAW
# This requires that the container be run as root though!
$ wget https://raw.githubusercontent.com/prometheus/blackbox_exporter/master/Dockerfile
# Edit Dockerfile ARCH to "arm64"
# Change binary COPY line to:
# COPY ./blackbox_exporter /bin/blackbox_exporter
vi Dockerfile
$ docker build -t blackbox_exporter-0.19.0 .

$ sudo mkdir /opt/blackbox_exporter/
$ sudo chown bensley:bensley /opt/blackbox_exporter
$ cp blackbox.yml /opt/blackbox_exporter/

$ sudo docker run -d --network host -p 9115:9115 --restart always --cap-add CAP_NET_RAW -v /opt/blackbox_exporter/blackbox.yml:/etc/blackbox_exporter/config.yml --name blackbox_exporter blackbox_exporter-0.19.0 --config.file=/etc/blackbox_exporter/config.yml

# Add the targets to Prometheus:
$ vi /opt/prometheus/prometheus.yml

  - job_name: 'blackbox-icmpv4'
    metrics_path: /probe
    params:
      module: [icmpv4]
    static_configs:
      - targets:
          - 192.168.0.1 # gateway
          - e.root-servers.net
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: localhost:9115 # The blackbox exporter's hostname:port

# Reload Prometheus to load the new config
$ curl -X POST http://localhost:9090/-/reload

 

Install and Configure node_exporter as Container

# Download latest version at time of writing
$ wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-arm64.tar.gz
$ tar -xzvf node_exporter-1.2.2.linux-arm64.tar.gz
$ cd node_exporter-1.2.2.linux-arm64

# Create a Dockerfile because one isn't included with node_exporter:
vi ./Dockerfile
ARG ARCH="arm64" # "armv7" for RaspPi
ARG OS="linux"

FROM ubuntu:20.04
COPY ./node_exporter /bin/node_exporter

EXPOSE 9100
ENTRYPOINT [ "/bin/node_exporter" ]
CMD [ "--no-collector.arp", "--no-collector.bcache", "--no-collector.btrfs", "--no-collector.entropy", "--no-collector.fibrechannel", "--no-collector.infiniband", "--no-collector.ipvs", "--no-collector.nfs", "--no-collector.nfsd", "--no-collector.schedstat", "--no-collector.tapestats", "--no-collector.uname", "--no-collector.xfs", "--no-collector.zfs", "--no-collector.cpufreq", "--no-collector.netstat", "--no-collector.sockstat", "--no-collector.timex", "--no-collector.vmstat" ]

$ docker build -t node_exporter-1.2.2 .
# Test the container:
$ docker run -it --rm -p 9100:9100 --name node_exporter node_exporter-1.2.2

# If OK, run as daemon (node_exporter is designed to run unprivileged; run as root only if you need to monitor something not available to an unprivileged user):
$ docker run -d -p 9100:9100 --restart always --name node_exporter node_exporter-1.2.2

# Optionally, one can expose the host file system to the container as read-only, to expose more stats.
# Add the following to the CMD list in the Dockerfile and build the container again:
"--path.rootfs=/hostfs",
# Then create a RO bind mount between host and container using the docker command:
$ docker run -d -p 9100:9100 -v "/:/hostfs:ro,rslave" --restart always --name node_exporter node_exporter-1.2.2
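node_exporter serves metrics in the Prometheus text exposition format (one "name value" pair per line). A quick way to pull a single value out of a scrape, shown here on a canned sample line (the metric value is illustrative):

```shell
# A sample line as served at /metrics (illustrative value)
sample='node_load1 0.52'

# Extract the value for a given metric name with awk
printf '%s\n' "$sample" | awk '$1 == "node_load1" { print $2 }'
```

In practice, pipe a live scrape through the same filter: curl -s http://localhost:9100/metrics | awk '$1 == "node_load1" { print $2 }'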


# The HTTP endpoint is now available at http://localhost:9100/metrics
# Enable in Prometheus:
$ vi /opt/prometheus/prometheus.yml

  - job_name: 'node_exporter'
    metrics_path: /metrics
    static_configs:
      - targets: ['192.168.58.6:9100']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 192.168.58.6:9100 # The node exporter's hostname:port

 

Install and Configure pihole_exporter as Container

This can run on the Docker server and be polled locally by Prometheus. The PiHole needs a web API token. This is automatically created if an admin password was set when PiHole was set up. If no admin password was set, first set one using "$ pihole -a -p", then get the derived API token using "$ grep WEBPASSWORD /etc/pihole/setupVars.conf".
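As I understand it, the WEBPASSWORD value PiHole derives is a double SHA-256 of the admin password (hex encoded). A sketch of that derivation (hypothetical function name, assuming GNU coreutils sha256sum; verify against your own setupVars.conf):

```shell
# Sketch: derive a PiHole-style WEBPASSWORD hash from a password.
# Assumed scheme: sha256(hex(sha256(password))), hex encoded.
derive_pihole_token() {
    local first
    first=$(printf '%s' "$1" | sha256sum | awk '{ print $1 }')
    printf '%s' "$first" | sha256sum | awk '{ print $1 }'
}

derive_pihole_token "changeme"
```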

# Don't pull the latest version from Docker Hub; the latest is for amd64 only. Pull the most recent release which supports arm64:
$ docker pull ekofr/pihole-exporter:v0.0.11

# Give it a test run. Pass environment variables to the container to specify the details of the PiHole server:
$ docker run -it --rm -e "PIHOLE_HOSTNAME=192.168.0.1" -e "PIHOLE_API_TOKEN=abc123" -e "INTERVAL=60s" -e "PORT=9617" --name pihole-exporter ekofr/pihole-exporter:v0.0.11

# Access the metrics page to check it's working:
$ curl http://127.0.0.1:9617/metrics

# CTRL+C the test container and now run it permanently:
$ docker run -d -p 9617:9617 --restart always -e "PIHOLE_HOSTNAME=192.168.0.1" -e "PIHOLE_API_TOKEN=abc123" -e "INTERVAL=60s" -e "PORT=9617" --name pihole-exporter ekofr/pihole-exporter:v0.0.11

# Update Prometheus to poll pihole_exporter:
$ vi /opt/prometheus/prometheus.yml

  - job_name: 'pihole'
    metrics_path: /metrics
    static_configs:
      - targets: ['192.168.58.6:9617'] # The pihole exporter's hostname:port

 

Install and Configure unbound_exporter as Container

This is run on the unbound server and remotely polled by Prometheus. It required an unbound upgrade to a newer version than that in the Raspbian repo: the repo version generates SSL certificates without a SAN, and statistics collection via unbound_exporter won't work because Go has deprecated support for SSL certs without a SAN. After building the latest unbound and running "$ sudo unbound-control-setup", the generated certs were still created without a SAN. "unbound-control-setup" is a shell script, so manually apply the patch details from https://github.com/letsencrypt/unbound_exporter/issues/20 and then use "$ sudo unbound-control-setup -r" to regenerate the certificates. Use "$ sudo openssl x509 -in /etc/unbound/unbound_server.pem -text -noout" to check that "X509v3 Subject Alternative Name" and "X509v3 Basic Constraints: critical" are present.

Compiling the newest unbound_exporter also required the installation of a newer version of Go than was available in the Raspbian repo.

$ cd /opt
$ git clone https://github.com/letsencrypt/unbound_exporter.git
$ cd unbound_exporter
$ go build
$ go install

# Enable remote statistics in the unbound config:
$ sudo vi /etc/unbound/unbound.conf

server:
    # Enable extra stats for Prometheus.
    # Requires remote control be also enabled...
    extended-statistics: yes
    statistics-cumulative: no

# enable remote-control
remote-control:
    control-enable: yes
    control-interface: 172.17.0.1 # Note that this is the docker0 interface on the unbound server
    control-port: 8953

# Restart unbound

# Create a Dockerfile because one isn't included with unbound_exporter:
$ vi ./Dockerfile

ARG ARCH="armv7" # for RaspPi
ARG OS="linux"

FROM ubuntu:20.04
COPY ./unbound_exporter /bin/unbound_exporter

EXPOSE 9167
ENTRYPOINT [ "/bin/unbound_exporter" ]
CMD ["-unbound.host", "tcp://172.17.0.1:8953"] # Note that this is the docker0 interface on the unbound server
# Build the container
$ docker build -t unbound_exporter .
# Test the container.
# unbound_exporter needs access to the SSL certificates and keys to connect to the control socket to get stats.
# Map the directory containing these certificates and keys using the docker -v option:
$ docker run -it --rm -p 9167:9167 --name unbound_exporter -v /etc/unbound/:/etc/unbound/ unbound_exporter

# If OK, run as daemon:
$ docker run -d -p 9167:9167 --restart always --name unbound_exporter -v /etc/unbound/:/etc/unbound/ unbound_exporter

# Now add to Prometheus
$ vi /opt/prometheus/prometheus.yml

  - job_name: 'unbound'
    metrics_path: /metrics
    static_configs:
      - targets: ['192.168.58.2:9167'] # The unbound exporter's hostname:port

 

