
Deploy monit in OpenShift


Monit is a tried-and-true method for monitoring all kinds of systems, services, and network endpoints. Deploying monit is easy. There’s only one binary daemon to run and it reads monitoring configuration from files in a directory you specify.

Most Linux distributions have a package for monit and the package usually contains some basic configuration along with a systemd unit file to run the daemon reliably.

However, this post is all about deploying it inside OpenShift. Running monit inside OpenShift lets you monitor services in the cluster that have no route or NodePort configured, and it can monitor systems outside OpenShift, too.

Monit in a container

Before we can put monit into a container, we need to think about what it requires. At the most basic level, we will need:

  • the monit daemon binary
  • a very basic config, the .monitrc
  • a directory to hold lots of additional monitoring configs
  • any packages needed for running monitoring scripts

In my case, some of the scripts I want to run require curl, httpie (for complex HTTP/JSON requests), and jq (for parsing JSON). I’ve added those, along with some requirements for the monit binary, to my container build file:

FROM fedora:latest

# Upgrade packages and install monit.
RUN dnf -y upgrade
RUN dnf -y install coreutils httpie jq libnsl libxcrypt-compat
RUN dnf clean all

# Install monit. MONIT_URL is a placeholder build argument; pass the
# URL of the monit binary tarball at build time.
ARG MONIT_URL
RUN curl -Lso /tmp/monit.tgz "${MONIT_URL}"
RUN cd /tmp && tar xf monit.tgz
RUN mv /tmp/monit-*/bin/monit /usr/local/bin/monit
RUN rm -rf /tmp/monit*

# Remove monit user/group.
RUN sed -i '/^monit/d' /etc/passwd
RUN sed -i '/^monit/d' /etc/group

# Work around OpenShift's arbitrary UID/GIDs.
RUN chmod g=u /etc/passwd /etc/group

# The monit server listens on 2812.
EXPOSE 2812

# Set up a volume for /config.
VOLUME ["/config"]

# Start monit when the container starts.
# ("start.sh" is a placeholder name for the startup script shown below.)
COPY extras/start.sh /opt/start.sh
RUN chmod +x /opt/start.sh
CMD ["/opt/start.sh"]

Let’s break down what’s here in the container build file:

  • Install some basic packages that we need in the container
  • Download monit and install it to /usr/local/bin/monit
  • Remove the monit user/group (more on this later)
  • Make /etc/passwd and /etc/group writable by the root group (more on this later)
  • Expose the default monit port
  • Run our special startup script

The last three parts help us run with OpenShift’s strict security requirements.
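The `chmod g=u` trick deserves a quick illustration. OpenShift runs containers with an arbitrary UID that always belongs to the root group (GID 0), so copying the user permission bits onto the group bits makes a root-owned file writable by that arbitrary user. A minimal local sketch (using a throwaway file, not the real /etc/passwd):

```shell
# Copy the user permission bits to the group bits, just like the
# `chmod g=u /etc/passwd /etc/group` step in the build file.
touch demo-passwd
chmod 644 demo-passwd          # rw-r--r--
chmod g=u demo-passwd          # group bits now match user bits
stat -c '%a' demo-passwd       # prints 664
rm -f demo-passwd
```

Any process in GID 0 can now write the file, which is exactly what the startup script below relies on.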

Startup script

Monit has some strict security requirements for startup. It requires that the monit daemon is started by the same user/group combination that owns the initial configuration file (.monitrc). That’s why we removed the monit user/group and made /etc/passwd and /etc/group writable during the build step. We need to add those entries back in once the container starts and we’ve received our arbitrary UID from OpenShift.

(For more on OpenShift’s arbitrary UIDs, read my other post about Running Ansible in OpenShift with arbitrary UIDs.)

Here’s the startup script:

#!/bin/bash
set -euxo pipefail

echo "The home directory is: ${HOME}"

# Work around OpenShift's arbitrary UID/GIDs.
if [ -w '/etc/passwd' ]; then
    echo "monit:x:$(id -u):$(id -g):,,,:${HOME}:/bin/bash" >> /etc/passwd
fi
if [ -w '/etc/group' ]; then
    echo "monit:x:$(id -G | cut -d' ' -f 2)" >> /etc/group
fi

# Make a basic monitrc.
echo "set daemon 30" > "${HOME}"/monitrc
echo "include /config/*" >> "${HOME}"/monitrc
chmod 0700 "${HOME}"/monitrc

# Ensure the UID/GID mapping works.
id

# Run monit.
/usr/local/bin/monit -v -I -c "${HOME}"/monitrc

Let’s talk about what is happening in the script:

  1. Add the monit user to /etc/passwd with the arbitrary UID
  2. Do the same for the monit group in /etc/group
  3. Create a very basic .monitrc that is owned by the monit user and group
  4. Run monit in verbose mode in the foreground with our .monitrc

OpenShift will make an emptyDir volume in /config that we can modify since we specified a volume in the container build file.
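To see what step 1 produces, here is the passwd line the script would append for a sample arbitrary UID (1000650000 and the /opt/monit home directory are made-up example values, substituted for the live `id` and `${HOME}` output):

```shell
# Simulate the passwd entry with example values instead of live `id` output.
uid=1000650000
gid=0
home=/opt/monit
echo "monit:x:${uid}:${gid}:,,,:${home}:/bin/bash"
# prints: monit:x:1000650000:0:,,,:/opt/monit:/bin/bash
```

With that entry in place, the monit daemon sees a named user that matches the owner of the .monitrc and starts without complaint.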

Deploying monit

Now that we have a container and a startup script, it’s time to deploy monit in OpenShift.

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  generation: 1
  labels:
    app: monit
  name: monit
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    app: monit
    deploymentconfig: monit
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      labels:
        app: monit
        deploymentconfig: monit
    spec:
      containers:
      - image:   # registry URL elided in the original
        imagePullPolicy: Always
        name: monit
        resources:
          limits:
            cpu: 100m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 512Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /config
          name: monit-config
        - mountPath: /scripts
          name: monit-scripts
      dnsPolicy: ClusterFirst
      hostname: monit-in-openshift
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 0420
          name: monit-config
        name: monit-config
      - configMap:
          defaultMode: 0755
          name: monit-scripts
        name: monit-scripts
  test: false
  triggers:
  - type: ConfigChange

There is a lot of text here, but there are two important parts:

  • The container image is pre-built from my monit GitLab repository (feel free to use it!)
  • The volumes refer to the OpenShift configmaps that hold the monit configurations as well as the scripts that are called for monitoring
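Those two configmaps can be created directly from local files with oc create configmap. The file names below are hypothetical; adjust them to your own layout. The key names are what matter, since they become the file names mounted under /config and /scripts:

```shell
# Hypothetical local file names; the keys become file names in the pod.
oc create configmap monit-config --from-file=config=monit-checks.cfg
oc create configmap monit-scripts --from-file=check_header.sh
```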

Next comes the service (which allows the monit web port to be exposed inside the OpenShift cluster):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: monit
  name: monit
spec:
  ports:
  - port: 2812
    protocol: TCP
    targetPort: 2812
  selector:
    app: monit
    deploymentconfig: monit
  sessionAffinity: None
  type: ClusterIP

And finally, the route (which exposes the monit web port service outside the OpenShift cluster):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: monit
  name: monit
spec:
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: monit
    weight: 100
  wildcardPolicy: None

Monitoring configuration and scripts

The deploymentConfig for monit refers to a configMap called monit-config. This configMap contains all of the additional monitoring configuration for monit outside of the .monitrc. Here is a basic configMap with a simple availability check:

apiVersion: v1
kind: ConfigMap
metadata:
  name: monit-config
data:
  config: |
    set daemon 30
    set httpd port 2812
    set alert
    set mailserver

    check host "icanhazheaders responding" with address
      if failed
        port 80
        for 2 cycles
      then alert

    check program "icanhazheaders header check"
      with path "/scripts/ ACCEPT-ENCODING 'gzip'"
      if status gt 0
        then exec "/scripts/"
        else if succeeded then exec "/scripts/"

This configuration alerts only if the check fails for two consecutive cycles. Each cycle is 30 seconds, so the site would need to be inaccessible for 60 seconds before an alert is sent.
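The arithmetic generalizes: time-to-alert is simply the daemon interval multiplied by the cycle count, so tuning either value trades responsiveness against false alarms.

```shell
# set daemon 30  ->  a check runs every 30 seconds
# for 2 cycles   ->  two consecutive failures required before alerting
daemon_interval=30
cycles=2
echo $(( daemon_interval * cycles ))   # prints 60 (seconds until alert)
```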

Also, there is a second check that runs a script. Let’s deploy the script to OpenShift as well:

apiVersion: v1
kind: ConfigMap
metadata:
  name: monit-scripts
data:
  # "check_header.sh" is a placeholder key; the script filename was
  # elided in the original.
  check_header.sh: |
    #!/bin/bash
    set -euo pipefail

    # URL, HEADER, and EXPECTED_VALUE were set here (values elided).

    HEADER_VALUE=$(curl -s ${URL} | jq -r ${HEADER})

    if [[ $HEADER_VALUE == $EXPECTED_VALUE ]]; then
      exit 0
    else
      exit 1
    fi

Use oc apply to deploy all of these YAML files to your OpenShift cluster, and monit should be up and running within seconds!
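For example, assuming the manifests above are saved under hypothetical file names in the current directory:

```shell
# File names are assumptions; adjust to however you saved the manifests.
oc apply -f monit-deploymentconfig.yaml \
         -f monit-service.yaml \
         -f monit-route.yaml

# Watch the pod come up and tail the startup script's output.
oc get pods -l app=monit
oc logs -f dc/monit
```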