Prometheus is becoming a standard monitoring tool for various application and infrastructure needs. There is a vast open-source ecosystem of software that exposes Prometheus metrics directly. These metrics let you debug the system’s state, compare it with a previous healthy state, and build alerts and dashboards.

Prometheus simplifies DevOps engineers’ lives, as you don’t have to write custom bash scripts to monitor systems. Instead, you have a metrics platform on which to build monitoring and alerting infrastructure. As mentioned before, many open-source systems expose Prometheus metrics directly; others have community-made exporters, sidecar applications that produce Prometheus metrics by parsing the application’s internal format. This approach works great: you don’t have to keep reinventing metric collection, as you can use open-source software.

Are you interested in learning more about Prometheus? Check out Monitoring Systems and Services with Prometheus; it’s an excellent course that will get you up to speed.

But what about alerts and Grafana dashboards?

Monitoring Mixins

Monitoring Mixins try to solve this problem. The idea is simple – we bundle typical alerting configuration, Grafana dashboards, and runbooks into a single package. DevOps engineers can download a Monitoring Mixin and install it into their own Prometheus. This way, we ease the DevOps burden of writing alerting rules, Grafana dashboards, and runbooks.

One crucial feature of Monitoring Mixins is its flexible configuration, which doesn’t mandate specific labels or scraping intervals. You can configure and overwrite everything. For example, you can use your job selector labels, different metric scraping intervals, etc. Monitoring mixins allow you to configure all that and generate correct Alerting rules and working Grafana dashboards. 

Jsonnet for customizations

To achieve this flexible configuration, Monitoring Mixins use a data templating language called Jsonnet. Jsonnet allows you to patch the alerting rules provided by a Monitoring Mixin. Jsonnet is an interesting language with many standard programming language features – variables, loops, functions. For example, you can iterate over alerts and add a team label, or remove an unwanted alerting rule from a Mixin.
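As a small standalone illustration of those features (a sketch, not part of any real mixin), here is a local variable, a function, and an array comprehension that stamps a team label onto a hard-coded list of example rules:

```jsonnet
// Sketch of the Jsonnet features mixins rely on:
// variables (local), functions, and comprehensions.
local team = 'infra';                  // variable

local addTeam(rule) = rule {           // function: patch one rule object
  labels+: { team: team },
};

{
  // comprehension: apply addTeam to every rule in an example list
  rules: [
    addTeam(rule)
    for rule in [
      { alert: 'HighErrorRate', labels: { severity: 'critical' } },
      { alert: 'SlowRequests', labels: { severity: 'warning' } },
    ]
  ],
}
```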

On top of using Jsonnet, Monitoring Mixin has conventions so that all monitoring packages have a similar format and setup experience.

First, you have to have a top-level _config object for various configuration parameters. Secondly, you should add dashboards under grafanaDashboards dictionary, keyed by file name. Thirdly, Prometheus alerts are under prometheusAlerts, and rules are in prometheusRules.

Here is an example empty bundle jsonnet object:

{
  _config+:: {},

  grafanaDashboards+:: {
    'dashboard-name.json': {...},
  },

  prometheusAlerts+:: [...],
  prometheusRules+:: [...],
}

Package Management

Another critical problem that Monitoring Mixins solve is package management. Once you have a Monitoring Mixin package, you need to install it, keep track of versions and update them. 

This is where jsonnet-bundler comes in. It will keep track of your Monitoring Mixin dependencies in a jsonnetfile.json and currently installed versions in a jsonnetfile.lock.json file. To get started, you install jsonnet-bundler via:

GO111MODULE="on" go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb

To learn how to use jsonnet-bundler, let’s use it to install Kubernetes Mixin. Kubernetes Mixin provides you with a set of Grafana dashboards and Prometheus alerts that help you monitor Kubernetes.

Kubernetes Mixin

Let’s start with a new folder:

mkdir my_mixin
cd my_mixin

Initialize jsonnet-bundler and get Kubernetes Mixin:

jb init
jb install github.com/kubernetes-monitoring/kubernetes-mixin
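After the install, jsonnet-bundler records the dependency in jsonnetfile.json. It looks roughly like this (the exact fields vary between jb versions, so treat this as an illustration):

```json
{
  "version": 1,
  "dependencies": [
    {
      "source": {
        "git": {
          "remote": "https://github.com/kubernetes-monitoring/kubernetes-mixin.git",
          "subdir": ""
        }
      },
      "version": "master"
    }
  ]
}
```

The matching jsonnetfile.lock.json pins the exact commit, which is what lets you update dependencies reproducibly later.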

Now, create a new configuration file called config.libsonnet and put the following configuration:

local kubernetes = import 'kubernetes-mixin/mixin.libsonnet';

kubernetes {
  _config+:: {
    // Selectors are inserted between {} in Prometheus queries.
    cadvisorSelector: 'job="cadvisor"',
    kubeletSelector: 'job="kubelet"',
    kubeStateMetricsSelector: 'job="kube-state-metrics"',
    nodeExporterSelector: 'job="node-exporter"',
    kubeSchedulerSelector: 'job="kube-scheduler"',
    kubeControllerManagerSelector: 'job="kube-controller-manager"',
    kubeApiserverSelector: 'job="kube-apiserver"',
    kubeProxySelector: 'job="kube-proxy"',
    podLabel: 'pod',
    hostNetworkInterfaceSelector: 'device!~"veth.+"',
    hostMountpointSelector: 'mountpoint="/"',
    wmiExporterSelector: 'job="wmi-exporter"',

    // You can set some Grafana dashboard-specific config
    grafanaK8s+:: {
      dashboardNamePrefix: 'Kubernetes / ',
      dashboardTags: ['kubernetes-mixin'],

      // For links between Grafana dashboards, you need to tell us if your
      // Grafana server is hosted under some non-root path.
      linkPrefix: '',
    },

    // Opt-in to multiCluster dashboards by overriding this and the clusterLabel.
    showMultiCluster: false,

    // There are more config options; check the mixin's config.libsonnet.
    // But defaults are reasonable and should just work.
  },
}

Make sure to adjust the config file to match your existing Prometheus job configurations. If you don’t yet scrape metrics from all of the Kubernetes components, you will have to set that up now.
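For example, if your kubelet metrics come from a job with a different name – say kubernetes-nodes, a hypothetical name used here only for illustration – you only need to override that one selector:

```jsonnet
// Sketch: override a single selector to match your own Prometheus job name.
// 'kubernetes-nodes' is a hypothetical job name; use whatever your scrape
// configuration actually calls the kubelet job.
local kubernetes = import 'kubernetes-mixin/mixin.libsonnet';

kubernetes {
  _config+:: {
    kubeletSelector: 'job="kubernetes-nodes"',
  },
}
```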

Once you have this config file ready, we can use mixtool to generate alerting rules and dashboards. mixtool is a tool that can list, install, and compile Monitoring Mixins. Note that you can achieve the same thing with the jsonnet compiler, but mixtool gives you a slightly nicer experience.

To install mixtool, execute:

GO111MODULE=on go get -u github.com/monitoring-mixins/mixtool/cmd/mixtool

Now, to generate alerts, rules, and dashboards, execute:

mixtool generate all config.libsonnet

You should now see alerts.yaml, rules.yaml, and a dashboards_out directory containing the Grafana dashboards.

Now let’s see how we can use jsonnet to customize Prometheus alerting rules.

Customizing Alerts with jsonnet

Let’s say you want to add a team label to all of the alerts. You have an infrastructure team, and you want to use this label to route alerts correctly in the Alertmanager.

To do so, let’s add this little helper function in utils/utils.libsonnet:

{
  mapRuleGroups(f): {
    groups: [
      group {
        rules: [
          f(rule)
          for rule in super.rules
        ],
      }
      for group in super.groups
    ],
  },
}

This helper function calls the passed function f for each alerting rule in the Monitoring Mixin.

Now let’s use it to add a team label. Modify config.libsonnet with the following code:

local kubernetes = import 'kubernetes-mixin/mixin.libsonnet';
local utils = import 'utils/utils.libsonnet';

kubernetes {
  _config+:: {
    // Your previous config file here
  },
} + {
  local addTeam(rule) = rule {
    [if 'alert' in rule then 'labels']+: {
      team: 'infra',
    },
  },

  prometheusAlerts+:: utils.mapRuleGroups(addTeam),
}
This code will go through all alerts in the Kubernetes Mixin and add a team label.

Removing an alert with jsonnet

Let’s say you don’t like how the Kubernetes Mixin monitors storage utilization, and you would rather add your own alerting rule. Removing an alert with jsonnet is a bit harder, but still doable.

Modify your config.libsonnet and add the following patch:

kubernetes {
  _config+:: {
    // Your previous config file here
  },
} + {
  prometheusAlerts+:: {
    groups: std.map(
      function(group)
        if group.name == 'kubernetes-storage' then
          group {
            rules: std.filter(
              function(rule)
                rule.alert != 'KubePersistentVolumeFillingUp',
              group.rules
            ),
          }
        else
          group,
      super.groups
    ),
  },
}

This code iterates through the alerting rules and filters out the KubePersistentVolumeFillingUp alert in the kubernetes-storage group.
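With the built-in alert filtered out, you can append a replacement group of your own. Here is a sketch – the alert name, threshold, and annotation text are illustrative (kubelet_volume_stats_available_bytes and kubelet_volume_stats_capacity_bytes are real kubelet metrics, but tune the expression to your needs):

```jsonnet
// Sketch: add a custom storage alert in place of the removed one.
// The alert name, threshold, and message here are hypothetical examples.
{
  prometheusAlerts+:: {
    groups+: [
      {
        name: 'my-storage-alerts',
        rules: [
          {
            alert: 'MyPersistentVolumeFillingUp',
            expr: 'kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes < 0.05',
            'for': '15m',
            labels: { severity: 'warning' },
            annotations: {
              message: 'PersistentVolumeClaim {{ $labels.persistentvolumeclaim }} has less than 5% space left.',
            },
          },
        ],
      },
    ],
  },
}
```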


Hopefully, by now, you know what Monitoring Mixins are and how to get started.  

Are you interested in learning more about Prometheus? Check out Monitoring Systems and Services with Prometheus, it’s an awesome course that will get you up to speed.
