In our series, we review best-fit exporters for monitoring metrics that are used by NexClipper. Learn all about specific exporters, their most important metrics as well as recommended alert rules. We also discuss the related Grafana dashboard and Helm Chart for each specific exporter that we introduce. This fourth article of the series focuses on the MongoDB exporter – keep reading to find out more.
About MongoDB
Unlike PostgreSQL and MySQL, MongoDB is a NoSQL database: a non-relational, document-oriented database with a dynamic schema. Instead of the tables and rows of traditional relational databases, MongoDB uses collections and documents. Documents consist of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.
Since databases are a critical resource and downtime can cause significant financial and reputation losses, monitoring is a must. A MongoDB exporter is required to monitor and expose MongoDB metrics. The MongoDB exporter queries MongoDB, scrapes the data, and exposes the metrics on a Kubernetes service endpoint that can then be scraped by Prometheus to ingest the time series data.
For monitoring MongoDB, an external Prometheus exporter, maintained by the Prometheus community, can be used. On deployment, this exporter collects and exports oplog, replica set, server status, sharding, and storage engine metrics. It handles all metrics exposed by MongoDB monitoring commands: it loops over all the fields exposed in diagnostic commands and tries to get data from them. This way, the MongoDB exporter gives users crucial and continuous information about the database that is difficult to get from the DB directly.
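To make this concrete, the text the exporter serves on its /metrics endpoint follows the Prometheus exposition format and is easy to inspect programmatically. The following is a minimal sketch, with a made-up sample payload (the metric values are illustrative, not real output):

```python
# Minimal sketch: parse a (hypothetical) sample of the text a MongoDB
# exporter serves on its /metrics endpoint.
SAMPLE = """\
# HELP mongodb_up Whether the last scrape of MongoDB was successful.
# TYPE mongodb_up gauge
mongodb_up 1
mongodb_connections{state="current"} 42
mongodb_connections{state="available"} 51158
"""

def parse_metrics(payload: str) -> dict:
    """Return {metric_with_labels: float} for every non-comment line."""
    metrics = {}
    for line in payload.splitlines():
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

metrics = parse_metrics(SAMPLE)
print(metrics["mongodb_up"])  # 1.0 means the exporter could reach MongoDB
```

In a real setup you would fetch this payload from the exporter's service endpoint; Prometheus performs the equivalent parsing on every scrape.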
How do you set up an exporter for Prometheus?
With the latest version of Prometheus (2.33 as of February 2022), these are the ways to set up a Prometheus exporter:
Method 1 – Native
Supported by Prometheus since the beginning
To set up an exporter the native way, the Prometheus config needs to be updated to add the target.
A sample configuration:
# scrape_config job
- job_name: mongodb-staging
  scrape_interval: 45s
  scrape_timeout: 30s
  metrics_path: "/metrics"
  static_configs:
    - targets:
        - <mongodb exporter endpoint>
Method 2 – Pod Discovery
This method is applicable to Kubernetes deployments only
With this, a default scrape config can be added to the prometheus.yaml file and an annotation can be added to the exporter service. Prometheus will then automatically start scraping data from the services on the mentioned path.
prometheus.yaml:
- job_name: "kubernetes-pods"
  kubernetes_sd_configs:
    - role: pod
Exporter service:
annotations:
  prometheus.io/path: /metrics
  prometheus.io/scrape: "true"
Method 3 – Prometheus Operator
Setting up a service monitor
The Prometheus operator supports an automated way of scraping data from the exporters by setting up a service monitor Kubernetes object. A sample service monitor for MongoDB can be found here.
These are the necessary steps:
Step 1
Add/update the Prometheus operator's selectors. By default, the Prometheus operator comes with empty selectors, which will select every service monitor available in the cluster for scraping data.
To check your Prometheus configuration:
kubectl get prometheus -n <namespace> -o yaml
A sample output will look like this:
ruleNamespaceSelector: {}
ruleSelector:
  matchLabels:
    app: kube-prometheus-stack
    release: kps
scrapeInterval: 1m
scrapeTimeout: 10s
securityContext:
  fsGroup: 2000
  runAsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
serviceAccountName: kps-kube-prometheus-stack-prometheus
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
  matchLabels:
    release: kps
Here you can see that this Prometheus configuration selects all service monitors with the label release = kps.
So if you are modifying the default Prometheus operator configuration for service monitor scraping, make sure you use the right labels in your service monitor as well.
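The selector mechanics above boil down to a subset check: a ServiceMonitor is picked up when its labels contain every key/value pair listed in the Prometheus serviceMonitorSelector.matchLabels. A rough Python illustration of that matching rule (not the operator's actual code):

```python
def selector_matches(match_labels: dict, object_labels: dict) -> bool:
    """True if every selector key/value pair is present on the object's labels."""
    return all(object_labels.get(k) == v for k, v in match_labels.items())

# Selector from the sample Prometheus configuration above
prometheus_selector = {"release": "kps"}

# Labels like those on the sample ServiceMonitor later in this article
service_monitor_labels = {
    "app": "prometheus-mongodb-exporter",
    "release": "kps",
}

print(selector_matches(prometheus_selector, service_monitor_labels))  # True
print(selector_matches(prometheus_selector, {"release": "other"}))    # False
```

Extra labels on the ServiceMonitor are fine; only the pairs named in matchLabels must match.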
Step 2
Add a service monitor and make sure it has a matching label and namespace for the Prometheus service monitor selectors (serviceMonitorNamespaceSelector & serviceMonitorSelector).
To enable the service monitor, run:
helm install <RELEASE_NAME> prometheus-community/prometheus-mongodb-exporter \
  --set serviceMonitor.enabled=true
Sample configuration:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-name: mongodb-exporter
    meta.helm.sh/release-namespace: monitor
  generation: 1
  labels:
    app: prometheus-mongodb-exporter
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    release: kps
  name: mongodb-exporter-prometheus-mongodb-exporter
  namespace: monitor
spec:
  endpoints:
    - interval: 15s
      port: mongodb-exporter
  selector:
    matchLabels:
      app: prometheus-mongodb-exporter
      release: mongodb-exporter
Here you can see that the service monitor carries the matching label release = kps, which we specified in the Prometheus operator scraping configuration.
How do you set up an exporter: MongoDB with a sidecar exporter
Another way of scraping metrics from MongoDB is to use the Bitnami images. With the Bitnami Helm charts, the MongoDB exporter can be deployed as a sidecar container in the same pod.
To enable the sidecar:
helm upgrade --install my-release bitnami/mongodb \
  --set architecture=replicaset \
  --set metrics.enabled=true \
  --set metrics.extraFlags="--compatible-mode"
More details can be found here.
After the sidecar is enabled, Prometheus metrics are exported by the built-in container on the "/metrics" endpoint, which can be scraped by Prometheus. Once metrics are enabled, Helm automatically adds the annotations to the MongoDB pods.
Annotation:
annotations:
  prometheus.io/path: /metrics
  prometheus.io/scrape: "true"
Now Prometheus will automatically start scraping the data if pod discovery is enabled.
Prometheus configuration for pod discovery:
- job_name: "kubernetes-pods"
  kubernetes_sd_configs:
    - role: pod
Metrics
The following are handpicked metrics that provide insight into MongoDB. Metric keys differ based on the type of MongoDB exporter deployed, but the functionality is the same for all exporters.
- MongoDB is up
This shows whether the last scrape of metrics from MongoDB was able to connect to the server.
➡ The key of the exporter metric is "mongodb_up"
➡ The value of the metric is a boolean: 1 if MongoDB is up, 0 if it is down
- Too many connections
MongoDB connections depend on the resources available on the system. Unless constrained by system-wide limits, the maximum number of incoming connections supported by MongoDB is configured with the maxIncomingConnections setting. The number of connections between the applications and the database can overwhelm the ability of the server to handle requests. Therefore, it is important to monitor the number of connections.
➡ The metric mongodb_connections{state="current"} gives the total current connections on MongoDB
➡ Utilization can be calculated against mongodb_connections{state="available"}, which shows how many more connections the database server can accept
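As a sketch of that calculation (the sample numbers below are made up): current connections as a percentage of total capacity, where capacity is the sum of current and available connections:

```python
def connection_saturation(current: float, available: float) -> float:
    """Percentage of the server's connection capacity in use.
    MongoDB reports `available` as remaining headroom, so
    capacity = current + available."""
    return current / (current + available) * 100

# Hypothetical values scraped from mongodb_connections{state="..."}
current, available = 820, 180
pct = connection_saturation(current, available)
print(f"{pct:.0f}% of connections in use")  # 82% -> would trip an 80% alert
```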
- MongoDB replication lag
Replication lag is a delay between an operation on the primary, and the application of that operation from the oplog to the secondary. Replication lag can be a significant issue and can seriously affect MongoDB replica set deployments. Excessive replication lag makes “lagged” members ineligible to quickly become primary and increases the possibility that distributed read operations will be inconsistent.
➡ The metric key is mongodb_mongod_replset_member_optime_date (Prometheus community) or mongodb_replset_member_optime_date (Bitnami), depending on the exporter used
➡ The lag can be calculated by comparing the optime date between primary and secondary
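That comparison can be sketched as follows; the optime timestamps below are made up (optime_date values are timestamps, here treated as Unix seconds), and the worst-lagging secondary determines the lag:

```python
def replication_lag(primary_optime: float, secondary_optimes: list) -> float:
    """Worst-case lag in seconds: primary optime minus the oldest
    secondary optime."""
    return primary_optime - min(secondary_optimes)

# Hypothetical optime_date samples for one replica set
primary = 1_700_000_120.0
secondaries = [1_700_000_118.0, 1_700_000_105.0]

lag = replication_lag(primary, secondaries)
print(f"replication lag: {lag:.0f}s")  # 15s -> above a 10s alert threshold
```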
- MongoDB replica set status
Each member of a replica set has a state represented by a number. The numbers 1 and 2 represent primary and secondary, and any other value indicates an issue. You can find the list of states here.
➡ The metric mongodb_mongod_replset_member_state shows the member state
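For quick reference, the numeric states map to names as in MongoDB's replica set documentation; a small helper for interpreting the metric value might look like this:

```python
# Replica set member states as documented by MongoDB
REPLSET_STATES = {
    0: "STARTUP",
    1: "PRIMARY",
    2: "SECONDARY",
    3: "RECOVERING",
    5: "STARTUP2",
    6: "UNKNOWN",
    7: "ARBITER",
    8: "DOWN",
    9: "ROLLBACK",
    10: "REMOVED",
}

def is_healthy(state: int) -> bool:
    """Per the article: 1 (primary) and 2 (secondary) are healthy.
    Note that 7 (ARBITER) is also a normal state in replica sets
    that use arbiters."""
    return state in (1, 2)

print(REPLSET_STATES[8], is_healthy(8))  # DOWN False
```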
- MongoDB memory
This metric will give us insight into the target system architecture of MongoDB and current memory usage.
➡ The metric mongodb_memory includes 4 types of memory: mapped, mapped_with_journal, resident, and virtual
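The virtual-to-mapped ratio used by the memory alert later in this article can be sketched as follows (the sample values are made up):

```python
def virtual_to_mapped_ratio(memory: dict) -> float:
    """Ratio of virtual to mapped memory; the MongodbVirtualMemoryUsage
    alert later in this article treats a ratio above 3 as suspicious.
    Note mapped memory is only meaningful for MMAP-style storage."""
    return memory["virtual"] / memory["mapped"]

# Hypothetical mongodb_memory{type="..."} samples, in MB
memory = {"mapped": 1024, "virtual": 4096, "resident": 800}
print(virtual_to_mapped_ratio(memory))  # 4.0 -> above the alert threshold of 3
```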
Alerting
After digging into all the valuable metrics, this section explains in detail how we can get critical alerts for the MongoDB exporter.
PromQL is the query language of the Prometheus monitoring system. It is designed for building powerful yet simple queries for graphs, alerts, or derived time series (aka recording rules). PromQL was designed from scratch and shares little with the query languages used in other time series databases, such as SQL in TimescaleDB, InfluxQL, or Flux. More details can be found here.
Prometheus comes with a built-in Alertmanager that is responsible for sending alerts (via email, Slack, or any other supported channel) when any of the trigger conditions is met. Alerting rules allow users to define alerts based on Prometheus query expressions, built from the metrics scraped by the exporter. Click here for a good source of community-defined alerts.
A general alert looks as follows:
- alert: (Alert Name)
  expr: (Metric exported from the exporter) >/</==/<=/>= (Value)
  for: (the duration to wait between first encountering a new expression output vector element and counting an alert as firing for this element)
  labels: (a set of additional labels to be attached to the alert)
  annotations: (a set of informational labels that can be used to store longer additional information)
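Conceptually, the `for` field keeps an alert in a pending state until the expression has been continuously true for the whole duration; only then does it fire. A simplified Python illustration of that state machine (not Prometheus's actual implementation):

```python
def alert_state(breach_times: list, now: float, for_seconds: float) -> str:
    """Simplified alert lifecycle: given the timestamps (oldest first)
    at which the alert expression has been continuously true, report
    the alert's state at time `now`."""
    if not breach_times:
        return "inactive"  # expression not currently true
    if now - breach_times[0] >= for_seconds:
        return "firing"    # true for at least the full `for` duration
    return "pending"       # true, but not yet for long enough

# Expression first became true at t=100; `for: 2m` means 120 seconds
print(alert_state([100, 160], now=170, for_seconds=120))       # pending
print(alert_state([100, 160, 220], now=230, for_seconds=120))  # firing
print(alert_state([], now=230, for_seconds=120))               # inactive
```

This is why `for: 0m` alerts (like MongodbDown below) fire on the very first evaluation where the expression is true.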
Some of the recommended MongoDB exporter alerts are:
- Alert – MongoDB is Down
- alert: MongodbDown
  expr: mongodb_up == 0
  for: 0m
  labels:
    severity: critical
  annotations:
    summary: MongoDB Down (instance {{ $labels.instance }})
    description: "MongoDB instance is down\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- Alert – MongoDB has too many connections
- alert: MongodbTooManyConnections
  expr: avg by(instance) (rate(mongodb_connections{state="current"}[1m])) / avg by(instance) (sum (mongodb_connections) by (instance)) * 100 > 80
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: MongoDB too many connections (instance {{ $labels.instance }})
    description: "Too many connections (> 80%)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- Alert – MongoDB replication lag
- Prometheus community:
- alert: MongodbReplicationLag
  expr: mongodb_mongod_replset_member_optime_date{state="PRIMARY"} - ON (set) mongodb_mongod_replset_member_optime_date{state="SECONDARY"} > 10
  for: 0m
  labels:
    severity: critical
  annotations:
    summary: MongoDB replication lag (instance {{ $labels.instance }})
    description: "Mongodb replication lag is more than 10s\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- Bitnami:
- alert: MongodbReplicationLag
  expr: avg(mongodb_replset_member_optime_date{state="PRIMARY"}) - avg(mongodb_replset_member_optime_date{state="SECONDARY"}) > 10
  for: 0m
  labels:
    severity: critical
  annotations:
    summary: MongoDB replication lag (instance {{ $labels.instance }})
    description: "Mongodb replication lag is more than 10s\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- Alert – MongoDB replication status (for Prometheus community exporter)
- alert: MongodbReplicationStatus8
  expr: mongodb_mongod_replset_member_state == 8
  for: 0m
  labels:
    severity: critical
  annotations:
    summary: MongoDB replication Status 8 (instance {{ $labels.instance }})
    description: "MongoDB Replication set member, as seen from another member of the set, is unreachable\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- Alert – High memory usage
- alert: MongodbVirtualMemoryUsage
  expr: (sum(mongodb_memory{type="virtual"}) BY (instance) / sum(mongodb_memory{type="mapped"}) BY (instance)) > 3
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: MongoDB virtual memory usage (instance {{ $labels.instance }})
    description: "High memory usage\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
Additionally, here are some other useful alerts:
- Alert – MongoDB cursor timeout – happens when too many operations are running on MongoDB
- With Prometheus community exporter:
- alert: MongodbCursorsTimeouts
  expr: increase(mongodb_metrics_cursor_timed_out_total[1m]) > 100
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: MongoDB cursors timeouts (instance {{ $labels.instance }})
    description: "Too many cursors are timing out\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- With Bitnami:
- alert: MongodbCursorsTimeouts
  expr: increase(mongodb_mongod_metrics_cursor_timed_out_total[1m]) > 100
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: MongoDB cursors timeouts (instance {{ $labels.instance }})
    description: "Too many cursors are timing out\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- Alert – Too many cursors open for clients
- For the Prometheus community exporter:
- alert: MongodbNumberCursorsOpen
  expr: mongodb_mongod_metrics_cursor_open{state="total"} > 10000
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: MongoDB number cursors open (instance {{ $labels.instance }})
    description: "Too many cursors opened by MongoDB for clients (> 10k)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
- For Bitnami:
- alert: MongodbNumberCursorsOpen
  expr: mongodb_metrics_cursor_open{state="total_open"} > 10000
  for: 2m
  labels:
    severity: warning
  annotations:
    summary: MongoDB number cursors open (instance {{ $labels.instance }})
    description: "Too many cursors opened by MongoDB for clients (> 10k)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
Dashboard
Graphs are easier to understand and more user-friendly than rows of numbers. To that end, users can plot their time series data in visualized format using Grafana.
Grafana is an open-source dashboarding tool used for visualizing metrics with the help of customizable and illustrative charts and graphs. It connects very well with Prometheus and makes monitoring easy and informative. Dashboards in Grafana are made up of panels, with each panel running a PromQL query to fetch metrics from Prometheus.
Grafana supports community-driven dashboards for most widely used software, which can be imported directly from the Grafana community.
What is a Panel?
Panels are the most basic component of a dashboard and can display information in various ways, such as gauge, text, bar chart, graph, and so on. They provide information in a very interactive way. Users can view every panel separately and check the value of metrics within a specific time range.
The values on the panel are queried using PromQL, which is Prometheus Query Language. PromQL is a simple query language used to query metrics within Prometheus. It enables users to query data, aggregate and apply arithmetic functions to the metrics, and then further visualize them on panels.
Here are some examples of panels for the MongoDB exporter:
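For instance, panels along the following lines can be built from the exporter's metrics. The PromQL strings below are illustrative sketches; metric names follow the community exporter and may need adapting to your deployment:

```python
# Illustrative panel-title -> PromQL pairs for a MongoDB dashboard.
# The queries are sketches, not taken from a published dashboard.
PANELS = {
    "Availability": "mongodb_up",
    "Open connections": 'mongodb_connections{state="current"}',
    "Connection saturation %": (
        'mongodb_connections{state="current"} / '
        '(mongodb_connections{state="current"} + '
        'mongodb_connections{state="available"}) * 100'
    ),
    "Memory (virtual)": 'mongodb_memory{type="virtual"}',
}

for title, query in PANELS.items():
    print(f"{title}: {query}")
```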
Helm Chart
Helm chart to install MongoDB
If your MongoDB is not up and ready yet, you can start the MongoDB cluster using Helm:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade --install my-release bitnami/mongodb \
    --set architecture=replicaset \
    --set metrics.enabled=true \
    --set metrics.extraFlags="--compatible-mode"
With these commands, MongoDB will be up and running with a sidecar container to expose Prometheus metrics.
Please note: "--compatible-mode" is enabled explicitly to expose metrics in both the new and the old format, so that most of the community dashboards can be used without alterations.
In the case of an existing MongoDB or a MongoDB running outside of Kubernetes, we need to deploy an explicit exporter to get the metrics. For this, follow the steps below:
Installing MongoDB Exporter
The MongoDB exporter can be deployed in Kubernetes using the Helm chart. The Helm chart used for deployment is from the Prometheus community and can be found here. To deploy this Helm chart, users can either follow the steps in the above link or refer to the ones outlined below:
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install [RELEASE_NAME] prometheus-community/prometheus-mongodb-exporter
Some of the common parameters that should be changed in the values file include:
mongodb.uri: <[mongodb[+srv]://][user:pass@]host1[:port1][,host2[:port2],...][/database][?options]>
Example:
mongodb.uri: "mongodb://root:password@mongodb.default.svc.cluster.local:27017/admin?authSource=admin"
If the user wants to pass the credentials as a secret, they can create the secret and reference its name in the values file:
existingSecret.name: <Secret name>
Additional parameters can be changed based on individual needs, such as enabling and disabling collectors. All these parameters can be tuned via the values.yaml file here.
In addition to the native way of setting up Prometheus monitoring, a service monitor can be deployed (if the Prometheus operator is being used) to scrape the data from MongoDB; Prometheus then scrapes the data from the service monitor. With this approach, multiple MongoDB instances can be scraped without altering the Prometheus configuration. Every MongoDB exporter comes with its own service monitor.
In the above-mentioned chart, a service monitor can be deployed by turning it on from the values.yaml file here. By default, it is set to true.
serviceMonitor:
  enabled: true
  interval: 30s
  scrapeTimeout: 10s
  namespace:
  additionalLabels: {}
  targetLabels: []
  metricRelabelings: []
Another way of scraping metrics while having the pod discovery enabled in Prometheus is by updating the annotation section here with the following:
podAnnotations:
  prometheus.io/path: /metrics
  prometheus.io/scrape: "true"
Here is a sample values file:
mongodb:
  uri: ""

# Name of an externally managed secret (in the same namespace) containing the connection uri as key `mongodb-uri`.
# If this is provided, the value mongodb.uri is ignored.
existingSecret:
  name: ""
  key: "mongodb-uri"

nameOverride: ""
nodeSelector: {}

podAnnotations: {}
#  prometheus.io/scrape: "true"
#  prometheus.io/port: "metrics"

port: "9216"
priorityClassName: ""

service:
  labels: {}
  annotations: {}
  port: 9216
  type: ClusterIP

serviceAccount:
  create: true
  # If create is true and name is not set, then a name is generated using the
  # fullname template.
  name:

serviceMonitor:
  enabled: true
  interval: 30s
  scrapeTimeout: 10s
  namespace:
  additionalLabels: {}
  targetLabels: []
  metricRelabelings: []
This concludes our review of the exporter for MongoDB. Please feel free to reach out to us via support@nexclipper.io in case you have any questions. As always, stay tuned for more exporter reviews and tips coming soon!