Integrating Prometheus into a Node/Express Js app, using the prom-client library to monitor app metrics

Hey guys! This article gives a thorough explanation of application monitoring with Prometheus, and shows how to integrate Prometheus into a Node/Express Js application using the prom-client library.
What is Prometheus?
Prometheus is an open-source application designed for monitoring systems and applications. It collects real-time metrics, stores them in a time-series database, and alerts users of potential issues. This makes it particularly valuable for pinpointing problems during application outages.
Structure of Prometheus
Prometheus follows a modular architecture with several key components:
- Prometheus Server: This is the core component, responsible for scraping metrics from targets (applications or services being monitored), storing the data in a time-series format, and analyzing it. It also offers a user interface for querying the data.
- Data Model: Metrics are represented as time series data points, often accompanied by labels (key-value pairs) that provide additional context.
- Alertmanager (Optional): This component handles alerts generated by Prometheus rules based on metric evaluations. It can route them to various notification channels like email or chat platforms.
- Push Gateway (Optional): Useful for short-lived jobs or applications that cannot be directly scraped by Prometheus. The Pushgateway acts as an intermediary: applications push their metrics to it, and Prometheus then scrapes them from the gateway (see the sketch after this list).
- Client Libraries: These libraries simplify embedding metric collection code directly within the applications being monitored. They are available in various languages like Go, Python, and Java.
- Exporters (Optional): For services that don't natively support metric scraping, exporters act as translators, converting their metrics into a format consumable by Prometheus.
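To make the Pushgateway flow concrete, here is a minimal, hypothetical sketch of a short-lived Node Js job pushing its metrics with the prom-client library (which we'll install later in this article). It assumes a Pushgateway is already running at http://127.0.0.1:9091, the job name my_batch_job is made up for the example, and recent versions of prom-client return a promise from pushAdd.
const client = require("prom-client");
// A registry holding only this job's metrics
const register = new client.Registry();
const processed_items = new client.Counter({
  name: "batch_processed_items_total",
  help: "Number of items processed by the batch job",
  registers: [register],
});
// Points prom-client at the running Pushgateway instance
const gateway = new client.Pushgateway("http://127.0.0.1:9091", {}, register);
processed_items.inc(42);
// Pushes the registry's metrics under the job name "my_batch_job";
// the Prometheus server then scrapes them from the Pushgateway
gateway
  .pushAdd({ jobName: "my_batch_job" })
  .then(() => console.log("Metrics pushed to the Pushgateway"))
  .catch((err) => console.error("Pushing metrics failed", err));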
Integrating Prometheus in Node/Express Js app
Prometheus has client libraries for integration with many technologies and their frameworks, e.g. Python, Java, C#, etc. In this article, however, we'll go through how to set up Prometheus in a Node/Express Js application using the prom-client library (a shorter approach using the swagger-stats library will be covered in a follow-up article).
Installing Node Js runtime
To begin, we'll make sure we have Node Js and npm running on our system. To confirm that Node Js is installed, we run the following command.
node --version
We should receive an output similar to that below
v16.15.0
We also need to verify that npm is available on our system by running the command below
npm --version
You should receive an output similar to that below
8.5.5
If you have neither Node Js nor npm installed on your system, please visit the links below to see how:
https://docs.npmjs.com/downloading-and-installing-node-js-and-npm
https://nodejs.org/en/download
Creating a Node/Express Js app
Now that we have Node Js and npm installed on our system, we can go ahead and create a Node/Express Js web server.
First, we need to set up a Node Js boilerplate using npm. We can do that by running the script below to create a package.json
file in our app folder.
npm init -y
Second, we need to install the express, cors, nodemon, and dotenv libraries in our application by running the script below.
npm install express cors nodemon dotenv
Third, we need to create a file in our application folder structure that will be run by Node Js; we'll call it index.js. Afterward, we can paste the below code to create our web server.
const express = require("express");
const cors = require("cors");
const config = require("dotenv").config;
config()
const PORT = process.env.PORT
const app = express();
app.get("/", (req, res, next) => {
res.setHeader("Content-type", "text/html");
res.send("<h1>Hello world! I'm a Node/Express Js web server...</h1>");
next();
});
app.listen(PORT,
()=>{
console.log(`Server listening on port ${PORT}`)
}
)
We edit the package.json file so that nodemon automatically restarts our app when changes are made
{
"name": "YOUR APP NAME",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "nodemon index.js"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"cors": "^2.8.5",
"dotenv": "^16.4.5",
"express": "^4.19.1",
"nodemon": "^3.1.0"
}
}
After we modify our package.json file, we create a .env file within our app folder where we can store our environment variables, like the port the app should run on. Then we paste the following line into it. You can replace 5000 with any port of your choice, provided it's available; avoid adding a comment on the same line, since dotenv would treat it as part of the value.
PORT=5000
Now we can run our web server and view it on port 5000 (or on any port you specified), by executing the script below
npm start
Go to your desired web browser and enter http://localhost:5000 in the address bar. You should see something like this

Installing the prom-client library in our Node Js application
Now that we're done setting up the web server, we install the prom-client library by running the script below
npm install prom-client
Next, we import the prom-client library into our index.js file. Copy the below code and add it to the existing index.js file
const client = require("prom-client")
Then we create a new registry and collect the application's default metrics
// * REGISTERS A NEW PROMETHEUS CLIENT
const register = new client.Registry();
client.collectDefaultMetrics(
{
register: register,
prefix: "node_" // * Prefixes the default app metrics name with the specified string
}
);
Afterward, we create a new endpoint that will be called by the Prometheus server when it scrapes your application's metrics.
/**
* Gets the metrics to be fed to the Prometheus server
* @param req The express Js req object
* @param res The express Js response object
* @param next The express Js next function
*/
app.get("/metrics", async (req, res, next) => {
res.setHeader("Content-type", register.contentType);
res.send(await register.metrics());
next();
});
The index.js file should now look like this
const express = require("express");
const cors = require("cors");
const config = require("dotenv").config;
const client = require("prom-client");
config();
const PORT = process.env.PORT;
const app = express();
// * REGISTERS A NEW PROMETHEUS CLIENT
const register = new client.Registry();
client.collectDefaultMetrics({
register: register,
prefix: "node_", // * Prefixes the default app metrics name with the specified string
});
/**
* Gets the metrics to be fed to the Prometheus server
* @param req The express Js req object
* @param res The express Js response object
* @param next The express Js next function
*/
app.get("/metrics", async (req, res, next) => {
res.setHeader("Content-type", register.contentType);
res.send(await register.metrics());
next();
});
app.get("/", (req, res, next) => {
res.setHeader("Content-type", "text/html");
res.send("<h1>Hello world! I'm a Node/Express Js web server...</h1>");
next();
});
app.listen(PORT, () => {
console.log(`Server listening on port ${PORT}`);
});
Nodemon should automatically restart the web server for us. If not, press Ctrl + C in the terminal to stop the web server, then run npm start to restart it. To view the application metrics, visit http://localhost:5000/metrics. You should see something similar to the image below.

Collecting custom application metrics
The above image displays the default metrics collected by the prom-client library, e.g. the number of active handles and requests, the process CPU usage, etc. To create your own custom metrics, you've got to go a few steps further.
There are multiple custom metrics we can track in our application, e.g.
- The total number of requests made to the application
- The response rate for each HTTP request
- The application memory in use
- The application's CPU utilization
I will explain how to get each custom metric mentioned above.
The total number of requests made to the application
This tracks the total number of HTTP requests the web server has received since it was spun up. To track said metric, we create an object, a class for assigning labels to the metrics, and a counter that will be responsible for incrementing the total number of HTTP requests received, upon each new request.
// * CREATES A NEW OBJECT CONTAINING THE METRICS LABEL NAMES
const metric_label_enum = {
PATH: "path",
METHOD: "method",
STATUS_CODE: "status_code",
};
// * CREATES A NEW CLASS FOR ASSIGNING LABELS TO VARIOUS METRICS
class MetricLabelClass {
constructor(method, pathname, statusCode) {
this.method = method;
this.path = pathname;
this.status_code = statusCode;
}
}
// * The http_request counter for measuring the total no of requests made to the application
const http_request_total = new client.Counter({
name: "node_http_request_total",
help: "The total number of HTTP requests received",
labelNames: [
metric_label_enum.PATH,
metric_label_enum.METHOD,
metric_label_enum.STATUS_CODE,
],
});
The above piece of code creates a new instance of the Counter class exposed by the prom-client library.
Next, we register the metric so the prom-client library can begin tracking it
// * Registers the HTTP request counter metric
register.registerMetric(http_request_total);
Now, we create an Express Js middleware that will be responsible for incrementing the counter upon every HTTP request.
app.use((req, res, next) => {
// Gets the request URL object
const req_url = new URL(req.url, `http://${req.headers.host}`);
// Copies the original res.send function to a variable
const original_res_send_function = res.send;
// Creates a new send function that increments the http_request_total metric whenever response.send is called
const res_send_interceptor = function (body) {
// Increment the http_request_total metric
http_request_total.inc(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
// Calls the original response.send function
original_res_send_function.call(this, body);
};
// Overrides the existing response.send object/property with the function defined above
res.send = res_send_interceptor;
next();
});
The above code creates a middleware, i.e. a function that will be executed before any incoming HTTP request is processed by the specified route. N.B.: It must be registered before the route handlers. The req_url variable stores the URL of the incoming HTTP request. The original_res_send_function variable stores the original function held in the res.send property. The res_send_interceptor overrides the existing res.send function and increments the total number of HTTP requests made to the web server. Note that it also keeps track of the request method, pathname/endpoint, and status code. These are called labels and are used to filter metrics of the same name in Prometheus. For instance, if a GET request is made to the /login endpoint with a response code of 200, and a POST request is made to the /signup endpoint with a response code of 400, they will both be stored under the same metric name, but with different labels: the first request will be stored as node_http_request_total{path="/login", status_code="200", method="GET"}, while the second will be stored as node_http_request_total{path="/signup", status_code="400", method="POST"}.
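A quick aside on prom-client's API: Counter.inc() accepts a plain object of label values, and the MetricLabelClass instance works only because its properties mirror the declared label names. A minimal sketch of the equivalent call with an object literal (the label values here are just examples):
// Equivalent to passing a MetricLabelClass instance: prom-client reads the
// properties that match the declared labelNames (path, method, status_code)
http_request_total.inc(
  { path: "/login", method: "GET", status_code: 200 },
  1 // optional increment amount, defaults to 1
);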
At the end, your index.js file should look similar to that below
const express = require("express");
const cors = require("cors");
const config = require("dotenv").config;
const client = require("prom-client");
config();
const PORT = process.env.PORT;
const app = express();
// * CREATES A NEW OBJECT CONTAINING THE METRICS LABEL NAMES
const metric_label_enum = {
PATH: "path",
METHOD: "method",
STATUS_CODE: "status_code",
};
// * CREATES A NEW CLASS FOR ASSIGNING LABELS TO VARIOUS METRICS
class MetricLabelClass {
constructor(method, pathname, statusCode) {
this.method = method;
this.path = pathname;
this.status_code = statusCode;
}
}
// * REGISTERS A NEW PROMETHEUS CLIENT
const register = new client.Registry();
// * The http_request counter for measuring the total no of requests made to the application
const http_request_total = new client.Counter({
name: "node_http_request_total",
help: "The total number of HTTP requests received",
labelNames: [
metric_label_enum.PATH,
metric_label_enum.METHOD,
metric_label_enum.STATUS_CODE,
],
});
client.collectDefaultMetrics({
register: register,
prefix: "node_", // * Prefixes the default app metrics name with the specified string
});
// * Registers the HTTP request counter metric
register.registerMetric(http_request_total);
app.use((req, res, next) => {
// Gets the request URL object
const req_url = new URL(req.url, `http://${req.headers.host}`);
// Copies the original res.send function to a variable
const original_res_send_function = res.send;
// Creates a new send function that increments the http_request_total metric whenever response.send is called
const res_send_interceptor = function (body) {
// Increment the http_request_total metric
http_request_total.inc(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
// Calls the original response.send function
original_res_send_function.call(this, body);
};
// Overrides the existing response.send object/property with the function defined above
res.send = res_send_interceptor;
next();
});
/**
* Gets the metrics to be fed to the Prometheus server
* @param req The express Js req object
* @param res The express Js response object
* @param next The express Js next function
*/
app.get("/metrics", async (req, res, next) => {
res.setHeader("Content-type", register.contentType);
res.send(await register.metrics());
next();
});
app.get("/", (req, res, next) => {
res.setHeader("Content-type", "text/html");
res.send("<h1>Hello world! I'm a Node/Express Js web server...</h1>");
next();
});
app.listen(PORT, () => {
console.log(`Server listening on port ${PORT}`);
});
Next, visit http://localhost:5000/ and refresh the page 5-6 times, then visit http://localhost:5000/metrics and scroll to the bottom of the page. You should see something similar to the one below, with the newly registered metric node_http_request_total
The response rate for each HTTP request
Next, we're going to look at how to get the time, in seconds, that each request takes to complete.
We begin by creating a new instance of the prom-client Histogram class, which will store the durations of the various requests made to the server in buckets. This will help us calculate things like the 95th percentile of the HTTP response time.
// * The http_response rate histogram for measuring the response rates for each http request
const http_response_rate_histogram = new client.Histogram({
name: "node_http_duration",
labelNames: [
metric_label_enum.PATH,
metric_label_enum.METHOD,
metric_label_enum.STATUS_CODE,
],
help: "The duration of HTTP requests in seconds",
buckets: [
0.0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3,
1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 10
],
});
In the piece of code above, we registered multiple buckets ranging from 0.0 (0 seconds) to 10 (10 seconds). These buckets store the HTTP request durations cumulatively: for example, if an HTTP request takes 4 seconds to process, the observation is counted in every bucket whose upper bound is at least 4 seconds, i.e. the 4.0, 4.5, 5.0, and 10 buckets (plus the implicit +Inf bucket).
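To make the bucketing concrete, here is a small, hypothetical sketch that records a 4-second observation directly with the histogram's observe() method (the middleware below achieves the same thing via startTimer(), which measures the elapsed time for us):
// Records a single 4-second observation for a GET / request that returned 200.
// Because Prometheus histogram buckets are cumulative, this increments the
// buckets with upper bounds 4.0, 4.5, 5.0 and 10 (plus the implicit +Inf bucket),
// as well as the histogram's _sum and _count series.
http_response_rate_histogram.observe(
  { path: "/", method: "GET", status_code: 200 },
  4 // duration in seconds
);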
Next, we register the metric, by adding the below code
// * Registers the HTTP response rate metric
register.registerMetric(http_response_rate_histogram);
Then, we modify the Express Js middleware we created earlier
app.use((req, res, next) => {
// Gets the request URL object
const req_url = new URL(req.url, `http://${req.headers.host}`);
// Starts the prom-client histogram timer for the request
const endTimer = http_response_rate_histogram.startTimer();
// Copies the original res.send function to a variable
const original_res_send_function = res.send;
// Creates a new send function that ends the timer and increments the http_request_total metric whenever response.send is called
const res_send_interceptor = function (body) {
// Ends the histogram timer for the request
const timer = endTimer(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
console.log(`HTTP request took ${timer} seconds to process`)
// Increment the http_request_total metric
http_request_total.inc(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
// Calls the original response.send function
original_res_send_function.call(this, body);
};
// Overrides the existing response.send object/property with the function defined above
res.send = res_send_interceptor;
next();
});
The above code still contains the same middleware from the previous section, except for a few changes: the code responsible for starting and ending the histogram's timer for each HTTP request was added.
Your index.js file should now look similar to the code below
const express = require("express");
const cors = require("cors");
const config = require("dotenv").config;
const client = require("prom-client");
config();
const PORT = process.env.PORT;
const app = express();
// * CREATES A NEW OBJECT CONTAINING THE METRICS LABEL NAMES
const metric_label_enum = {
PATH: "path",
METHOD: "method",
STATUS_CODE: "status_code",
};
// * CREATES A NEW CLASS FOR ASSIGNING LABELS TO VARIOUS METRICS
class MetricLabelClass {
constructor(method, pathname, statusCode) {
this.method = method;
this.path = pathname;
this.status_code = statusCode;
}
}
// * REGISTERS A NEW PROMETHEUS CLIENT
const register = new client.Registry();
// * The http_request counter for measuring the total no of requests made to the application
const http_request_total = new client.Counter({
name: "node_http_request_total",
help: "The total number of HTTP requests received",
labelNames: [
metric_label_enum.PATH,
metric_label_enum.METHOD,
metric_label_enum.STATUS_CODE,
],
});
// * The http_response rate histogram for measuring the response rates for each http request
const http_response_rate_histogram = new client.Histogram({
name: "node_http_duration",
labelNames: [
metric_label_enum.PATH,
metric_label_enum.METHOD,
metric_label_enum.STATUS_CODE,
],
help: "The duration of HTTP requests in seconds",
buckets: [
0.0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3,
1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 10
],
});
client.collectDefaultMetrics({
register: register,
prefix: "node_", // * Prefixes the default app metrics name with the specified string
});
// * Registers the HTTP request counter metric
register.registerMetric(http_request_total);
// * Registers the HTTP response rate metric
register.registerMetric(http_response_rate_histogram);
app.use((req, res, next) => {
// Gets the request URL object
const req_url = new URL(req.url, `http://${req.headers.host}`);
// Starts the prom-client histogram timer for the request
const endTimer = http_response_rate_histogram.startTimer();
// Copies the original res.send function to a variable
const original_res_send_function = res.send;
// Creates a new send function that ends the timer and increments the http_request_total metric whenever response.send is called
const res_send_interceptor = function (body) {
// Ends the histogram timer for the request
const timer = endTimer(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
console.log(`HTTP request took ${timer} seconds to process`);
// Increment the http_request_total metric
http_request_total.inc(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
// Calls the original response.send function
original_res_send_function.call(this, body);
};
// Overrides the existing response.send object/property with the function defined above
res.send = res_send_interceptor;
next();
});
/**
* Gets the metrics to be fed to the Prometheus server
* @param req The express Js req object
* @param res The express Js response object
* @param next The express Js next function
*/
app.get("/metrics", async (req, res, next) => {
res.setHeader("Content-type", register.contentType);
res.send(await register.metrics());
next();
});
app.get("/", (req, res, next) => {
res.setHeader("Content-type", "text/html");
res.send("<h1>Hello world! I'm a Node/Express Js web server...</h1>");
next();
});
app.listen(PORT, () => {
console.log(`Server listening on port ${PORT}`);
});
Finally, visit http://localhost:5000 in your browser, reload the page 2 or more times, and then visit http://localhost:5000/metrics. You should see an output similar to that below
The application memory in use
Next, we'll keep track of the application's memory currently in use.
We begin by importing the memoryUsage function from the Node Js process module, creating a new instance of the prom-client Gauge class, and registering the new metric. Afterward, we modify the Express Js middleware by adding a piece of code responsible for recording the application's memory in use on every request made to the web server.
const memoryUsage = require("process").memoryUsage
// * The node_js memory gauge for measuring the application's memory in use
const nodejs_memory = new client.Gauge({
name: "node_memory_usage_bytes",
help: "Current memory usage of the Node.js process in bytes",
});
// * Registers the Node Js memory gauge metric
register.registerMetric(nodejs_memory);
app.use((req, res, next) => {
// Gets the request URL object
const req_url = new URL(req.url, `http://${req.headers.host}`);
// Starts the prom-client histogram timer for the request
const endTimer = http_response_rate_histogram.startTimer();
// Collects the memory usage (resident set size) before processing the request
const used_memory_before = memoryUsage().rss;
// Copies the original res.send function to a variable
const original_res_send_function = res.send;
// Creates a new send function that ends the timer and increments the http_request_total metric whenever response.send is called
const res_send_interceptor = function (body) {
// Ends the histogram timer for the request
const timer = endTimer(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
console.log(`HTTP request took ${timer} seconds to process`);
// Collects the memory usage after processing the request
const used_memory_after = memoryUsage().rss;
// Increment the http_request_total metric
http_request_total.inc(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
// Update the nodejs_memory gauge with the difference in memory usage
nodejs_memory.set(used_memory_after - used_memory_before);
// Calls the original response.send function
original_res_send_function.call(this, body);
};
// Overrides the existing response.send object/property with the function defined above
res.send = res_send_interceptor;
next();
});
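As a side note, process.memoryUsage() returns several fields (rss, heapTotal, heapUsed, external), and the middleware above records the difference in rss around each request. If you'd rather have the gauge reflect the absolute memory footprint, as its help text suggests, a minimal alternative sketch (an assumption on my part, not part of the original setup) is to sample rss on an interval:
// Sets the gauge to the process's current resident set size in bytes,
// sampled every 10 seconds (an arbitrary interval chosen for the example)
setInterval(() => {
  nodejs_memory.set(memoryUsage().rss);
}, 10000);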
The final code in the index.js file should look similar to that below
const express = require("express");
const cors = require("cors");
const config = require("dotenv").config;
const client = require("prom-client");
const memoryUsage = require("process").memoryUsage;
config();
const PORT = process.env.PORT;
const app = express();
// * CREATES A NEW OBJECT CONTAINING THE METRICS LABEL NAMES
const metric_label_enum = {
PATH: "path",
METHOD: "method",
STATUS_CODE: "status_code",
};
// * CREATES A NEW CLASS FOR ASSIGNING LABELS TO VARIOUS METRICS
class MetricLabelClass {
constructor(method, pathname, statusCode) {
this.method = method;
this.path = pathname;
this.status_code = statusCode;
}
}
// * REGISTERS A NEW PROMETHEUS CLIENT
const register = new client.Registry();
// * The http_request counter for measuring the total no of requests made to the application
const http_request_total = new client.Counter({
name: "node_http_request_total",
help: "The total number of HTTP requests received",
labelNames: [
metric_label_enum.PATH,
metric_label_enum.METHOD,
metric_label_enum.STATUS_CODE,
],
});
// * The http_response rate histogram for measuring the response rates for each http request
const http_response_rate_histogram = new client.Histogram({
name: "node_http_duration",
labelNames: [
metric_label_enum.PATH,
metric_label_enum.METHOD,
metric_label_enum.STATUS_CODE,
],
help: "The duration of HTTP requests in seconds",
buckets: [
0.0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3,
1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 10,
],
});
// * The node_js memory gauge for measuring the application's memory in use
const nodejs_memory = new client.Gauge({
name: "node_memory_usage_bytes",
help: "Current memory usage of the Node.js process in bytes",
});
client.collectDefaultMetrics({
register: register,
prefix: "node_", // * Prefixes the default app metrics name with the specified string
});
// * Registers the HTTP request counter metric
register.registerMetric(http_request_total);
// * Registers the HTTP response rate metric
register.registerMetric(http_response_rate_histogram);
// * Registers the Node Js memory gauge metric
register.registerMetric(nodejs_memory);
app.use((req, res, next) => {
// Gets the request URL object
const req_url = new URL(req.url, `http://${req.headers.host}`);
// Starts the prom-client histogram timer for the request
const endTimer = http_response_rate_histogram.startTimer();
// Collects the memory usage (resident set size) before processing the request
const used_memory_before = memoryUsage().rss;
// Copies the original res.send function to a variable
const original_res_send_function = res.send;
// Creates a new send function that ends the timer and increments the http_request_total metric whenever response.send is called
const res_send_interceptor = function (body) {
// Ends the histogram timer for the request
const timer = endTimer(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
console.log(`HTTP request took ${timer} seconds to process`);
// Collects the memory usage after processing the request
const used_memory_after = memoryUsage().rss;
// Increment the http_request_total metric
http_request_total.inc(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
// Update the nodejs_memory gauge with the difference in memory usage
nodejs_memory.set(used_memory_after - used_memory_before);
// Calls the original response.send function
original_res_send_function.call(this, body);
};
// Overrides the existing response.send object/property with the function defined above
res.send = res_send_interceptor;
next();
});
/**
* Gets the metrics to be fed to the Prometheus server
* @param req The express Js req object
* @param res The express Js response object
* @param next The express Js next function
*/
app.get("/metrics", async (req, res, next) => {
res.setHeader("Content-type", register.contentType);
res.send(await register.metrics());
next();
});
app.get("/", (req, res, next) => {
res.setHeader("Content-type", "text/html");
res.send("<h1>Hello world! I'm a Node/Express Js web server...</h1>");
next();
});
app.listen(PORT, () => {
console.log(`Server listening on port ${PORT}`);
});
Lastly, visit http://localhost:5000 in your browser, reload the page, and visit http://localhost:5000/metrics. You should see an output similar to that below

The application's CPU utilization
Finally, for the last metric on this list, we'll monitor the amount of CPU currently in use.
The os module is built into Node Js, so there is nothing to install. We begin by importing the cpus function from the os module
const cpus = require("os").cpus
Afterward, we create a function responsible for calculating how much of the CPU is currently being used. Note that os.cpus() reports system-wide CPU times, so this measures the machine's overall utilization rather than only this process's.
// Stores the idle and total CPU times from the previous measurement
let previous_cpu_times = { idle: 0, total: 0 };
/**
* Calculates the CPU usage (in percent) since the previous call
* @returns number
*/
const calculate_cpu_usage = () => {
// Get the current per-core CPU time data
const cpusData = cpus();
// Sum the idle CPU time across all cores
const idleTime = cpusData.reduce((acc, cpu) => acc + cpu.times.idle, 0);
// Sum the total CPU time (user + nice + sys + idle + irq) across all cores
const totalTime = cpusData.reduce(
(acc, cpu) => acc + Object.values(cpu.times).reduce((a, b) => a + b, 0),
0
);
// Work out how much of the CPU time elapsed since the last call was spent idle
const idleDelta = idleTime - previous_cpu_times.idle;
const totalDelta = totalTime - previous_cpu_times.total;
// Store the current totals for the next calculation
previous_cpu_times = { idle: idleTime, total: totalTime };
// Percentage of non-idle CPU time since the last measurement
return totalDelta > 0 ? 100 - (idleDelta / totalDelta) * 100 : 0;
};
Then, we create a new instance of the Gauge class offered by the prom-client library and register it alongside the other metrics.
// * The node_js CPU usage gauge for measuring the application's CPU utilization
const nodejs_cpu_usage = new client.Gauge({
name: "node_cpu_usage_percent",
help: "CPU utilization of the Node.js process in percentage",
});
// * Registers the Node Js cpu usage gauge metric
register.registerMetric(nodejs_cpu_usage);
Next, we modify the Express Js middleware by adding a few lines of code responsible for updating the prom-client Gauge with the current CPU usage on every HTTP request.
app.use((req, res, next) => {
// Gets the request URL object
const req_url = new URL(req.url, `http://${req.headers.host}`);
// Starts the prom-client histogram timer for the request
const endTimer = http_response_rate_histogram.startTimer();
// Collects the memory usage (resident set size) before processing the request
const used_memory_before = memoryUsage().rss;
// Collects the CPU usage before processing the request
const used_cpu_before = calculate_cpu_usage();
// Copies the original res.send function to a variable
const original_res_send_function = res.send;
// Creates a new send function that ends the timer and increments the http_request_total metric whenever response.send is called
const res_send_interceptor = function (body) {
// Ends the histogram timer for the request
const timer = endTimer(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
console.log(`HTTP request took ${timer} seconds to process`);
// Collects the memory usage after processing the request
const used_memory_after = memoryUsage().rss;
// Collects the CPU usage after processing the request
const used_cpu_after = calculate_cpu_usage();
// Increment the http_request_total metric
http_request_total.inc(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
// Update the nodejs_memory gauge with the difference in memory usage
nodejs_memory.set(used_memory_after - used_memory_before);
// Update the nodejs_cpu_usage gauge with the difference in CPU usage
nodejs_cpu_usage.set(used_cpu_after - used_cpu_before);
// Calls the original response.send function
original_res_send_function.call(this, body);
};
// Overrides the existing response.send object/property with the function defined above
res.send = res_send_interceptor;
next();
});
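One caveat: because os.cpus() reports system-wide CPU times, the gauge above reflects the whole machine's utilization rather than this Node Js process alone. If you only care about the process itself, a minimal alternative sketch (hypothetical, not part of the original setup) could use process.cpuUsage(), which reports the process's user and system CPU time in microseconds:
// Tracks this process's own CPU utilization between two samples
let last_cpu_usage = process.cpuUsage();
let last_sample_time = process.hrtime.bigint();

const sample_process_cpu_percent = () => {
  // CPU time (user + system) consumed since the last sample, in microseconds
  const cpu_delta = process.cpuUsage(last_cpu_usage);
  // Wall-clock time elapsed since the last sample, converted from nanoseconds to microseconds
  const now = process.hrtime.bigint();
  const elapsed_us = Number(now - last_sample_time) / 1000;
  last_cpu_usage = process.cpuUsage();
  last_sample_time = now;
  // Percentage of a single CPU core used by this process over the interval
  return elapsed_us > 0 ? ((cpu_delta.user + cpu_delta.system) / elapsed_us) * 100 : 0;
};

// Example: update the gauge every 10 seconds instead of on every request
setInterval(() => nodejs_cpu_usage.set(sample_process_cpu_percent()), 10000);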
Your final index.js file should look similar to that below
const express = require("express");
const cors = require("cors");
const config = require("dotenv").config;
const client = require("prom-client");
const memoryUsage = require("process").memoryUsage;
const cpus = require("os").cpus;
config();
const PORT = process.env.PORT;
const app = express();
// * CREATES A NEW OBJECT CONTAINING THE METRICS LABEL NAMES
const metric_label_enum = {
PATH: "path",
METHOD: "method",
STATUS_CODE: "status_code",
};
// * CREATES A NEW CLASS FOR ASSIGNING LABELS TO VARIOUS METRICS
class MetricLabelClass {
constructor(method, pathname, statusCode) {
this.method = method;
this.path = pathname;
this.status_code = statusCode;
}
}
// * REGISTERS A NEW PROMETHEUS CLIENT
const register = new client.Registry();
// * The http_request counter for measuring the total no of requests made to the application
const http_request_total = new client.Counter({
name: "node_http_request_total",
help: "The total number of HTTP requests received",
labelNames: [
metric_label_enum.PATH,
metric_label_enum.METHOD,
metric_label_enum.STATUS_CODE,
],
});
// * The http_response rate histogram for measuring the response rates for each http request
const http_response_rate_histogram = new client.Histogram({
name: "node_http_duration",
labelNames: [
metric_label_enum.PATH,
metric_label_enum.METHOD,
metric_label_enum.STATUS_CODE,
],
help: "The duration of HTTP requests in seconds",
buckets: [
0.0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3,
1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 10,
],
});
// * The node_js memory gauge for measuring the application's memory in use
const nodejs_memory = new client.Gauge({
name: "node_memory_usage_bytes",
help: "Current memory usage of the Node.js process in bytes",
});
// * The node_js CPU usage gauge for measuring the application's CPU utilization
const nodejs_cpu_usage = new client.Gauge({
name: "node_cpu_usage_percent",
help: "CPU utilization of the Node.js process in percentage",
});
client.collectDefaultMetrics({
register: register,
prefix: "node_", // * Prefixes the default app metrics name with the specified string
});
// * Registers the HTTP request counter metric
register.registerMetric(http_request_total);
// * Registers the HTTP response rate metric
register.registerMetric(http_response_rate_histogram);
// * Registers the Node Js memory gauge metric
register.registerMetric(nodejs_memory);
// * Registers the Node Js cpu usage gauge metric
register.registerMetric(nodejs_cpu_usage);
// Stores the idle and total CPU times from the previous measurement
let previous_cpu_times = { idle: 0, total: 0 };
/**
* Calculates the CPU usage (in percent) since the previous call
* @returns number
*/
const calculate_cpu_usage = () => {
// Get the current per-core CPU time data
const cpusData = cpus();
// Sum the idle CPU time across all cores
const idleTime = cpusData.reduce((acc, cpu) => acc + cpu.times.idle, 0);
// Sum the total CPU time (user + nice + sys + idle + irq) across all cores
const totalTime = cpusData.reduce(
(acc, cpu) => acc + Object.values(cpu.times).reduce((a, b) => a + b, 0),
0
);
// Work out how much of the CPU time elapsed since the last call was spent idle
const idleDelta = idleTime - previous_cpu_times.idle;
const totalDelta = totalTime - previous_cpu_times.total;
// Store the current totals for the next calculation
previous_cpu_times = { idle: idleTime, total: totalTime };
// Percentage of non-idle CPU time since the last measurement
return totalDelta > 0 ? 100 - (idleDelta / totalDelta) * 100 : 0;
};
app.use((req, res, next) => {
// Gets the request URL object
const req_url = new URL(req.url, `http://${req.headers.host}`);
// Starts the prom-client histogram timer for the request
const endTimer = http_response_rate_histogram.startTimer();
// Collects the memory usage (resident set size) before processing the request
const used_memory_before = memoryUsage().rss;
// Collects the CPU usage before processing the request
const used_cpu_before = calculate_cpu_usage();
// Copies the original res.send function to a variable
const original_res_send_function = res.send;
// Creates a new send function that ends the timer and increments the http_request_total metric whenever response.send is called
const res_send_interceptor = function (body) {
// Ends the histogram timer for the request
const timer = endTimer(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
console.log(`HTTP request took ${timer} seconds to process`);
// Collects the memory usage after processing the request
const used_memory_after = memoryUsage().rss;
// Collects the CPU usage after processing the request
const used_cpu_after = calculate_cpu_usage();
// Increment the http_request_total metric
http_request_total.inc(
new MetricLabelClass(req.method, req_url.pathname, res.statusCode)
);
// Update the nodejs_memory gauge with the difference in memory usage
nodejs_memory.set(used_memory_after - used_memory_before);
// Update the nodejs_cpu_usage gauge with the difference in CPU usage
nodejs_cpu_usage.set(used_cpu_after - used_cpu_before);
// Calls the original response.send function
original_res_send_function.call(this, body);
};
// Overrides the existing response.send object/property with the function defined above
res.send = res_send_interceptor;
next();
});
/**
* Gets the metrics to be fed to the Prometheus server
* @param req The express Js req object
* @param res The express Js response object
* @param next The express Js next function
*/
app.get("/metrics", async (req, res, next) => {
res.setHeader("Content-type", register.contentType);
res.send(await register.metrics());
next();
});
app.get("/", (req, res, next) => {
res.setHeader("Content-type", "text/html");
res.send("<h1>Hello world! I'm a Node/Express Js web server...</h1>");
next();
});
app.listen(PORT, () => {
console.log(`Server listening on port ${PORT}`);
});
Lastly, visit http://localhost:5000 in your browser, reload the page, and visit http://localhost:5000/metrics. You should see an output similar to that below

Conclusion
Thanks for reading this article; I hope it helps you on your journey to Node Js application monitoring with Prometheus. I'll be sure to write a follow-up article on how to implement these metrics using a shorter approach via the swagger-stats library. For those who still find it hard to use Prometheus alongside Node/Express Js, you can explore the links below:
https://www.techtarget.com/searchapparchitecture/tip/5-application-performance-metrics-all-dev-teams-should-track (other useful metrics to track)
https://github.com/RisingStack/example-prometheus-nodejs (analyzing, and displaying said metrics on the Prometheus client)