It is common knowledge that Node.js runs your JavaScript on a single thread, so a single Node process cannot, by itself, spread your application code across multiple CPU cores. It sounds like a limitation, but it also gives you full freedom over how many processes you run and how many cores you want to utilize.
There are several ways to scale Node across the whole machine: run multiple processes, use Node to spawn subprocesses, or use containerization software. We will focus on the latter, and see how we can also load-balance requests between all the instances with a quite primitive Nginx configuration.
Overview
It is not rare to have a machine with 4 cores, or 6, or 12… Especially if that’s a server machine.
It might be tempting to simply run multiple processes, but we will use Docker as our containerization software instead. It makes it easier to version-control your deployment and to automate the deployment process. In our setup, we will have 1 Nginx container and X containers of our Node app (depending on the number of cores your CPU has).
In real-life, production-grade applications, you would build your own Node image with your app, all related modules, binaries and configuration scripts, and then wrap the whole process into a CI/CD pipeline – but that’s a different scenario. Here we will keep it simple.
Node application code
To keep our example simple, we will create a single index.js file with the following code:
const http = require('http');

// PORT and HOST come from docker-compose; HOSTNAME is set by Docker itself
const port = process.env.PORT || 3000;
const host = process.env.HOST || 'app';
const hostname = process.env.HOSTNAME;

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(`Hello from server ${hostname}\n`);
});

// Log only once the server is actually accepting connections
server.listen(port, host, () => {
  console.log(`Server running at ${host}:${port}/`);
});
You will notice that we are listening on the host app, but we also read the hostname from the environment variables – it is provided automatically by Docker. This will help us see which container is responding to a given request.
Now, create a folder for our project and prepare the following structure:
balancer
|- app
|- index.js
|- package.json
|- Dockerfile
|- docker-compose.yml
|- nginx.conf
We will look into Nginx and Docker configuration next.
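The Dockerfile we will write later copies package.json before installing dependencies, so app/package.json needs to exist. A minimal sketch is enough (the name is arbitrary; no dependencies are required, as the app only uses Node's built-in http module):

```json
{
  "name": "balancer-app",
  "version": "1.0.0",
  "private": true,
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  }
}
```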
Nginx
I’m going to create a simple proxy_pass and provide a list of app instances in the upstream. I will use the “least connected” (least_conn) method of choosing the host, which means the server with the fewest active connections will be chosen. Additionally, we add resolver 127.0.0.11 valid=30s;, which points to Docker’s built-in DNS service. It makes sure that Nginx knows about all app instances.
events {
    worker_connections 4096;
}

http {
    upstream app {
        least_conn;
        server 0.0.0.1; # placeholder
    }

    server {
        listen 80;
        server_name localhost;

        resolver 127.0.0.11 valid=30s;
        set $upstream app;

        location / {
            proxy_pass http://$upstream:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
As you might’ve guessed, this goes into nginx.conf. Every scaled container will receive an index suffix, i.e. app_1, app_2, app_3 and app_4.
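To make the “least connected” behaviour concrete, here is a toy model of the selection logic in plain Node. This illustrates the idea only – it is not how Nginx is actually implemented:

```javascript
// Toy model of least_conn: track active connections per backend
// and always hand the next request to the least busy one.
const backends = [
  { name: 'app_1', active: 0 },
  { name: 'app_2', active: 0 },
  { name: 'app_3', active: 0 },
  { name: 'app_4', active: 0 },
];

function pickLeastConnected() {
  // reduce() keeps whichever backend currently has the fewest active connections
  return backends.reduce((least, b) => (b.active < least.active ? b : least));
}

// Simulate two overlapping requests: the second one lands on a
// different backend because app_1 is still busy with the first.
const first = pickLeastConnected();
first.active += 1;
const second = pickLeastConnected();
second.active += 1;

console.log(first.name, second.name); // app_1 app_2
```

Compared to plain round-robin, this keeps a slow request on one backend from delaying traffic that other, idle backends could serve.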
Now we will deploy our Node application to 4 containers.
Docker configuration
Create a Dockerfile in the project’s root.
FROM node:18
WORKDIR /app
COPY app/package.json /app
RUN npm install --production
COPY app /app
CMD [ "node", "index.js" ]
This simple Dockerfile instructs Docker to use Node 18 as a base, set the working directory to /app, copy package.json, install production dependencies, copy all other files from the app folder, and finally set the image startup command to node index.js.
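One optional addition, in case you ever run npm install locally inside the app folder: a .dockerignore file next to the Dockerfile keeps your local node_modules out of the build context, so the final COPY app /app step cannot overwrite the dependencies that were installed inside the image:

```
app/node_modules
npm-debug.log
```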
Now the final step: glue everything together in docker-compose.yml:
version: '3.9'

services:
  app:
    build: .
    environment:
      - PORT=3000
    deploy:
      replicas: 4

  nginx:
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
Docker Compose uses .yml configuration files to run multiple containers. Additionally, for our app service, we provide replicas: 4 under the deploy key. This instructs Docker to create 4 containers from the same image.
Now, the last step is to run Compose: docker-compose up --build. I pass --build to make sure the images are rebuilt after any changes. If you are on a Mac and want to expose port 80, you may need to run it with sudo.
When everything is fine, your output will look something like this:
docker-compose up --build
Building app
[+] Building 1.2s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 36B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:18 1.1s
=> [1/5] FROM docker.io/library/node:18@sha256:8d9a875ee427897ef245302e31e2319385b092f1c3368b497e89790f240368f5 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 92B 0.0s
=> CACHED [2/5] WORKDIR /app 0.0s
=> CACHED [3/5] COPY app/package.json /app 0.0s
=> CACHED [4/5] RUN npm install --production 0.0s
=> CACHED [5/5] COPY app /app 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:b2dcd17b112cac757a0ea68596def3c1418416fcead260b8245562c17ffcad40 0.0s
=> => naming to docker.io/library/balancer_app 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Starting balancer_app_1 ... done
Starting balancer_app_2 ... done
Starting balancer_app_3 ... done
Starting balancer_app_4 ... done
Recreating balancer_nginx_1 ... done
Attaching to balancer_app_1, balancer_app_2, balancer_app_3, balancer_app_4, balancer_nginx_1
app_1 | Server running at app:3000/
app_2 | Server running at app:3000/
app_3 | Server running at app:3000/
app_4 | Server running at app:3000/
nginx_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx_1 | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx_1 | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
As you can see, it created 4 app instances and 1 Nginx instance. Let’s confirm it with docker ps:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eca9ebd87e5d nginx:latest "/docker-entrypoint.…" 2 minutes ago Up 2 minutes 0.0.0.0:80->80/tcp balancer_nginx_1
7d6591b686ab balancer_app "docker-entrypoint.s…" 17 hours ago Up 2 minutes balancer_app_4
14646923eb00 balancer_app "docker-entrypoint.s…" 17 hours ago Up 2 minutes balancer_app_2
cef2baa59b91 balancer_app "docker-entrypoint.s…" 17 hours ago Up 2 minutes balancer_app_3
231e3063c7d1 balancer_app "docker-entrypoint.s…" 17 hours ago Up 2 minutes balancer_app_1
That’s it. Our project can now occupy all CPU cores and serve many more requests per second.
Resources
- Nginx load balancing: http://nginx.org/en/docs/http/load_balancing.html
- Nginx Docker image: https://hub.docker.com/_/nginx
- Node Docker image guide: https://docs.docker.com/language/nodejs/
- Docker container scale: https://docs.docker.com/engine/reference/commandline/service_scale/