Docker and Docker Compose
Docker allows developers to package their applications along with all dependencies into a single unit called a container. This ensures consistency across different environments, such as development, testing, and production, reducing the "it works on my machine" problem.
Install Docker: https://docs.docker.com/engine/install/
Dockerfile
A Dockerfile is a text file containing the instructions for building a Docker image, which in turn serves as a blueprint for launching Docker containers.
Add Dockerfile:
# Use the official Node.js image with specified version
FROM node:20.11-alpine3.18
# Set the working directory inside the container
WORKDIR /usr/src/app
# Copy package.json and package-lock.json to the working directory
COPY package*.json ./
# Install project dependencies
RUN npm install
# Copy the source code to the working directory
COPY src/ /usr/src/app/src
COPY tsconfig.json ./
# Build TypeScript code
RUN npm run build
# Expose the required ports
EXPOSE 3000
# Command to run the application
CMD ["node", "dist/main.js"]
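A .dockerignore file keeps unnecessary files out of the build context, which speeds up builds and avoids copying local artifacts into the image. A minimal sketch (the entries are assumptions; adjust them for your repo):

```text
# Dependencies are installed inside the image by `RUN npm install`
node_modules
# Build output is produced inside the image by `RUN npm run build`
dist
# Local environment and VCS files
.env
.git
```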
Alpine Linux is a lightweight and secure Linux distribution that is particularly well-suited for containerized environments, embedded systems, and resource-constrained environments where efficiency and security are paramount.
Build your Docker image like this:
docker build . -t lrbooks-nodejs:latest
Note:
Use sudo docker ... in case you have any permission issues.
Check your images:
docker images
Result:
REPOSITORY       TAG      IMAGE ID       CREATED          SIZE
lrbooks-nodejs   latest   47e247df12a8   46 seconds ago   215MB
...
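You can sanity-check the image by running it directly before wiring up Compose. Note that the app still expects its config file and databases, so on its own it may exit with a connection error; the Compose setup below provides those dependencies.

```shell
# Run the container, mapping host port 3000 to the container's port 3000;
# --rm removes the container once it stops
docker run --rm -p 3000:3000 lrbooks-nodejs:latest
```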
Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to use a YAML file to configure your application's services, networks, and volumes, and then spin up all the containers required to run your application with a single command.
Note:
The easiest and recommended way to get Docker Compose is to install Docker Desktop. Docker Desktop includes Docker Compose along with Docker Engine and Docker CLI, which are Compose prerequisites. Install Compose if needed: https://docs.docker.com/compose/install/
Add compose/docker-compose.yml:
services:
  lr-rest-books:
    image: lrbooks-nodejs:latest
    ports:
      - 3000:3000
    volumes:
      - ./config.json:/usr/src/app/config.json
    depends_on:
      mysql:
        condition: service_healthy
      redis:
        condition: service_started
      mongo:
        condition: service_started
  redis:
    image: docker.io/bitnami/redis:7.0
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    ports:
      - 6379:6379
  mysql:
    image: docker.io/bitnami/mysql:5.7.43
    environment:
      - MYSQL_DATABASE=lr_book
      - MYSQL_USER=test_user
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    ports:
      - 3306:3306
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    volumes:
      - ~/lr-mysql-data:/bitnami/mysql/data
  mongo:
    image: bitnami/mongodb:latest
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - 27017:27017
    volumes:
      - ~/lr-mongodb-data:/bitnami/mongodb
Add compose/config.json:
{
  "app": {
    "port": 3000,
    "page_size": 5,
    "token_secret": "LiteRank_in_Compose",
    "token_hours": 48
  },
  "db": {
    "file_name": "test.db",
    "dsn": "mysql://test_user:test_pass@mysql:3306/lr_book?charset=utf8mb4",
    "mongo_uri": "mongodb://mongo:27017",
    "mongo_db_name": "lr_book"
  },
  "cache": {
    "host": "redis",
    "port": 6379,
    "password": "test_pass",
    "db": 0,
    "timeout": 5000
  }
}
Add compose/.env:
REDIS_PASSWORD=test_pass
MYSQL_PASSWORD=test_pass
MYSQL_ROOT_PASSWORD=test_root_pass
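To confirm that the variables from .env are substituted correctly, you can render the effective configuration without starting any containers:

```shell
cd compose
# Print the fully resolved compose file, with ${...} variables expanded
docker compose config
```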
Caution:
.env files should be ignored in .gitignore.
Changes in .gitignore:
test.db
+.env
Run it:
cd compose
docker compose up
You should see something like this:
[+] Running 4/4
✔ Container compose-redis-1 Created 0.0s
✔ Container compose-mysql-1 Recreated 0.1s
✔ Container compose-mongo-1 Recreated 0.1s
✔ Container compose-lr-rest-books-1 Recreated 0.0s
Attaching to lr-rest-books-1, mongo-1, mysql-1, redis-1
redis-1 | redis 13:24:52.38
redis-1 | redis 13:24:52.39 Welcome to the Bitnami redis container
...
mongo-1 | mongodb 13:24:52.60 INFO ==>
mongo-1 | mongodb 13:24:52.60 INFO ==> Welcome to the Bitnami mongodb container
mongo-1 | mongodb 13:24:52.61 INFO ==> ** Starting MongoDB setup **
...
mysql-1 | mysql 13:24:52.61
mysql-1 | mysql 13:24:52.62 Welcome to the Bitnami mysql container
mysql-1 | mysql 13:24:52.63 INFO ==> ** Starting MySQL setup **
...
You no longer need to install or set up those databases manually. They're all in good hands with docker compose.
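A few everyday Compose commands are handy at this point (these flags are standard Compose options):

```shell
# Start everything in the background
docker compose up -d
# List the running services and their port mappings
docker compose ps
# Follow the logs of a single service
docker compose logs -f lr-rest-books
# Stop and remove the containers (the bind-mounted data directories survive)
docker compose down
```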
If you send some requests to your API server on port 3000, you should also see logs like this:
lr-rest-books-1 | ::ffff:192.168.65.1 - - [02/Mar/2024:12:35:11 +0000] "GET /books/4/reviews?q=masterpiece HTTP/1.1" 200 2 "-" "curl/8.1.2" - 19.527 ms
lr-rest-books-1 | ::ffff:192.168.65.1 - - [02/Mar/2024:12:35:21 +0000] "GET /books/2/reviews?q=masterpiece HTTP/1.1" 200 2 "-" "curl/8.1.2" - 3.162 ms
lr-rest-books-1 | ::ffff:192.168.65.1 - - [02/Mar/2024:12:35:26 +0000] "GET /books/2/reviews?q=masterpiece HTTP/1.1" 200 2 "-" "curl/8.1.2" - 2.242 ms
...
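Requests like the ones in the log above can be reproduced with curl; the /books/:id/reviews endpoint and q parameter are taken from the log lines, so adjust them to your actual API:

```shell
curl "http://localhost:3000/books/2/reviews?q=masterpiece"
```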
If you want to skip the API server’s docker image building step, you may tune the compose/docker-compose.yml:
@@ -1,6 +1,8 @@
services:
lr-rest-books:
- image: lrbooks-nodejs:latest
+ build:
+ context: ../
+ dockerfile: Dockerfile
ports:
- 3000:3000
volumes:
Run it again:
docker compose up
The compose plugin will build the image for you if needed.
[+] Building 3.5s (13/13) FINISHED docker:desktop-linux
=> [lr-rest-books internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 593B 0.0s
=> [lr-rest-books internal] load metadata for docker.io/library/node:20.11-alpine3.18 3.3s
=> [lr-rest-books auth] library/node:pull token for registry-1.docker.io 0.0s
=> [lr-rest-books internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [lr-rest-books 1/7] FROM docker.io/library/node:20.11-alpine3.18@sha256:a02826c7340c37a29179152723190bcc3044f933c925f3c2d78abb20f794de3f 0.0s
=> [lr-rest-books internal] load build context 0.2s
=> => transferring context: 2.21kB 0.2s
=> CACHED [lr-rest-books 2/7] WORKDIR /usr/src/app 0.0s
=> CACHED [lr-rest-books 3/7] COPY package*.json ./ 0.0s
=> CACHED [lr-rest-books 4/7] RUN npm install 0.0s
=> CACHED [lr-rest-books 5/7] COPY src/ /usr/src/app/src 0.0s
=> CACHED [lr-rest-books 6/7] COPY tsconfig.json ./ 0.0s
=> CACHED [lr-rest-books 7/7] RUN npm run build 0.0s
=> [lr-rest-books] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:fe1c6e9f44aee52d0eb80032cdc1d9c961f2c6c2b18aeed69c763bb3464241f0 0.0s
=> => naming to docker.io/library/compose-lr-rest-books 0.0s
[+] Running 4/4
✔ Container compose-mongo-1 Created 0.0s
✔ Container compose-redis-1 Created 0.0s
✔ Container compose-mysql-1 Created 0.0s
✔ Container compose-lr-rest-books-1 Created 0.1s
Attaching to lr-rest-books-1, mongo-1, mysql-1, redis-1
...
Now you may try all those endpoints with curl. Everything should work just as smoothly as before, if not more so.
Kubernetes
If you want to push your cloud-native solution even further, please try Kubernetes.
It's also known as K8s and is an open-source system for automating deployment, scaling, and management of containerized applications.
You can make a deployment YAML like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lr-books-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lr-books
  template:
    metadata:
      labels:
        app: lr-books
    spec:
      containers:
        - name: lr-books-api
          image: lrbooks-nodejs:latest
          ports:
            - containerPort: 3000
And apply it with this command:
kubectl apply -f lr-books-deployment.yaml
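A Deployment alone isn't reachable from outside the cluster. A sketch of a matching Service (the name and type here are assumptions, not part of the original setup) could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lr-books-service
spec:
  type: LoadBalancer
  # Route traffic to pods carrying the Deployment's label
  selector:
    app: lr-books
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 3000  # containerPort of lr-books-api
```

Apply it the same way: kubectl apply -f lr-books-service.yaml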