Docker and Docker Compose
Docker allows developers to package their applications along with all dependencies into a single unit called a container. This ensures consistency across different environments, such as development, testing, and production, reducing the "it works on my machine" problem.
Install Docker: https://docs.docker.com/engine/install/
Dockerfile
A Dockerfile is a text file containing the instructions for building a Docker image, which in turn serves as the blueprint for launching Docker containers.
Add Dockerfile:
# Use Alpine Linux with Python 3 as base image
FROM python:3.9.18-alpine3.19
# Set the working directory in the container
WORKDIR /app
# Copy the dependencies file to the working directory
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install gunicorn
# Copy the current directory contents into the container at /app
COPY books/ /app/books
COPY main.py /app
# Expose port
EXPOSE 5000
# Command to run the Flask application with Gunicorn
CMD ["gunicorn", "-w", "2", "-b", "0.0.0.0:5000", "main:app"]
Alpine Linux is a lightweight and secure Linux distribution that is particularly well-suited for containerized environments, embedded systems, and resource-constrained environments where efficiency and security are paramount.
Build your Docker image:
docker build . -t lrbooks-py:latest
Note:
Use sudo docker ... in case you run into permission issues.
Check your images with:
docker images
Result:
REPOSITORY TAG IMAGE ID CREATED SIZE
lrbooks-py latest 741a7999d19f 9 seconds ago 74.1MB
...
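Before wiring up Compose, you can sanity-check the image by running it on its own. This is just a sketch: the app still needs a config.yml and reachable databases to serve requests, so without them it will start and then fail on its first database connection.

```shell
# Run the image directly to verify it starts; the config.yml mount
# path is an assumption based on the compose setup described below
docker run --rm -p 5000:5000 \
  -v "$(pwd)/config.yml:/app/config.yml" \
  lrbooks-py:latest
```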
Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to use a YAML file to configure your application's services, networks, and volumes, and then spin up all the containers required to run your application with a single command.
Note:
The easiest and recommended way to get Docker Compose is to install Docker Desktop. Docker Desktop includes Docker Compose along with Docker Engine and Docker CLI, which are Compose prerequisites. Install Compose separately if needed: https://docs.docker.com/compose/install/
Add compose/docker-compose.yml:
services:
  lr-rest-books:
    image: lrbooks-py:latest
    ports:
      - 5000:5000
    volumes:
      - ./config.yml:/app/config.yml
    depends_on:
      mysql:
        condition: service_healthy
      redis:
        condition: service_started
      mongo:
        condition: service_started
  redis:
    image: docker.io/bitnami/redis:7.0
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    ports:
      - 6379:6379
  mysql:
    image: docker.io/bitnami/mysql:5.7.43
    environment:
      - MYSQL_DATABASE=lr_book
      - MYSQL_USER=test_user
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    ports:
      - 3306:3306
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
    volumes:
      - ~/lr-mysql-data:/bitnami/mysql/data
  mongo:
    image: bitnami/mongodb:latest
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - 27017:27017
    volumes:
      - ~/lr-mongodb-data:/bitnami/mongodb
Add compose/config.yml:
app:
  port: 5000
  page_size: 5
  token_secret: "I_Love_LiteRank"
  token_hours: 24
db:
  file_name: "test.db"
  host: mysql
  port: 3306
  user: "test_user"
  password: "test_pass"
  database: "lr_book"
  mongo_uri: "mongodb://mongo:27017"
  mongo_db_name: "lr_book"
cache:
  host: redis
  port: 6379
  password: "test_pass"
  db: 0
Add compose/.env:
REDIS_PASSWORD=test_pass
MYSQL_PASSWORD=test_pass
MYSQL_ROOT_PASSWORD=test_root_pass
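Before starting anything, you can verify that Compose picks up these variables: docker compose config renders the final configuration with the ${...} placeholders substituted from .env.

```shell
# Print the resolved configuration; the REDIS_PASSWORD and MYSQL_*
# placeholders should appear replaced with the values from .env
cd compose
docker compose config
```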
Caution:
.env files should be listed in .gitignore so that secrets are never committed.
Changes in .gitignore:
lrFlaskEnv/
test.db
+.env
Run it:
cd compose
docker compose up
You should see something like this:
[+] Running 4/4
✔ Container compose-mongo-1 Created 0.0s
✔ Container compose-redis-1 Created 0.0s
✔ Container compose-mysql-1 Recreated 0.1s
✔ Container compose-lr-rest-books-1 Recreated 0.0s
Attaching to lr-rest-books-1, mongo-1, mysql-1, redis-1
redis-1 | redis 13:24:52.38
redis-1 | redis 13:24:52.39 Welcome to the Bitnami redis container
...
mongo-1 | mongodb 13:24:52.60 INFO ==>
mongo-1 | mongodb 13:24:52.60 INFO ==> Welcome to the Bitnami mongodb container
mongo-1 | mongodb 13:24:52.61 INFO ==> ** Starting MongoDB setup **
...
mysql-1 | mysql 13:24:52.61
mysql-1 | mysql 13:24:52.62 Welcome to the Bitnami mysql container
mysql-1 | mysql 13:24:52.63 INFO ==> ** Starting MySQL setup **
...
You no longer need to install or set up those databases manually; Docker Compose takes care of all of them.
If you send some requests to your API server on port 5000, you should get the same responses as before.
lr-rest-books-1 | [2024-03-08 15:46:08 +0000] [8] [DEBUG] GET /books
lr-rest-books-1 | [2024-03-08 15:46:09 +0000] [8] [DEBUG] GET /books
lr-rest-books-1 | [2024-03-08 15:46:10 +0000] [8] [DEBUG] GET /books
lr-rest-books-1 | [2024-03-08 15:46:11 +0000] [8] [DEBUG] GET /books
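For example, requests like the following produce those log lines. The /books endpoint is taken from the logs above; any other routes depend on what your Flask app defines.

```shell
# Each request shows up as a "GET /books" line
# in the gunicorn debug log of the lr-rest-books service
curl http://localhost:5000/books
```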
If you want to skip building the API server's Docker image manually, you may tweak compose/docker-compose.yml:
@@ -1,6 +1,8 @@
services:
lr-rest-books:
- image: lrbooks-py:latest
+ build:
+ context: ../
+ dockerfile: Dockerfile
ports:
- 5000:5000
volumes:
Run again:
docker compose up
The compose plugin will build the image for you if needed.
[+] Building 2.8s (13/13) FINISHED docker:desktop-linux
=> [lr-rest-books internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 633B 0.0s
=> [lr-rest-books internal] load metadata for docker.io/library/python:3.9.18-alpine3.19 2.7s
=> [lr-rest-books auth] library/python:pull token for registry-1.docker.io 0.0s
=> [lr-rest-books internal] load .dockerignore 0.0s
=> => transferring context: 89B 0.0s
=> [lr-rest-books 1/7] FROM docker.io/library/python:3.9.18-alpine3.19@sha256:ce83ae657ad10635ea43ecd5efb6ca50bec62183148e37fba075e18a8a34868f 0.0s
=> [lr-rest-books internal] load build context 0.0s
=> => transferring context: 2.37kB 0.0s
=> CACHED [lr-rest-books 2/7] WORKDIR /app 0.0s
=> CACHED [lr-rest-books 3/7] COPY requirements.txt . 0.0s
=> CACHED [lr-rest-books 4/7] RUN pip install --no-cache-dir -r requirements.txt 0.0s
=> CACHED [lr-rest-books 5/7] RUN pip install gunicorn 0.0s
=> CACHED [lr-rest-books 6/7] COPY books/ /app/books 0.0s
=> CACHED [lr-rest-books 7/7] COPY main.py /app 0.0s
=> [lr-rest-books] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:4fb63b26ccd7a5803055ec9898b324a6a78c0b0f5ae5ca8f516cbcdde5b921f3 0.0s
=> => naming to docker.io/library/compose-lr-rest-books
...
Now, you may try all those endpoints with curl. Everything should work as smoothly as before.
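A few standard Compose commands are handy at this point for managing the stack:

```shell
docker compose up -d                  # start everything in the background
docker compose logs -f lr-rest-books  # follow the API server's logs
docker compose up --build             # force a rebuild of the API image
docker compose down                   # stop and remove the containers
```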
Kubernetes
If you want to push your cloud-native solution even further, try Kubernetes.
Also known as K8s, it is an open-source system for automating deployment, scaling, and management of containerized applications.
You can make a deployment yaml like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lr-books-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lr-books
  template:
    metadata:
      labels:
        app: lr-books
    spec:
      containers:
        - name: lr-books-api
          image: lrbooks-py:latest
          ports:
            - containerPort: 5000
And apply it with this command:
kubectl apply -f lr-books-deployment.yaml
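To check the rollout and reach the pods, the usual kubectl commands apply. Note that your cluster's nodes must be able to pull lrbooks-py:latest, so in practice you would push the image to a registry first. The NodePort choice below is an assumption; use LoadBalancer on a cloud cluster.

```shell
# Verify the three replicas are up
kubectl get deployment lr-books-deployment
kubectl get pods -l app=lr-books

# Expose the deployment as a Service
kubectl expose deployment lr-books-deployment \
  --type=NodePort --port=5000 --name=lr-books-service
```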