Dockerfile
Docker allows developers to package their applications along with all dependencies into a single unit called a container. This ensures consistency across different environments, such as development, testing, and production, reducing the "it works on my machine" problem.
Install Docker: https://docs.docker.com/engine/install/
Polish Code
Use port config value in main.rs
Update src/main.rs:
@@ -10,6 +10,8 @@ async fn main() {
let c = infrastructure::parse_config(CONFIG_FILE);
let wire_helper = application::WireHelper::new(&c).expect("Failed to create WireHelper");
let app = adapter::make_router(&wire_helper);
- let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
+ let listener = tokio::net::TcpListener::bind(format!("0.0.0.0:{}", c.app.port))
+ .await
+ .unwrap();
axum::serve(listener, app).await.unwrap();
}
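For this change to work, the Config value returned by parse_config must expose an app.port field. The actual struct layout is not shown in this section; the sketch below uses hypothetical AppConfig/Config types purely to illustrate how the bind address is assembled.

```rust
// Hypothetical config types: the real project deserializes these from
// CONFIG_FILE, and the field layout here is only an assumption.
struct AppConfig {
    port: u16,
}

struct Config {
    app: AppConfig,
}

// Build the bind address the same way the updated main.rs does.
fn bind_addr(c: &Config) -> String {
    format!("0.0.0.0:{}", c.app.port)
}

fn main() {
    let c = Config { app: AppConfig { port: 3000 } };
    // The listener now honors the configured port instead of a hard-coded 3000.
    assert_eq!(bind_addr(&c), "0.0.0.0:3000");
    println!("{}", bind_addr(&c));
}
```

Changing app.port in the config file is now enough to move the server to another port; no recompile of the address literal is needed.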
Remove unnecessary question mark
Update src/application/executor/book_operator.rs:
@@ -14,10 +14,10 @@ impl BookOperator {
}
pub async fn create_book(&self, b: model::Book) -> Result<String, Box<dyn Error>> {
- Ok(self.book_manager.index_book(&b).await?)
+ self.book_manager.index_book(&b).await
}
pub async fn search_books(&self, q: &str) -> Result<Vec<model::Book>, Box<dyn Error>> {
- Ok(self.book_manager.search_books(q).await?)
+ self.book_manager.search_books(q).await
}
}
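The reason the question mark is unnecessary here: when the inner call already returns a Result with a matching error type, Ok(expr?) unwraps the value only to immediately rewrap it. A standalone illustration of the pattern, using a stdlib parse error instead of the project's Box<dyn Error>:

```rust
use std::num::ParseIntError;

fn inner() -> Result<i32, ParseIntError> {
    "42".parse::<i32>()
}

// Redundant: `?` unwraps the Ok value, then `Ok(...)` rewraps it.
fn verbose() -> Result<i32, ParseIntError> {
    Ok(inner()?)
}

// Cleaner: when the return types already match, pass the Result through.
fn concise() -> Result<i32, ParseIntError> {
    inner()
}

fn main() {
    // Both functions behave identically; only the noise differs.
    assert_eq!(verbose().unwrap(), concise().unwrap());
    println!("{:?}", concise());
}
```

Clippy flags this pattern (the needless_question_mark lint), so the Makefile's lint target added below would have caught it as well.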
Use let Some
Update src/infrastructure/search/es.rs:
@@ -57,10 +57,12 @@ impl BookManager for ElasticSearchEngine {
.await?;
let response_body = response.json::<Value>().await?;
let mut books: Vec<model::Book> = vec![];
- for hit in response_body["hits"]["hits"].as_array().unwrap() {
- let source = hit["_source"].clone();
- let book: model::Book = serde_json::from_value(source).unwrap();
- books.push(book);
+ if let Some(hits) = response_body["hits"]["hits"].as_array() {
+ for hit in hits {
+ let source = hit["_source"].clone();
+ let book: model::Book = serde_json::from_value(source).unwrap();
+ books.push(book);
+ }
}
Ok(books)
}
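The point of the if let Some guard: as_array() returns an Option, and unwrap() would panic whenever the response carries no hits array (for example, a malformed or error response from Elasticsearch). A stripped-down sketch of the pattern, with a plain Option standing in for the serde_json value:

```rust
// Stand-in for response_body["hits"]["hits"].as_array(): may be None.
fn collect_titles(hits: Option<Vec<&str>>) -> Vec<String> {
    let mut books = vec![];
    // `if let Some` simply skips the loop when there is nothing to iterate,
    // instead of panicking the way `.unwrap()` would.
    if let Some(hits) = hits {
        for hit in hits {
            books.push(hit.to_string());
        }
    }
    books
}

fn main() {
    assert_eq!(
        collect_titles(Some(vec!["Rust in Action"])),
        vec!["Rust in Action".to_string()]
    );
    // The None case now yields an empty result rather than a crash.
    assert!(collect_titles(None).is_empty());
}
```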
Add Makefile
A Makefile is a special file used in software development projects, particularly on Unix-like operating systems, to automate compiling and building executable programs or libraries from source code.
Add Makefile:
# Binary name
BINARY_NAME=lr_ft_books

.PHONY: lint
lint:
	@echo "Linting..."
	cargo clippy

.PHONY: build
build:
	@echo "Building $(BINARY_NAME)..."
	cargo build --release --bin $(BINARY_NAME)
Clippy is a well-known linting tool for Rust.
Update Cargo.toml to include the bin target:
@@ -3,6 +3,10 @@ name = "lr_fulltext_search_rust"
version = "0.1.0"
edition = "2021"
+[[bin]]
+name = "lr_ft_books"
+path = "src/main.rs"
+
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
Run make build to build a binary:
make build
This is equivalent to cargo build --release --bin lr_ft_books. It will create a binary file named lr_ft_books in your project’s target/release folder.
Then, you can just run it as a standalone server:
./target/release/lr_ft_books
Dockerfile for the service
Add Dockerfile:
# Use a Rust Docker image as the base
FROM rust:1.77-alpine3.19 AS builder
# Set the working directory inside the container
WORKDIR /app
# Install necessary packages
RUN apk update && \
apk add --no-cache musl-dev pkgconfig openssl-dev
# Copy the Rust project files to the container
COPY Cargo.toml .
COPY src/ /app/src
# Define a build argument with a default value
ARG BINARY_NAME=lr_ft_books
# Build the Rust project
# See: https://github.com/rust-lang/rust/issues/115430
RUN RUSTFLAGS="-Ctarget-feature=-crt-static" cargo build --release --bin ${BINARY_NAME}
# Start a new stage from Alpine Linux
FROM alpine:3.19
# Install required packages
RUN apk update && \
apk add --no-cache libgcc
# Define an environment variable from the build argument
ENV BINARY_NAME=lr_ft_books
# Set the working directory inside the container
WORKDIR /app
# Copy the built binary from the previous stage to the current stage
COPY --from=builder /app/target/release/${BINARY_NAME} .
# Command to run the binary when the container starts
CMD ./${BINARY_NAME}
Alpine Linux is a lightweight and secure Linux distribution that is particularly well-suited for containers, embedded systems, and other resource-constrained environments where efficiency and security are paramount.
With this Dockerfile, we're ready to run the service alongside Elasticsearch via Docker Compose.