Cache: Redis

Big queries in MySQL or large aggregations in MongoDB may take seconds or even minutes to finish. You definitely don't want to run these operations on every user request.

Caching the results in memory is a great way to alleviate this issue. If your API server runs on a single machine or node, simply keeping the results in an in-memory HashMap or Dictionary solves the problem. If multiple machines or nodes run the API server, though, they don't share a common memory space, and a standalone cache like Redis is your best choice.
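
For a single process, such a cache can be as simple as a Map checked before running the expensive query. A minimal sketch (illustration only, not part of the project code; getBooksFromDB stands in for your real database call):

// Naive per-process cache: only visible to this one Node.js process,
// so it won't help once the API runs on several machines.
const localCache = new Map<string, string>();

async function getBooksCached(
  getBooksFromDB: () => Promise<object[]>
): Promise<object[]> {
  const hit = localCache.get("books");
  if (hit !== undefined) {
    return JSON.parse(hit);
  }
  const books = await getBooksFromDB();
  localCache.set("books", JSON.stringify(books));
  return books;
}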

Try Redis

  1. Install Redis on your machine and start it.

  2. Add the ioredis dependency.

npm i ioredis

  3. Update code.

Add infrastructure/cache/helper.ts:

export interface CacheHelper {
  save(key: string, value: string): Promise<void>;
  load(key: string): Promise<string | null>;
}

Use Redis in infrastructure/cache/redis.ts:

import Redis, { RedisOptions } from "ioredis";

import { CacheConfig } from "@/infrastructure/config/config";
import { CacheHelper } from "./helper";

const defaultTTL = 3600; // seconds

export class RedisCache implements CacheHelper {
  private client: Redis;

  constructor(c: CacheConfig) {
    const options: RedisOptions = {
      host: c.host,
      port: c.port,
      password: c.password,
      db: c.db,
      commandTimeout: c.timeout,
    };
    this.client = new Redis(options);
    // ioredis connects lazily; log once the connection is actually established
    this.client.on("connect", () => console.log("Connected to Redis"));
  }

  async save(key: string, value: string): Promise<void> {
    await this.client.set(key, value, "EX", defaultTTL);
  }

  async load(key: string): Promise<string | null> {
    return await this.client.get(key);
  }

  close(): void {
    this.client.disconnect();
  }
}

Export these in infrastructure/cache/index.ts:

export { RedisCache } from "./redis";
export { CacheHelper } from "./helper";
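
Before wiring the cache into the app, you can sanity-check RedisCache on its own. A quick sketch (not part of the project; the constructor argument uses placeholder values matching the config.json shown below):

import { RedisCache } from "@/infrastructure/cache";

async function smokeTest(): Promise<void> {
  // Placeholder connection settings; use whatever matches your local Redis.
  const cache = new RedisCache({
    host: "localhost",
    port: 6379,
    password: "test_pass",
    db: 0,
    timeout: 5000,
  });
  await cache.save("greeting", "hello");
  console.log(await cache.load("greeting")); // "hello"
  console.log(await cache.load("missing-key")); // null
  cache.close();
}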

Add related config items in infrastructure/config/config.ts:

@@ -11,9 +11,18 @@ interface ApplicationConfig {
   port: number;
 }
 
+export interface CacheConfig {
+  host: string;
+  port: number;
+  password: string;
+  db: number;
+  timeout: number; // in milliseconds
+}
+
 export interface Config {
   app: ApplicationConfig;
   db: DBConfig;
+  cache: CacheConfig;
 }
 
 export function parseConfig(filename: string): Config {

Put the new values in config.json:

@@ -7,5 +7,12 @@
     "dsn": "mysql://test_user:test_pass@127.0.0.1:3306/lr_book?charset=utf8mb4",
     "mongo_uri": "mongodb://localhost:27017",
     "mongo_db_name": "lr_book"
+  },
+  "cache": {
+    "host": "localhost",
+    "port": 6379,
+    "password": "test_pass",
+    "db": 0,
+    "timeout": 5000
   }
 }
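
No change to parseConfig itself should be needed: assuming it simply reads and JSON-parses the file (as in earlier chapters), the new cache block comes back on the parsed Config object. A quick sketch, assuming parseConfig is re-exported from @/infrastructure/config:

import { parseConfig } from "@/infrastructure/config";

const config = parseConfig("config.json");
console.log(config.cache.host, config.cache.port); // "localhost" 6379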

Wire in the Redis connection in application/wire_helper.ts:

@@ -1,11 +1,13 @@
-import { MySQLPersistence, MongoPersistence } from "@/infrastructure/database";
 import { Config } from "@/infrastructure/config";
 import { BookManager, ReviewManager } from "@/domain/gateway";
+import { MySQLPersistence, MongoPersistence } from "@/infrastructure/database";
+import { RedisCache, CacheHelper } from "@/infrastructure/cache";
 
 // WireHelper is the helper for dependency injection
 export class WireHelper {
   private sql_persistence: MySQLPersistence;
   private no_sql_persistence: MongoPersistence;
+  private kv_store: RedisCache;
 
   constructor(c: Config) {
     this.sql_persistence = new MySQLPersistence(c.db.dsn);
@@ -13,6 +15,7 @@ export class WireHelper {
       c.db.mongo_uri,
       c.db.mongo_db_name
     );
+    this.kv_store = new RedisCache(c.cache);
   }
 
   bookManager(): BookManager {
@@ -22,4 +25,8 @@ export class WireHelper {
   reviewManager(): ReviewManager {
     return this.no_sql_persistence;
   }
+
+  cacheHelper(): CacheHelper {
+    return this.kv_store;
+  }
 }

Suppose listing all books is a resource-intensive query in your database. In that case, you may choose to store the query result in Redis for faster retrieval in the future.

Changes in application/executor/book_operator.ts:

@@ -1,11 +1,16 @@
 import { BookManager } from "@/domain/gateway";
 import { Book } from "@/domain/model";
+import { CacheHelper } from "@/infrastructure/cache";
+
+const booksKey = "lr-books";
 
 export class BookOperator {
   private bookManager: BookManager;
+  private cacheHelper: CacheHelper;
 
-  constructor(b: BookManager) {
+  constructor(b: BookManager, c: CacheHelper) {
     this.bookManager = b;
+    this.cacheHelper = c;
   }
 
   async createBook(b: Book): Promise<Book> {
@@ -19,7 +24,13 @@ export class BookOperator {
   }
 
   async getBooks(): Promise<Book[]> {
-    return await this.bookManager.getBooks();
+    const cache_value = await this.cacheHelper.load(booksKey);
+    if (cache_value) {
+      return JSON.parse(cache_value);
+    }
+    const books = await this.bookManager.getBooks();
+    await this.cacheHelper.save(booksKey, JSON.stringify(books));
+    return books;
   }
 
   async updateBook(id: number, b: Book): Promise<Book> {
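
One thing to be aware of: with this pattern the cached list can be stale for up to defaultTTL (an hour) after a book is created or updated. If that matters, one common option (not part of this chapter's code) is to drop the cached key on every write. That needs a delete operation on the cache; a hedged sketch, assuming you extend the interface and RedisCache with a remove method backed by Redis DEL:

// Hypothetical extension of infrastructure/cache/helper.ts:
export interface CacheHelper {
  save(key: string, value: string): Promise<void>;
  load(key: string): Promise<string | null>;
  remove(key: string): Promise<void>; // new: invalidate a key
}

// Hypothetical addition to RedisCache in infrastructure/cache/redis.ts:
//   async remove(key: string): Promise<void> {
//     await this.client.del(key);
//   }
//
// Then, after a successful createBook or updateBook in BookOperator:
//   await this.cacheHelper.remove(booksKey);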

Make a small tweak in adapter/router.ts:

@@ -158,7 +158,7 @@ class RestHandler {
 // Create router
 function MakeRouter(wireHelper: WireHelper): express.Router {
   const restHandler = new RestHandler(
-    new BookOperator(wireHelper.bookManager()),
+    new BookOperator(wireHelper.bookManager(), wireHelper.cacheHelper()),
     new ReviewOperator(wireHelper.reviewManager())
   );
 
@@ -187,8 +187,12 @@ export function InitApp(wireHelper: WireHelper): express.Express {
   // Middleware to parse JSON bodies
   app.use(express.json());
 
-  // Use Morgan middleware with predefined 'combined' format
-  app.use(morgan("combined"));
+  // Use Morgan middleware with predefined tokens
+  app.use(
+    morgan(
+      ':remote-addr - :remote-user [:date[clf]] ":method :url HTTP/:http-version" :status :res[content-length] ":referrer" ":user-agent" - :response-time ms'
+    )
+  );
 
   // Define a health endpoint handler
   app.get("/", (req: Request, res: Response) => {

morgan("combined") log output doesn’t include response time, so we customize the predefined format here.

Those are all the changes you need to incorporate Redis. Now let's try the endpoint powered by the new cache.

Try with curl

List all books:

curl -X GET http://localhost:3000/books

The result is the same as before, but performance improves noticeably. You can see it in the Morgan logs: the first request hits the database and takes about 10.9 ms, while the following cached requests come back in roughly 1 ms.

::ffff:127.0.0.1 - - [02/Mar/2024:04:58:20 +0000] "GET /books HTTP/1.1" 200 483 "-" "curl/8.1.2" - 10.861 ms
::ffff:127.0.0.1 - - [02/Mar/2024:04:58:23 +0000] "GET /books HTTP/1.1" 200 483 "-" "curl/8.1.2" - 1.272 ms
::ffff:127.0.0.1 - - [02/Mar/2024:04:58:23 +0000] "GET /books HTTP/1.1" 200 483 "-" "curl/8.1.2" - 1.093 ms
::ffff:127.0.0.1 - - [02/Mar/2024:04:58:30 +0000] "GET /books HTTP/1.1" 200 483 "-" "curl/8.1.2" - 1.145 ms

Use redis-cli to check the values in Redis:

redis-cli

Play with keys in the redis client shell:

127.0.0.1:6379> keys *
1) "lr-books"
127.0.0.1:6379> get lr-books
"[{\"id\":2,\"title\":\"Sample Book 222\",\"author\":\"John Doe\",\"published_at\":\"2023-01-01\",\"description\":\"A sample book description\",\"isbn\":\"1234567890\",\"total_pages\":200,\"created_at\":\"2024-03-01T04:11:57.000Z\",\"updated_at\":\"2024-03-01T04:11:57.000Z\"},{\"id\":3,\"title\":\"Sample Book\",\"author\":\"John Doe\",\"published_at\":\"2023-01-01\",\"description\":\"A sample book description\",\"isbn\":\"1234567890\",\"total_pages\":200,\"created_at\":\"2024-03-01T04:40:16.000Z\",\"updated_at\":\"2024-03-01T04:40:16.000Z\"}]"
127.0.0.1:6379> del lr-books
(integer) 1

Awesome! Redis is at your service now! 💐