# Redis Caching Patterns for Node.js Applications

Production caching — cache-aside, write-through, session management, rate limiting with sorted sets, pub/sub, and invalidation strategies.
Redis is the go-to caching layer for Node.js. Beyond key-value storage, it supports sorted sets, pub/sub, and atomic operations.
## Setup

```typescript
import { createClient } from "redis";

const redis = createClient({
  url: process.env.REDIS_URL,
  // Back off linearly between reconnect attempts, capped at 5 seconds
  socket: { reconnectStrategy: (n) => Math.min(n * 100, 5000) },
});

redis.on("error", (err) => console.error("Redis:", err));
await redis.connect();
```
## 1. Cache-Aside Pattern

Read from the cache first; on a miss, load from the database and populate the cache. Writes go to the database and delete the cached entry, so the next read repopulates it with fresh data.

```typescript
class ProductService {
  private readonly TTL = 3600; // seconds

  async getProduct(id: string): Promise<Product> {
    const key = `product:${id}`;
    const cached = await redis.get(key);
    if (cached) return JSON.parse(cached);

    // Cache miss: load from the database, then populate the cache
    const product = await db.products.findUnique({ where: { id } });
    if (!product) throw new Error("Not found");
    await redis.setEx(key, this.TTL, JSON.stringify(product));
    return product;
  }

  async updateProduct(id: string, data: Partial<Product>): Promise<Product> {
    // Write to the database, then invalidate the cached copy
    const product = await db.products.update({ where: { id }, data });
    await redis.del(`product:${id}`);
    return product;
  }
}
```
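Deleting a single `product:${id}` key works for one entity, but list and query caches that reference the product are hard to enumerate. One common invalidation strategy is versioned namespaces: embed a version number in every key and bump it to invalidate the whole group at once, letting stale entries age out via TTL. The sketch below is my own illustration of the key-building logic; in production the version counter itself would live in Redis (e.g. `INCR` on a `product:version` key) rather than in process memory.

```typescript
// Versioned-namespace invalidation: bumping the version makes every key
// built under the old version unreachable; stale entries expire via TTL.
const versions = new Map<string, number>();

function getVersion(namespace: string): number {
  return versions.get(namespace) ?? 1;
}

// Build a key like "product:v1:list:electronics"
function versionedKey(namespace: string, ...parts: string[]): string {
  return `${namespace}:v${getVersion(namespace)}:${parts.join(":")}`;
}

// Invalidate the entire namespace in O(1) by bumping the version.
function bumpVersion(namespace: string): number {
  const next = getVersion(namespace) + 1;
  versions.set(namespace, next);
  return next;
}
```

`versionedKey("product", "list", "electronics")` yields `product:v1:list:electronics`; after `bumpVersion("product")` the same call yields `product:v2:...`, so every v1 entry is effectively invalidated without scanning for keys.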
## 2. Cache Stampede Prevention

When a hot key expires, every concurrent request misses at once and hammers the database. A short-lived lock ensures only one caller recomputes the value while the rest wait and retry.

```typescript
async function getWithLock<T>(key: string, ttl: number, fn: () => Promise<T>): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  // NX: only one caller acquires the lock. EX: auto-release after 10s
  // in case this process dies before the finally block runs.
  const lockKey = `lock:${key}`;
  const acquired = await redis.set(lockKey, "1", { EX: 10, NX: true });

  if (acquired) {
    try {
      const data = await fn();
      await redis.setEx(key, ttl, JSON.stringify(data));
      return data;
    } finally {
      await redis.del(lockKey);
    }
  }

  // Another caller holds the lock: wait briefly, then retry (which re-checks the cache)
  await new Promise((r) => setTimeout(r, 100));
  return getWithLock(key, ttl, fn);
}
```
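Locking handles concurrent misses on one key, but keys written in the same burst still expire together and miss together. A complementary mitigation (my own helper, not part of the snippet above) is to randomize TTLs slightly so expirations spread out over time:

```typescript
// Spread cache expirations by randomizing the TTL within ±spread.
// jitteredTTL(3600, 0.1) yields a TTL between 3240 and 3960 seconds,
// so keys cached in the same burst do not all expire in the same second.
function jitteredTTL(baseSeconds: number, spread = 0.1): number {
  const delta = baseSeconds * spread;
  return Math.round(baseSeconds - delta + Math.random() * 2 * delta);
}
```

Use it wherever a fixed TTL is passed today, e.g. `redis.setEx(key, jitteredTTL(3600), payload)`.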
## 3. Session Management

Sessions are stored as JSON under `session:<id>` with a sliding expiration: each read pushes the TTL forward, so active users stay logged in while idle sessions expire on their own.

```typescript
import { randomUUID } from "node:crypto";

const SESSION_TTL = 86400; // 24 hours, in seconds

async function createSession(userId: string): Promise<string> {
  const id = randomUUID();
  await redis.setEx(`session:${id}`, SESSION_TTL, JSON.stringify({ userId, createdAt: Date.now() }));
  return id;
}

async function getSession(id: string) {
  const raw = await redis.get(`session:${id}`);
  if (!raw) return null;
  await redis.expire(`session:${id}`, SESSION_TTL); // Sliding expiration
  return JSON.parse(raw);
}
```
## 4. Rate Limiting with Sorted Sets

A sorted set per client holds one member per request, scored by timestamp. Each check trims entries that fell outside the window, records the current request, and counts what remains: a true sliding window, executed in a single round trip via MULTI.

```typescript
import { randomUUID } from "node:crypto";

async function checkRateLimit(
  id: string,
  limit: number,
  windowSec: number
): Promise<{ allowed: boolean; remaining: number }> {
  const key = `ratelimit:${id}`;
  const now = Date.now();
  const windowStart = now - windowSec * 1000;

  const pipeline = redis.multi();
  pipeline.zRemRangeByScore(key, "-inf", windowStart); // drop entries outside the window
  // Unique member per request: a bare timestamp would collide (and be
  // silently deduplicated) when two requests land in the same millisecond
  pipeline.zAdd(key, { score: now, value: `${now}:${randomUUID()}` });
  pipeline.zCard(key); // count requests in the window, including this one
  pipeline.expire(key, windowSec); // let idle keys expire
  const results = await pipeline.exec();

  const count = results[2] as number;
  return { allowed: count <= limit, remaining: Math.max(0, limit - count) };
}
```
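The pipeline is easier to reason about with a local model. The sketch below reimplements the same sliding-window logic with a plain array standing in for the sorted set (timestamps play the role of scores); it is for illustration and testing only, not a replacement for the Redis version, since it lives in one process's memory.

```typescript
// In-memory model of the sliding-window limiter: an array of request
// timestamps plays the role of the sorted set.
class SlidingWindow {
  private hits: number[] = [];
  constructor(private limit: number, private windowMs: number) {}

  check(now: number): { allowed: boolean; remaining: number } {
    const windowStart = now - this.windowMs;
    // ZREMRANGEBYSCORE: drop entries at or before the window start
    this.hits = this.hits.filter((t) => t > windowStart);
    // ZADD: record this request
    this.hits.push(now);
    // ZCARD: count requests in the window, including this one
    const count = this.hits.length;
    return { allowed: count <= this.limit, remaining: Math.max(0, this.limit - count) };
  }
}
```

With a limit of 3 per second, the fourth request inside the window is rejected; once the window slides past the earlier hits, requests are allowed again.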
## 5. Pub/Sub

A connection in subscriber mode cannot issue regular commands, so publisher and subscriber each get their own connection via `duplicate()`. Note that Redis pub/sub is fire-and-forget: subscribers only receive messages published after they subscribe, so subscribe first.

```typescript
const pub = redis.duplicate();
const sub = redis.duplicate();
await Promise.all([pub.connect(), sub.connect()]);

// Subscribe before publishing — messages are not queued for late subscribers
await sub.subscribe("inventory:updated", (msg) => {
  const { productId, qty } = JSON.parse(msg);
  // Broadcast to WebSocket clients
});

// Publish an event
await pub.publish("inventory:updated", JSON.stringify({ productId: "123", qty: 50 }));
```
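A common use of this channel is cross-instance cache invalidation: each app server subscribes and drops its local copy when another instance publishes an update. The fan-out semantics can be modeled without a broker using Node's built-in `EventEmitter`; the channel name and handler below are illustrative, with `emit`/`on` standing in for `publish`/`subscribe`.

```typescript
import { EventEmitter } from "node:events";

// Local model of pub/sub fan-out: every subscriber to a channel
// receives each published message.
const bus = new EventEmitter();

const invalidated: string[] = [];
bus.on("inventory:updated", (msg: string) => {
  const { productId } = JSON.parse(msg);
  invalidated.push(productId); // e.g. evict this product from a local cache
});

bus.emit("inventory:updated", JSON.stringify({ productId: "123", qty: 50 }));
```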
## Anti-Patterns

| Anti-Pattern | Problem | Fix |
|--------------|---------|-----|
| No TTL | Memory leak | Always set TTL |
| Caching everything | Stale data, memory waste | Cache expensive reads |
| Values > 100KB | Slow ops | Store references |
| One key for all users | Invalidation storms | Per-resource keys |
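The "Values > 100KB" rule can be enforced with a small guard: serialize once, measure the byte length, and skip the cache for oversized payloads (storing only an identifier and falling back to the database). The threshold and helper name below are my own, for illustration:

```typescript
// Guard against oversized cache values: serialize once, measure the byte
// length, and only return a cacheable payload when under the threshold.
const MAX_CACHE_BYTES = 100 * 1024; // 100 KB

function serializeIfCacheable(value: unknown, maxBytes = MAX_CACHE_BYTES): string | null {
  const json = JSON.stringify(value);
  return Buffer.byteLength(json, "utf8") <= maxBytes ? json : null;
}
```

Usage: `const payload = serializeIfCacheable(product); if (payload) await redis.setEx(key, ttl, payload);` — oversized objects simply skip the cache.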
## Conclusion

Redis is most powerful when used for the right operations: cache-aside for DB offloading, sorted sets for rate limiting, atomic operations for distributed locks. Always measure hit rates and plan invalidation before writing cache code.