
Redis: More Than a Cache (From Cache-Aside to Eviction)
Still using Redis as just a "fast store"? This post covers the tradeoffs of Look-aside and Write-Through strategies, the LRU algorithm, and the RDB/AOF persistence settings that keep your data from vanishing.

I first encountered Redis while studying what happens when DB queries slow down under load. The pattern is well-documented: as traffic grows, database load grows linearly, and eventually response times hit a wall.
"Caching dramatically improves performance" — I'd heard that many times. But I didn't really understand the mechanics, or which strategy to use. I treated Redis as just a "fast key-value store," nothing more.
Digging deeper, it turned out to be much richer than that. Different caching strategies create different tradeoffs between consistency and performance. One eviction policy change can fundamentally alter how a service behaves under memory pressure. Real-world case studies commonly show response time differences of tens of times or more between cached and uncached queries.
Cache-Aside vs Write-Through, Thundering Herd problems, how LRU actually works, RDB vs AOF tradeoffs, why single-threaded architecture is both Redis's strength and weakness — this post is my attempt to organize what I've learned.
From the CPU's perspective, disk storage (HDD or SSD) is a turtle. Hard drives have mechanical arms that need to move physically. SSDs need electrical processing to read from NAND flash memory. Disk I/O operates in milliseconds (ms).
RAM, on the other hand, operates in microseconds (μs). That's 1,000 times faster. CPU access to RAM happens in nanoseconds (ns), but including memory fetch time, we're looking at μs range.
But RAM is volatile. Power off, data gone. And it's expensive. A 1TB SSD costs around $100. 1TB of RAM? Several thousand dollars.
Redis combines the speed of RAM with disk persistence. It's an In-Memory Data Structure Store. My perception of Redis as "just a fast cache" completely changed when I learned it supports not just strings, but complex data structures: Lists, Sets, Sorted Sets, Hashes, HyperLogLog.
This wasn't just a cache. This was a database living in memory. An incredibly fast one.
When you put Redis in front of a database, you need to decide how to read and write data. At first, I thought "just read from DB if Redis doesn't have it." Turns out it's more complex than that.
This is the most common pattern. Industry standard.
async function getUser(userId) {
// 1. Check Redis first
const cached = await redis.get(`user:${userId}`);
if (cached) {
return JSON.parse(cached); // Cache Hit!
}
// 2. Not there? Query database
const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
// 3. Save to Redis for next time
await redis.set(`user:${userId}`, JSON.stringify(user), 'EX', 3600); // 1 hour TTL
return user;
}
How it works: check Redis first; on a hit, return the cached value. On a miss, query the DB, backfill Redis with a TTL, and return the result.
When I first used this pattern, I missed something important: the Cold Start problem. After server restart, Redis is empty. Every request hits the database. Sudden massive DB load.
The solution is Cache Warming. Pre-load frequently accessed data when the server starts.
async function warmUpCache() {
console.log('Warming up cache...');
const popularUsers = await db.query('SELECT * FROM users ORDER BY visit_count DESC LIMIT 100');
for (const user of popularUsers) {
await redis.set(`user:${user.id}`, JSON.stringify(user), 'EX', 3600);
}
console.log('Cache warmed!');
}
Add this to your startup script. Cold Start problem mostly solved.
When writing data, write to both Redis and DB simultaneously.
async function updateUser(userId, data) {
// 1. Write to DB
await db.query('UPDATE users SET name = ? WHERE id = ?', [data.name, userId]);
// 2. Write to Redis
await redis.set(`user:${userId}`, JSON.stringify(data), 'EX', 3600);
return data;
}
Pros: the cache and the DB are always in sync, so reads never serve stale data.
I tried Write-Through initially but gave up. When users update their profiles, you write to both Redis and DB. What if Redis temporarily goes down? Do you rollback the transaction?
Handling this properly requires distributed transaction mechanisms like Two-Phase Commit. Too complex. I went back to Cache-Aside and used short TTLs to mitigate stale data issues.
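For completeness, a minimal sketch of that fallback (the helper name is mine, not from the original code): writes update the DB and just invalidate the cached key, so the next read repopulates it the Cache-Aside way, and the short TTL bounds staleness even when invalidation fails.
async function updateUserCacheAside(userId, data) {
// 1. Write to the source of truth
await db.query('UPDATE users SET name = ? WHERE id = ?', [data.name, userId]);
// 2. Invalidate instead of updating the cache: the next getUser() call
//    repopulates it, and the TTL bounds staleness if this DEL is missed
await redis.del(`user:${userId}`);
return data;
}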
More aggressive strategy. Write to Redis first, sync to DB later.
async function updateUserScore(userId, score) {
// 1. Write to Redis immediately
await redis.set(`user:${userId}:score`, score);
// 2. Queue for later DB sync
await queue.add('sync-score', { userId, score });
}
// Separate worker handles syncing
async function syncWorker() {
const jobs = await queue.getJobs();
for (const job of jobs) {
await db.query('UPDATE users SET score = ? WHERE id = ?', [job.data.score, job.data.userId]);
}
}
Pros: writes return at Redis speed, and the DB absorbs the load in smoothed-out batches. Cons: anything that only made it to Redis is lost if Redis crashes before the sync.
This pattern works when you can tolerate some data loss. Real-time leaderboards, for example. Game scores being a few seconds behind in the database is acceptable.
My takeaway: There's no one-size-fits-all caching strategy. Read-heavy? Cache-Aside. Consistency critical? Write-Through. Write performance matters and you can tolerate loss? Write-Behind.
RAM is finite. When Redis hits maxmemory, something has to go. But what?
Initially, I didn't understand this properly. I thought "just remove old stuff first" would work. But operating a production service taught me this is more nuanced.
Redis decides what to evict based on maxmemory-policy:
- noeviction: Evict nothing; reject writes once memory is full.
- allkeys-lru: Evict the least recently used key, across all keys.
- volatile-lru: LRU eviction, but only among keys with an EXPIRE set (TTL).
- allkeys-lfu / volatile-lfu: Evict the least frequently used (Redis 4.0+).
- allkeys-random / volatile-random: Evict random keys.
- volatile-ttl: Evict the key closest to its expiration.
I started with allkeys-lru, but ran into problems when using Redis for session management. Login session data was getting evicted due to memory pressure. Users randomly got logged out.
The solution: Separate Redis instances for different purposes. Session Redis with noeviction and adequate memory. Cache Redis with allkeys-lru for efficient memory management.
Lesson learned: Don't use one Redis instance for everything. Separate by purpose.
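In config terms, the split looked roughly like this (a sketch; the memory sizes are made-up examples, not from my actual setup):
# session-redis.conf: sessions must survive memory pressure
maxmemory 2gb
maxmemory-policy noeviction
# cache-redis.conf: cache entries are expendable
maxmemory 4gb
maxmemory-policy allkeys-lru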
LRU shows up constantly in coding interviews and system design discussions. Implementing it yourself makes the concept click much faster than just reading about it.
Core idea: every access moves an entry to the "most recently used" position; when capacity is exceeded, evict from the "least recently used" end.
JavaScript's Map remembers insertion order, perfect for LRU implementation.
class LRUCache {
constructor(capacity) {
this.capacity = capacity;
this.map = new Map(); // Remembers insertion order!
}
get(key) {
if (!this.map.has(key)) return -1;
// Key point: access happened, so delete and re-insert to make it "recent"
const value = this.map.get(key);
this.map.delete(key);
this.map.set(key, value);
return value;
}
put(key, value) {
if (this.map.has(key)) {
// Existing key: delete first to update order
this.map.delete(key);
} else if (this.map.size >= this.capacity) {
// Over capacity: delete oldest (first) key
const firstKey = this.map.keys().next().value;
this.map.delete(firstKey);
}
this.map.set(key, value);
}
}
// Test
const cache = new LRUCache(2);
cache.put(1, 'A'); // {1: 'A'}
cache.put(2, 'B'); // {1: 'A', 2: 'B'}
console.log(cache.get(1)); // 'A' - {2: 'B', 1: 'A'} (1 moved to recent)
cache.put(3, 'C'); // Over capacity! Remove 2 -> {1: 'A', 3: 'C'}
console.log(cache.get(2)); // -1 (gone)
In Redis configuration:
# redis.conf
maxmemory 256mb
maxmemory-policy allkeys-lru
Bottom line: LRU exploits temporal locality. Recently accessed data is likely to be accessed again soon. When this concept clicked, I also understood CPU cache memory behavior.
"Redis is in-memory, so data disappears when you power off, right?"
Half true, half false. Redis can back up data to disk. Two methods.
Periodically (e.g., every hour, or after 1000 key changes) saves entire memory to a .rdb file like taking a photo.
# redis.conf
save 900 1 # Save if 1+ keys changed in 900 seconds (15 min)
save 300 10 # Save if 10+ keys changed in 300 seconds (5 min)
save 60 10000 # Save if 10000+ keys changed in 60 seconds
Pros: the backup is a single compact .rdb file, and restoring from it is fast.
Cons: snapshotting fork()s the process, temporarily doubling memory usage, and everything written after the last snapshot is lost on a crash.
I used RDB-only at first and experienced data loss. Configured 15-minute snapshots, then the server crashed. Lost 14 minutes of data. Thousands of user action logs gone.
Lesson learned: RDB isn't perfect backup.
Logs every write command (SET, DEL, INCR) sequentially to .aof file. On restart, replays this log from the beginning to restore data.
# redis.conf
appendonly yes
appendfsync everysec # Flush to disk every second
appendfsync options:
- always: Flush after every write command. Slow but safe.
- everysec: Flush every second. Balanced performance and safety (you lose at most about one second of writes in everysec mode).
- no: Let OS handle flushing. Fast but risky.
AOF Rewrite compacts redundant commands. Example:
SET user:1 "Alice"
SET user:1 "Bob"
SET user:1 "Charlie"
After rewrite:
SET user:1 "Charlie"
Configuration:
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
Meaning: "Auto-rewrite when AOF exceeds 64MB and is 100% larger than previous size."
After experimenting, I concluded: Enable both.
# RDB: periodic backups
save 900 1
save 300 10
save 60 10000
# AOF: real-time log
appendonly yes
appendfsync everysec
# AOF Rewrite
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
Result: AOF caps the loss window at about one second of writes, while RDB snapshots give compact restore points.
On restart, Redis prefers AOF. If AOF doesn't exist, reads RDB.
Conclusion: RDB for backup, AOF for recovery. Both needed.
"Redis is single-threaded, so it guarantees atomicity."
When I first heard this, I thought "single-threaded must be slow." But after using it in production, I understood: single-threading is Redis's greatest strength and weakness.
Redis processes all commands sequentially in one thread. Multiple clients send commands simultaneously? Redis queues them and processes one at a time.
Example: implementing a view counter.
// Normal approach (Race Condition possible)
const views = await db.query('SELECT views FROM posts WHERE id = ?', [postId]);
await db.query('UPDATE posts SET views = ? WHERE id = ?', [views + 1, postId]);
This code has problems. Two users access simultaneously:
1. User A reads views = 100.
2. User B reads views = 100.
3. User A writes views = 101.
4. User B writes views = 101.
Result: Should increment by 2, only incremented by 1.
Redis's INCR command solves this.
await redis.incr(`post:${postId}:views`);
INCR is atomic. Read and write happen as one command, no Race Condition.
I experienced this firsthand. Built a limited coupon issuance feature. DB implementation kept having concurrency issues. Switched to Redis, problems vanished.
async function issueCoupon(userId) {
const remaining = await redis.decr('coupon:remaining');
if (remaining < 0) {
await redis.incr('coupon:remaining'); // Rollback
return { success: false, message: 'Coupons sold out.' };
}
await redis.sadd('coupon:issued', userId);
return { success: true };
}
DECR is also atomic, so even with 1000 simultaneous requests, it processes in strict order.
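The same atomic-counter property generalizes beyond coupons. A fixed-window rate limiter, for instance, is a few lines on top of INCR (my own sketch, not from the coupon code above):
async function isRateLimited(userId, limit = 100) {
// One counter per user per minute window
const windowKey = `ratelimit:${userId}:${Math.floor(Date.now() / 60000)}`;
const count = await redis.incr(windowKey);
if (count === 1) {
await redis.expire(windowKey, 60); // first request in the window sets the TTL
}
return count > limit;
}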
The scary part of single-threading: one slow command blocks all subsequent commands.
My mistake: I ran this command in production for debugging.
redis-cli KEYS *
Production Redis had 1 million keys. This command ran for 5 seconds. During those 5 seconds, every API request froze. Users stared at loading spinners.
Commands to never use in production:
- KEYS *: Lists all keys. O(N).
- FLUSHALL: Deletes all data. O(N).
- SMEMBERS large-set: Fetches entire large Set. O(N).
- HGETALL large-hash: Fetches entire large Hash. O(N).
Safe alternatives:
- SCAN: Cursor-based incremental iteration. O(1) per call.
- SSCAN, HSCAN: SCAN for Sets and Hashes.
// ❌ Bad: blocking
const keys = await redis.keys('user:*');
// ✅ Good: non-blocking
let cursor = '0';
const keys = [];
do {
const result = await redis.scan(cursor, 'MATCH', 'user:*', 'COUNT', 100);
cursor = result[0];
keys.push(...result[1]);
} while (cursor !== '0');
SCAN fetches 100 keys at a time, allowing other commands to execute between iterations. No blocking.
Summary: Redis's single-threading is a double-edged sword. Guarantees atomicity, but heavy commands freeze everything.
This was one of my worst experiences.
I was caching view counts for a popular post. Set TTL to 1 hour. Exactly 1 hour later when the cache expired, 1000 users simultaneously viewed that post.
What happened? All 1000 requests missed the cache at the same moment, and every one of them fired the same query at the database.
This is called the Thundering Herd Problem or Cache Stampede. When cache expires, thousands of requests stampede the database.
Only the first request goes to DB, others wait.
async function getPostWithLock(postId) {
const cacheKey = `post:${postId}`;
const lockKey = `lock:${cacheKey}`;
// 1. Check cache
const cached = await redis.get(cacheKey);
if (cached) return JSON.parse(cached);
// 2. Try acquiring lock (10 second TTL)
const acquired = await redis.set(lockKey, '1', 'NX', 'EX', 10);
if (acquired) {
// Lock acquired: access DB
try {
const post = await db.query('SELECT * FROM posts WHERE id = ?', [postId]);
await redis.set(cacheKey, JSON.stringify(post), 'EX', 3600);
return post;
} finally {
await redis.del(lockKey); // Release lock
}
} else {
// Lock not acquired: wait and retry
await new Promise(resolve => setTimeout(resolve, 50));
return getPostWithLock(postId); // Recursive retry
}
}
Key point: SET key value NX EX 10 means "set only if key doesn't exist, auto-delete after 10 seconds." Works as a distributed lock.
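One caveat about the code above: if the DB query outlives the 10-second lock TTL, the DEL in the finally block can delete a lock that another request has since acquired. The standard fix from the Redis distributed-lock docs is a unique token plus an atomic check-and-delete; here's a sketch of how I'd patch it:
const crypto = require('crypto');
const token = crypto.randomUUID(); // unique per lock attempt
const acquired = await redis.set(lockKey, token, 'NX', 'EX', 10);
// ... do the DB work ...
// Release only if we still own the lock (check and delete run atomically in Lua)
await redis.eval(
"if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end",
1,
lockKey,
token
);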
Refresh cache before it actually expires.
async function getPostWithEarlyExpiration(postId) {
const cacheKey = `post:${postId}`;
const result = await redis.get(cacheKey);
if (result) {
const { data, expiresAt } = JSON.parse(result);
const now = Date.now();
const timeToExpire = expiresAt - now;
// Within 10% of expiration time? Probabilistically refresh early
const delta = Math.random() * 3600 * 1000 * 0.1; // 10% of 1 hour
if (timeToExpire < delta) {
// Async refresh (respond with cached data first)
updateCache(postId);
}
return data;
}
// Cache Miss
return updateCache(postId);
}
async function updateCache(postId) {
const post = await db.query('SELECT * FROM posts WHERE id = ?', [postId]);
const expiresAt = Date.now() + 3600 * 1000;
await redis.set(`post:${postId}`, JSON.stringify({ data: post, expiresAt }), 'EX', 3600);
return post;
}
This approach: "Refresh cache just before expiration to reduce users experiencing actual expiration."
Key insight: Cache isn't just data storage, it's a concurrency control tool.
Redis isn't a simple key-value store. It supports various data structures.
Game rankings with DB require running ORDER BY score DESC LIMIT 100 every time. Slow.
Redis Sorted Set manages rankings in O(log N).
// Record scores
await redis.zadd('leaderboard', 9500, 'user:123');
await redis.zadd('leaderboard', 8800, 'user:456');
await redis.zadd('leaderboard', 9200, 'user:789');
// Get top 10 (O(log N + M))
const top10 = await redis.zrevrange('leaderboard', 0, 9, 'WITHSCORES');
// ['user:123', '9500', 'user:789', '9200', 'user:456', '8800']
// Get user's rank (O(log N))
const rank = await redis.zrevrank('leaderboard', 'user:123');
console.log(`Rank: ${rank + 1}`); // Rank: 1
// Get user's score (O(1))
const score = await redis.zscore('leaderboard', 'user:123');
When I implemented this, leaderboard query time dropped from 300ms to 5ms.
Counting daily unique visitors with Set requires storing every ID. 1 million visitors = 1 million IDs stored. Memory intensive.
HyperLogLog uses a probabilistic algorithm to estimate unique counts. 0.81% error rate, fixed 12KB memory.
// Add visitors (O(1))
await redis.pfadd('visitors:2025-09-05', 'user:123');
await redis.pfadd('visitors:2025-09-05', 'user:456');
await redis.pfadd('visitors:2025-09-05', 'user:123'); // Duplicate ignored
// Get unique count (O(1))
const uniqueCount = await redis.pfcount('visitors:2025-09-05');
console.log(`Daily visitors: ${uniqueCount}`); // 2
1 million unique visitors stored in Set requires tens of MB. HyperLogLog? 12KB. Trade precision for efficiency.
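HyperLogLogs also merge. With the daily-key scheme above, PFMERGE (a real Redis command; the weekly key name is just my own convention) rolls daily counters into a weekly unique count without ever storing raw IDs:
// Merge daily HLLs into a weekly unique-visitor estimate (still ~12KB)
await redis.pfmerge(
'visitors:2025-W36',
'visitors:2025-09-01',
'visitors:2025-09-02',
'visitors:2025-09-03'
);
const weeklyUnique = await redis.pfcount('visitors:2025-W36');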
Show user's 10 most recent activities.
// Add activity (newest to left)
await redis.lpush('user:123:activities', JSON.stringify({ type: 'post', id: 456 }));
// Get recent 10
const activities = await redis.lrange('user:123:activities', 0, 9);
// Trim list to 10 items
await redis.ltrim('user:123:activities', 0, 9);
This pattern works great for timeline feeds or notification lists.
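A small refinement I'd add (my own habit, not from the original snippet): run the push and trim in one pipeline, so the list stays bounded without an extra round trip between the two commands.
const pipe = redis.pipeline();
pipe.lpush('user:123:activities', JSON.stringify({ type: 'comment', id: 789 }));
pipe.ltrim('user:123:activities', 0, 9); // keep only the 10 newest
await pipe.exec();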
Biggest fear after deploying Redis to production: "What if Redis dies?"
Sentinel monitors Redis. When master dies, automatically promotes a slave to master.
┌─────────┐ ┌─────────┐
│ Master │────────▶│ Slave 1 │
└─────────┘ └─────────┘
│ ▲
│ │
▼ │
┌─────────┐ ┌─────────┐
│ Slave 2 │ │Sentinel │ (monitoring)
└─────────┘ └─────────┘
Master dies:
┌─────────┐ ┌─────────┐
│ (dead) │ │ Master! │ (promoted)
└─────────┘ └─────────┘
▲
│
┌─────────┐ ┌─────────┐
│ Slave 2 │────────▶│Sentinel │
└─────────┘ └─────────┘
Configuration:
# sentinel.conf
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
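With Sentinel running, clients connect through the Sentinels instead of a fixed master address. A minimal ioredis sketch (hosts and ports are placeholders; "mymaster" matches the monitor name above):
const Redis = require('ioredis');
// ioredis asks the Sentinels who the current master of "mymaster" is,
// and transparently reconnects after a failover promotes a new master
const redis = new Redis({
sentinels: [
{ host: '127.0.0.1', port: 26379 },
{ host: '127.0.0.1', port: 26380 },
],
name: 'mymaster',
});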
The options:
- monitor mymaster 127.0.0.1 6379 2: Monitor the Redis named "mymaster"; the trailing 2 is the quorum, how many Sentinels must agree the master is down.
- down-after-milliseconds 5000: Consider it dead after 5 seconds of no response.
- failover-timeout 60000: Failover timeout of 60 seconds.
Too much data for one Redis instance? Use Cluster. It distributes data across multiple nodes.
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Node 1 │ │ Node 2 │ │ Node 3 │
│ Slot │ │ Slot │ │ Slot │
│ 0-5460 │ │ 5461- │ │ 10923- │
│ │ │ 10922 │ │ 16383 │
└──────────┘ └──────────┘ └──────────┘
Redis hashes keys to slots 0-16383. Each node manages a slot range.
const Redis = require('ioredis');
const cluster = new Redis.Cluster([
{ host: '127.0.0.1', port: 7000 },
{ host: '127.0.0.1', port: 7001 },
{ host: '127.0.0.1', port: 7002 },
]);
await cluster.set('user:123', 'Alice'); // Auto-routes to appropriate node
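One Cluster-specific gotcha: multi-key commands only work when all keys hash to the same slot. Hash tags, the braced part of a key, are Redis's standard mechanism for forcing that; the key names below are my own examples.
// Both keys hash on the tag {user:123}, so they land in the same slot,
// which makes multi-key commands like MSET legal in Cluster mode
await cluster.mset('{user:123}:profile', 'Alice', '{user:123}:settings', '{"theme":"dark"}');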
But Cluster is complex. For small projects, it's overkill. Sentinel + Replication was sufficient for me.
Redis isn't just for caching. It handles real-time messaging too.
// Subscriber
const subscriber = redis.duplicate();
await subscriber.subscribe('chat:room1');
subscriber.on('message', (channel, message) => {
console.log(`[${channel}] ${message}`);
});
// Publisher
await redis.publish('chat:room1', 'Hello, everyone!');
Downside: Messages are volatile. Offline subscribers miss messages.
Streams are a lightweight version of Kafka.
// Add message
await redis.xadd('mystream', '*', 'user', 'Alice', 'action', 'login');
// Read messages (Consumer Group)
await redis.xgroup('CREATE', 'mystream', 'mygroup', '$', 'MKSTREAM');
const messages = await redis.xreadgroup('GROUP', 'mygroup', 'consumer1', 'COUNT', 10, 'STREAMS', 'mystream', '>');
// ACK (processing complete); messageId is the entry ID returned by xreadgroup
await redis.xack('mystream', 'mygroup', messageId);
Streams persist messages to disk, can be read later. Works as background job queue.
---
## Connecting Redis in Node.js (ioredis)
In production, I use the `ioredis` library. More features than the `redis` package.
const Redis = require('ioredis');
const redis = new Redis({
host: '127.0.0.1',
port: 6379,
password: 'your-password',
db: 0,
retryStrategy(times) {
const delay = Math.min(times * 50, 2000);
return delay; // Retry connection
},
maxRetriesPerRequest: 3,
});
redis.on('connect', () => console.log('Redis connected!'));
redis.on('error', (err) => console.error('Redis error:', err));
// Usage example
await redis.set('key', 'value', 'EX', 60);
const value = await redis.get('key');
Connection behavior settings:
const redis = new Redis({
host: '127.0.0.1',
port: 6379,
lazyConnect: true, // Connect only when needed
enableOfflineQueue: false, // Don't queue if disconnected, error immediately
});
Consistent key naming is crucial. I use this convention:
{object}:{id}:{field}
user:123:profile
post:456:views
session:abc123
cache:product:789
Colon (:) separators show hierarchical structure in Redis admin tools.
Keys without TTL cause memory leaks. Set TTL on every key.
// ❌ Bad
await redis.set('user:123', data);
// ✅ Good
await redis.set('user:123', data, 'EX', 3600); // 1 hour
Send multiple commands at once to reduce network round trips (RTT).
// ❌ Slow (3 round trips)
await redis.set('key1', 'value1');
await redis.set('key2', 'value2');
await redis.set('key3', 'value3');
// ✅ Fast (1 round trip)
const pipeline = redis.pipeline();
pipeline.set('key1', 'value1');
pipeline.set('key2', 'value2');
pipeline.set('key3', 'value3');
await pipeline.exec();
Use redis-cli INFO command to check memory, CPU, hit rate.
redis-cli INFO stats
Key metrics:
- used_memory_human: Memory usage.
- evicted_keys: Number of evicted keys.
- keyspace_hits / keyspace_misses: Cache hit rate.
Hit rate = hits / (hits + misses) * 100
Good hit rate: 90%+. Below 50%? Reconsider your caching strategy.
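A tiny helper makes that check scriptable (a sketch; the parsing is naive and assumes the INFO stats field names listed above):
async function cacheHitRate() {
const stats = await redis.info('stats'); // ioredis returns INFO as one string
const metric = (name) => Number(stats.match(new RegExp(`${name}:(\\d+)`))[1]);
const hits = metric('keyspace_hits');
const misses = metric('keyspace_misses');
return (hits / (hits + misses)) * 100; // NaN until there has been any traffic
}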
To wrap up, here's what I've come to understand while digging into Redis.
Redis is more than a cache. Data structure store, message broker, session manager, distributed lock... it's remarkably versatile.
There's no single right caching strategy. Read-heavy? Cache-Aside. Consistency critical? Write-Through. Write performance critical and loss tolerable? Write-Behind. It depends on the situation.
Memory management is central. Tune LRU, LFU, and TTLs well to use memory efficiently.
For persistence, the RDB + AOF combination is best. RDB for fast backups, AOF for data safety.
Single-threading is a double-edged sword. It guarantees atomicity, but one O(N) command freezes everything.
Always keep the Thundering Herd problem in mind. The DB can blow up the moment a cache entry expires. Counter it with locking or early expiration.
Use data structures like Sorted Set and HyperLogLog. Problems like leaderboards and unique counts become simple.
For high availability, use Sentinel or Cluster. But they're overkill for small projects.
The biggest lesson from all of this: used well, Redis saves your system; used badly, it makes the system more complex.
I hope these notes help someone.
I first encountered Redis while studying what happens when DB queries slow down under load. The pattern is well-documented: as traffic grows, database load grows linearly, and eventually response times hit a wall.
"Caching dramatically improves performance" — I'd heard that many times. But I didn't really understand the mechanics, or which strategy to use. I treated Redis as just a "fast key-value store," nothing more.
Digging deeper, it turned out to be much richer than that. Different caching strategies create different tradeoffs between consistency and performance. One eviction policy change can fundamentally alter how a service behaves under memory pressure. Real-world case studies commonly show response time differences of tens of times or more between cached and uncached queries.
Cache-Aside vs Write-Through, Thundering Herd problems, how LRU actually works, RDB vs AOF tradeoffs, why single-threaded architecture is both Redis's strength and weakness — this post is my attempt to organize what I've learned.
From the CPU's perspective, disk storage (HDD or SSD) is a turtle. Hard drives have mechanical arms that need to move physically. SSDs need electrical processing to read from NAND flash memory. Disk I/O operates in milliseconds (ms).
RAM, on the other hand, operates in microseconds (μs). That's 1,000 times faster. CPU access to RAM happens in nanoseconds (ns), but including memory fetch time, we're looking at μs range.
But RAM is volatile. Power off, data gone. And it's expensive. A 1TB SSD costs around $100. 1TB of RAM? Several thousand dollars.
Redis combines the speed of RAM with disk persistence. It's an In-Memory Data Structure Store. My perception of Redis as "just a fast cache" completely changed when I learned it supports not just strings, but complex data structures: Lists, Sets, Sorted Sets, Hashes, HyperLogLog.
This wasn't just a cache. This was a database living in memory. An incredibly fast one.
When you put Redis in front of a database, you need to decide how to read and write data. At first, I thought "just read from DB if Redis doesn't have it." Turns out it's more complex than that.
This is the most common pattern. Industry standard.
async function getUser(userId) {
// 1. Check Redis first
const cached = await redis.get(`user:${userId}`);
if (cached) {
return JSON.parse(cached); // Cache Hit!
}
// 2. Not there? Query database
const user = await db.query('SELECT * FROM users WHERE id = ?', [userId]);
// 3. Save to Redis for next time
await redis.set(`user:${userId}`, JSON.stringify(user), 'EX', 3600); // 1 hour TTL
return user;
}
How it works:
When I first used this pattern, I missed something important: the Cold Start problem. After server restart, Redis is empty. Every request hits the database. Sudden massive DB load.
The solution is Cache Warming. Pre-load frequently accessed data when the server starts.
async function warmUpCache() {
console.log('Warming up cache...');
const popularUsers = await db.query('SELECT * FROM users ORDER BY visit_count DESC LIMIT 100');
for (const user of popularUsers) {
await redis.set(`user:${user.id}`, JSON.stringify(user), 'EX', 3600);
}
console.log('Cache warmed!');
}
Add this to your startup script. Cold Start problem mostly solved.
When writing data, write to both Redis and DB simultaneously.
async function updateUser(userId, data) {
// 1. Write to DB
await db.query('UPDATE users SET name = ? WHERE id = ?', [data.name, userId]);
// 2. Write to Redis
await redis.set(`user:${userId}`, JSON.stringify(data), 'EX', 3600);
return data;
}
Pros:
I tried Write-Through initially but gave up. When users update their profiles, you write to both Redis and DB. What if Redis temporarily goes down? Do you rollback the transaction?
Handling this properly requires distributed transaction mechanisms like Two-Phase Commit. Too complex. I went back to Cache-Aside and used short TTLs to mitigate stale data issues.
More aggressive strategy. Write to Redis first, sync to DB later.
async function updateUserScore(userId, score) {
// 1. Write to Redis immediately
await redis.set(`user:${userId}:score`, score);
// 2. Queue for later DB sync
await queue.add('sync-score', { userId, score });
}
// Separate worker handles syncing
async function syncWorker() {
const jobs = await queue.getJobs();
for (const job of jobs) {
await db.query('UPDATE users SET score = ? WHERE id = ?', [job.data.score, job.data.userId]);
}
}
Pros:
This pattern works when you can tolerate some data loss. Real-time leaderboards, for example. Game scores being a few seconds behind in the database is acceptable.
My takeaway: There's no one-size-fits-all caching strategy. Read-heavy? Cache-Aside. Consistency critical? Write-Through. Write performance matters and you can tolerate loss? Write-Behind.
RAM is finite. When Redis hits maxmemory, something has to go. But what?
Initially, I didn't understand this properly. I thought "just remove old stuff first" would work. But operating a production service taught me this is more nuanced.
EXPIRE set (TTL).I started with allkeys-lru, but ran into problems when using Redis for session management. Login session data was getting evicted due to memory pressure. Users randomly got logged out.
The solution: Separate Redis instances for different purposes. Session Redis with noeviction and adequate memory. Cache Redis with allkeys-lru for efficient memory management.
Lesson learned: Don't use one Redis instance for everything. Separate by purpose.
LRU shows up constantly in coding interviews and system design discussions. Implementing it yourself makes the concept click much faster than just reading about it.
Core idea:
JavaScript's Map remembers insertion order, perfect for LRU implementation.
class LRUCache {
constructor(capacity) {
this.capacity = capacity;
this.map = new Map(); // Remembers insertion order!
}
get(key) {
if (!this.map.has(key)) return -1;
// Key point: access happened, so delete and re-insert to make it "recent"
const value = this.map.get(key);
this.map.delete(key);
this.map.set(key, value);
return value;
}
put(key, value) {
if (this.map.has(key)) {
// Existing key: delete first to update order
this.map.delete(key);
} else if (this.map.size >= this.capacity) {
// Over capacity: delete oldest (first) key
const firstKey = this.map.keys().next().value;
this.map.delete(firstKey);
}
this.map.set(key, value);
}
}
// Test
const cache = new LRUCache(2);
cache.put(1, 'A'); // {1: 'A'}
cache.put(2, 'B'); // {1: 'A', 2: 'B'}
console.log(cache.get(1)); // 'A' - {2: 'B', 1: 'A'} (1 moved to recent)
cache.put(3, 'C'); // Over capacity! Remove 2 -> {1: 'A', 3: 'C'}
console.log(cache.get(2)); // -1 (gone)
In Redis configuration:
# redis.conf
maxmemory 256mb
maxmemory-policy allkeys-lru
Bottom line: LRU exploits temporal locality. Recently accessed data is likely to be accessed again soon. When this concept clicked, I also understood CPU cache memory behavior.
"Redis is in-memory, so data disappears when you power off, right?"
Half true, half false. Redis can back up data to disk. Two methods.
Periodically (e.g., every hour, or after 1000 key changes) saves entire memory to a .rdb file like taking a photo.
# redis.conf
save 900 1 # Save if 1+ keys changed in 900 seconds (15 min)
save 300 10 # Save if 10+ keys changed in 300 seconds (5 min)
save 60 10000 # Save if 10000+ keys changed in 60 seconds
Pros:
.rdb file.fork(), temporarily doubling memory usage.I used RDB-only at first and experienced data loss. Configured 15-minute snapshots, then the server crashed. Lost 14 minutes of data. Thousands of user action logs gone.
Lesson learned: RDB isn't perfect backup.
Logs every write command (SET, DEL, INCR) sequentially to .aof file. On restart, replays this log from the beginning to restore data.
# redis.conf
appendonly yes
appendfsync everysec # Flush to disk every second
appendfsync options:
always: Flush after every write command. Slow but safe.everysec: Flush every second. Balanced performance and safety.no: Let OS handle flushing. Fast but risky.everysec mode).AOF Rewrite compacts redundant commands. Example:
SET user:1 "Alice"
SET user:1 "Bob"
SET user:1 "Charlie"
After rewrite:
SET user:1 "Charlie"
Configuration:
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
Meaning: "Auto-rewrite when AOF exceeds 64MB and is 100% larger than previous size."
After experimenting, I concluded: Enable both.
# RDB: periodic backups
save 900 1
save 300 10
save 60 10000
# AOF: real-time log
appendonly yes
appendfsync everysec
# AOF Rewrite
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
Result:
On restart, Redis prefers AOF. If AOF doesn't exist, reads RDB.
Conclusion: RDB for backup, AOF for recovery. Both needed.
"Redis is single-threaded, so it guarantees atomicity."
When I first heard this, I thought "single-threaded must be slow." But after using it in production, I understood: single-threading is Redis's greatest strength and weakness.
Redis processes all commands sequentially in one thread. Multiple clients send commands simultaneously? Redis queues them and processes one at a time.
Example: implementing a view counter.
// Normal approach (Race Condition possible)
const views = await db.query('SELECT views FROM posts WHERE id = ?', [postId]);
await db.query('UPDATE posts SET views = ? WHERE id = ?', [views + 1, postId]);
This code has problems. Two users access simultaneously:
views = 100.views = 100.views = 101.views = 101.Result: Should increment by 2, only incremented by 1.
Redis's INCR command solves this.
await redis.incr(`post:${postId}:views`);
INCR is atomic. Read and write happen as one command, no Race Condition.
I experienced this firsthand. Built a limited coupon issuance feature. DB implementation kept having concurrency issues. Switched to Redis, problems vanished.
async function issueCoupon(userId) {
const remaining = await redis.decr('coupon:remaining');
if (remaining < 0) {
await redis.incr('coupon:remaining'); // Rollback
return { success: false, message: 'Coupons sold out.' };
}
await redis.sadd('coupon:issued', userId);
return { success: true };
}
DECR is also atomic, so even with 1000 simultaneous requests, it processes in strict order.
The scary part of single-threading: one slow command blocks all subsequent commands.
My mistake: I ran this command in production for debugging.
redis-cli KEYS *
Production Redis had 1 million keys. This command ran for 5 seconds. During those 5 seconds, every API request froze. Users stared at loading spinners.
Commands to Never Use:KEYS *: Lists all keys. O(N).FLUSHALL: Deletes all data. O(N).SMEMBERS large-set: Fetches entire large Set. O(N).HGETALL large-hash: Fetches entire large Hash. O(N).SCAN: Cursor-based incremental iteration. O(1) per call.SSCAN, HSCAN: SCAN for Sets and Hashes.// ❌ Bad: blocking
const keys = await redis.keys('user:*');
// ✅ Good: non-blocking
let cursor = '0';
const keys = [];
do {
const result = await redis.scan(cursor, 'MATCH', 'user:*', 'COUNT', 100);
cursor = result[0];
keys.push(...result[1]);
} while (cursor !== '0');
SCAN fetches 100 keys at a time, allowing other commands to execute between iterations. No blocking.
Summary: Redis's single-threading is a double-edged sword. Guarantees atomicity, but heavy commands freeze everything.
This was one of my worst experiences.
I was caching view counts for a popular post. Set TTL to 1 hour. Exactly 1 hour later when the cache expired, 1000 users simultaneously viewed that post.
What happened?
This is called the Thundering Herd Problem or Cache Stampede. When cache expires, thousands of requests stampede the database.
Only the first request goes to DB, others wait.
async function getPostWithLock(postId) {
const cacheKey = `post:${postId}`;
const lockKey = `lock:${cacheKey}`;
// 1. Check cache
const cached = await redis.get(cacheKey);
if (cached) return JSON.parse(cached);
// 2. Try acquiring lock (10 second TTL)
const acquired = await redis.set(lockKey, '1', 'NX', 'EX', 10);
if (acquired) {
// Lock acquired: access DB
try {
const post = await db.query('SELECT * FROM posts WHERE id = ?', [postId]);
await redis.set(cacheKey, JSON.stringify(post), 'EX', 3600);
return post;
} finally {
await redis.del(lockKey); // Release lock
}
} else {
// Lock not acquired: wait and retry
await new Promise(resolve => setTimeout(resolve, 50));
return getPostWithLock(postId); // Recursive retry
}
}
Key point: SET key value NX EX 10 means "set only if key doesn't exist, auto-delete after 10 seconds." Works as a distributed lock.
Refresh cache before it actually expires.
async function getPostWithEarlyExpiration(postId) {
const cacheKey = `post:${postId}`;
const result = await redis.get(cacheKey);
if (result) {
const { data, expiresAt } = JSON.parse(result);
const now = Date.now();
const timeToExpire = expiresAt - now;
// Within 10% of expiration time? Probabilistically refresh early
const delta = Math.random() * 3600 * 1000 * 0.1; // 10% of 1 hour
if (timeToExpire < delta) {
// Async refresh (respond with cached data first)
updateCache(postId);
}
return data;
}
// Cache Miss
return updateCache(postId);
}
async function updateCache(postId) {
const post = await db.query('SELECT * FROM posts WHERE id = ?', [postId]);
const expiresAt = Date.now() + 3600 * 1000;
await redis.set(`post:${postId}`, JSON.stringify({ data: post, expiresAt }), 'EX', 3600);
return post;
}
This approach: "Refresh cache just before expiration to reduce users experiencing actual expiration."
Key insight: Cache isn't just data storage, it's a concurrency control tool.
Redis isn't a simple key-value store. It supports various data structures.
Game rankings with DB require running ORDER BY score DESC LIMIT 100 every time. Slow.
Redis Sorted Set manages rankings in O(log N).
// Record scores
await redis.zadd('leaderboard', 9500, 'user:123');
await redis.zadd('leaderboard', 8800, 'user:456');
await redis.zadd('leaderboard', 9200, 'user:789');
// Get top 10 (O(log N + M))
const top10 = await redis.zrevrange('leaderboard', 0, 9, 'WITHSCORES');
// ['user:123', '9500', 'user:789', '9200', 'user:456', '8800']
// Get user's rank (O(log N))
const rank = await redis.zrevrank('leaderboard', 'user:123');
console.log(`Rank: ${rank + 1}`); // Rank: 1
// Get user's score (O(1))
const score = await redis.zscore('leaderboard', 'user:123');
When I implemented this, leaderboard query time dropped from 300ms to 5ms.
Counting daily unique visitors with Set requires storing every ID. 1 million visitors = 1 million IDs stored. Memory intensive.
HyperLogLog uses a probabilistic algorithm to estimate unique counts. 0.81% error rate, fixed 12KB memory.
// Add visitors (O(1))
await redis.pfadd('visitors:2025-09-05', 'user:123');
await redis.pfadd('visitors:2025-09-05', 'user:456');
await redis.pfadd('visitors:2025-09-05', 'user:123'); // Duplicate ignored
// Get unique count (O(1))
const uniqueCount = await redis.pfcount('visitors:2025-09-05');
console.log(`Daily visitors: ${uniqueCount}`); // 2
1 million unique visitors stored in Set requires tens of MB. HyperLogLog? 12KB. Trade precision for efficiency.
Show user's 10 most recent activities.
// Add activity (newest to left)
await redis.lpush('user:123:activities', JSON.stringify({ type: 'post', id: 456 }));
// Get recent 10
const activities = await redis.lrange('user:123:activities', 0, 9);
// Trim list to 10 items
await redis.ltrim('user:123:activities', 0, 9);
This pattern works great for timeline feeds or notification lists.
Biggest fear after deploying Redis to production: "What if Redis dies?"
Sentinel monitors Redis. When master dies, automatically promotes a slave to master.
┌─────────┐ ┌─────────┐
│ Master │────────▶│ Slave 1 │
└─────────┘ └─────────┘
│ ▲
│ │
▼ │
┌─────────┐ ┌─────────┐
│ Slave 2 │ │Sentinel │ (monitoring)
└─────────┘ └─────────┘
Master dies:
┌─────────┐ ┌─────────┐
│ (dead) │ │ Master! │ (promoted)
└─────────┘ └─────────┘
▲
│
┌─────────┐ ┌─────────┐
│ Slave 2 │────────▶│Sentinel │
└─────────┘ └─────────┘
Configuration:
# sentinel.conf
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
monitor mymaster: Monitor Redis named "mymaster".down-after-milliseconds 5000: Consider dead after 5 seconds no response.failover-timeout 60000: Failover timeout 60 seconds.Too much data for one Redis instance? Use Cluster. Distributes data across multiple nodes.
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Node 1 │ │ Node 2 │ │ Node 3 │
│ Slot │ │ Slot │ │ Slot │
│ 0-5460 │ │ 5461- │ │ 10923- │
│ │ │ 10922 │ │ 16383 │
└──────────┘ └──────────┘ └──────────┘
Redis hashes keys to slots 0-16383. Each node manages a slot range.
const Redis = require('ioredis');
const cluster = new Redis.Cluster([
{ host: '127.0.0.1', port: 7000 },
{ host: '127.0.0.1', port: 7001 },
{ host: '127.0.0.1', port: 7002 },
]);
await cluster.set('user:123', 'Alice'); // Auto-routes to appropriate node
But Cluster is complex. For small projects, it's overkill. Sentinel + Replication was sufficient for me.
Redis isn't just for caching. It handles real-time messaging too.
// Subscriber
const subscriber = redis.duplicate();
await subscriber.subscribe('chat:room1');
subscriber.on('message', (channel, message) => {
console.log(`[${channel}] ${message}`);
});
// Publisher
await redis.publish('chat:room1', 'Hello, everyone!');
Downside: Messages are volatile. Offline subscribers miss messages.
Streams are a lightweight version of Kafka.
�CB63�
Streams persist messages to disk, can be read later. Works as background job queue.
In production, I use the ioredis library. More features than the redis package.
const Redis = require('ioredis');
const redis = new Redis({
host: '127.0.0.1',
port: 6379,
password: 'your-password',
db: 0,
retryStrategy(times) {
const delay = Math.min(times * 50, 2000);
return delay; // Retry connection
},
maxRetriesPerRequest: 3,
});
redis.on('connect', () => console.log('Redis connected!'));
redis.on('error', (err) => console.error('Redis error:', err));
// Usage example
await redis.set('key', 'value', 'EX', 60);
const value = await redis.get('key');
Connection Pool settings:
const redis = new Redis({
host: '127.0.0.1',
port: 6379,
lazyConnect: true, // Connect only when needed
enableOfflineQueue: false, // Don't queue if disconnected, error immediately
});
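One more production habit: shut the connection down cleanly so pending commands finish instead of failing mid-deploy. A minimal sketch:

// quit() sends QUIT and resolves after pending replies arrive;
// disconnect() would close the socket immediately instead.
process.on('SIGTERM', async () => {
  await redis.quit();
  process.exit(0);
});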
Consistent key naming is crucial. I use this convention:
{object}:{id}:{field}
user:123:profile
post:456:views
session:abc123
cache:product:789
Colon (:) separators show hierarchical structure in Redis admin tools.
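A tiny helper keeps the convention from drifting across a codebase. A sketch (the function is mine, not a library API):

// Hypothetical builder for the {object}:{id}:{field} convention.
function key(object, id, field) {
  return field ? `${object}:${id}:${field}` : `${object}:${id}`;
}
key('user', 123, 'profile'); // 'user:123:profile'
key('session', 'abc123');    // 'session:abc123'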
Keys without TTL cause memory leaks. Set TTL on every key.
// ❌ Bad
await redis.set('user:123', data);
// ✅ Good
await redis.set('user:123', data, 'EX', 3600); // 1 hour
Send multiple commands at once to reduce network round trips (RTT).
// ❌ Slow (3 round trips)
await redis.set('key1', 'value1');
await redis.set('key2', 'value2');
await redis.set('key3', 'value3');
// ✅ Fast (1 round trip)
const pipeline = redis.pipeline();
pipeline.set('key1', 'value1');
pipeline.set('key2', 'value2');
pipeline.set('key3', 'value3');
await pipeline.exec();
Use the redis-cli INFO command to check memory, CPU, and hit rate.
redis-cli INFO stats
Key metrics:
used_memory_human: Memory usage.
evicted_keys: Number of keys evicted so far.
keyspace_hits / keyspace_misses: Cache hit and miss counters.

Hit rate = hits / (hits + misses) * 100
Good hit rate: 90%+. Below 50%? Reconsider your caching strategy.
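To track this programmatically instead of eyeballing INFO output, you can parse the counters from the stats section. A rough sketch (the parsing is mine; INFO returns a plain-text blob of key:value lines):

// Compute the cache hit rate from INFO stats counters.
async function cacheHitRate() {
  const stats = await redis.info('stats');
  const num = (name) => Number((stats.match(new RegExp(`${name}:(\\d+)`)) || [])[1] || 0);
  const hits = num('keyspace_hits');
  const misses = num('keyspace_misses');
  return hits + misses === 0 ? 0 : (hits / (hits + misses)) * 100;
}
console.log(`Hit rate: ${(await cacheHitRate()).toFixed(1)}%`);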
To wrap up, here is what I came to understand while digging into Redis.
Redis is more than a cache. Data structure store, message broker, session manager, distributed lock... it is remarkably versatile.
There is no single right caching strategy. Read-heavy? Cache-Aside. Consistency critical? Write-Through. Write performance first? Write-Behind. It depends on the situation.
Memory management is the core. Tune LRU, LFU, and TTL well to use memory efficiently.
For persistence, the RDB + AOF combination is best. RDB for fast backups, AOF for data safety.
Single-threading is a double-edged sword. It guarantees atomicity, but one O(N) command freezes everything.
Always keep the Thundering Herd problem in mind. The database can get crushed the moment a hot cache entry expires. Mitigate it with locking or early expiration.
Make use of data structures like Sorted Set and HyperLogLog. They solve problems like leaderboards and unique counts with almost no code.
If you need high availability, use Sentinel or Cluster. For small projects, though, they are overkill.
The biggest lesson that stuck with me: used correctly, Redis saves your system; used incorrectly, it makes the system more complex.
I hope these notes help someone.