# Redis Performance Benchmarking Guide

## Overview
This guide helps you verify and measure the performance improvements achieved with Redis caching.
## Quick Performance Check

### 1. Monitor Cache Hits in Real-time
```bash
# Terminal 1: Start monitoring Redis
redis-cli MONITOR

# Terminal 2: Make API calls
curl http://localhost:3000/api/admin/setup
curl http://localhost:3000/api/admin/setup  # Second call should be instant

# Terminal 1: You should see the Redis operations, e.g.
# GET admin:setup (cache hit)
```
### 2. Check Cache Statistics
```bash
# Get Redis stats
redis-cli INFO stats

# Key metrics:
# - total_commands_processed: Total Redis commands
# - instantaneous_ops_per_sec: Operations per second
# - keyspace_hits: Successful cache hits
# - keyspace_misses: Cache misses (DB queries needed)

# Calculate hit rate
redis-cli INFO stats | grep "keyspace"

# Output example:
# keyspace_hits:1000 (cached responses)
# keyspace_misses:200 (database queries)
# Hit rate = 1000 / (1000 + 200) ≈ 83%
```
### 3. List All Cached Data
```bash
# See all cache keys
redis-cli KEYS "*"

# See specific cache types
redis-cli KEYS "session:*"    # All sessions
redis-cli KEYS "user:*"       # All cached users
redis-cli KEYS "admin:setup"  # Admin configuration
redis-cli KEYS "webinar:*"    # Webinar data

# Get cache size in memory
redis-cli INFO memory
# Look for "used_memory_human" for actual usage
```
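Note that `KEYS` scans the whole keyspace in one blocking call; it is fine on a dev box but can stall a busy instance. A non-blocking sketch of the same lookups using redis-cli's cursor-based `--scan`:

```bash
# Cursor-based SCAN does the same job without blocking the server
redis-cli --scan --pattern "session:*"       # All sessions
redis-cli --scan --pattern "user:*" | wc -l  # Count cached users
```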
## Performance Testing Steps

### Test 1: Admin Setup Page Performance
```bash
# Baseline (without cache - first request)
time curl http://localhost:3000/api/admin/setup
# Example output without cache:
# real 0m0.234s (234ms)

# With cache (second request)
time curl http://localhost:3000/api/admin/setup
# Example output with cache:
# real 0m0.005s (5ms)

# Performance improvement: ~98% faster
```
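`time curl` includes process startup in its measurement. If you want the request time alone, curl can report its own totals; a minimal sketch using its `-w` write-out format:

```bash
# time_total is the full transfer time in seconds, measured by curl itself
curl -s -o /dev/null -w "total: %{time_total}s\n" http://localhost:3000/api/admin/setup
```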
### Test 2: Session Lookup Performance
```bash
# Log in to get a session
curl -c cookies.txt -X POST http://localhost:3000/api/auth/signin \
  -H "Content-Type: application/json" \
  -d '{"email":"user@example.com","password":"password"}'

# Measure session verification (first call - cache miss)
time curl -b cookies.txt http://localhost:3000/api/auth/me
# Example: ~100-150ms

# Measure session verification (subsequent calls - cache hits)
time curl -b cookies.txt http://localhost:3000/api/auth/me
time curl -b cookies.txt http://localhost:3000/api/auth/me
time curl -b cookies.txt http://localhost:3000/api/auth/me
# Example: ~5-10ms each
```
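Single-shot timings are noisy; averaging a handful of cache-hit calls gives a steadier number. A small sketch, assuming the same `cookies.txt` from the login above:

```bash
# Average time_total (seconds) over 10 cached lookups
for i in {1..10}; do
  curl -s -o /dev/null -b cookies.txt \
    -w "%{time_total}\n" http://localhost:3000/api/auth/me
done | awk '{sum += $1} END {printf "avg: %.1f ms\n", sum / NR * 1000}'
```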
### Test 3: Database Query Reduction
```bash
# Get baseline stats
redis-cli INFO stats > stats_before.txt

# Make 100 API calls
for i in {1..100}; do
  curl http://localhost:3000/api/admin/setup
done

# Get updated stats
redis-cli INFO stats > stats_after.txt

# Compare
diff stats_before.txt stats_after.txt

# Calculate improvements:
# - Before: 100 database queries (every request hits the DB)
# - After: ~2 database queries (the cache serves the other ~98)
# - Improvement: ~98% reduction in database load
```
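Reading the raw `diff` output is tedious; you can pull the hit delta out directly. A sketch (the `tr` strips the `\r` that `redis-cli INFO` appends to each line):

```bash
before=$(grep keyspace_hits stats_before.txt | cut -d: -f2 | tr -d '\r')
after=$(grep keyspace_hits stats_after.txt | cut -d: -f2 | tr -d '\r')
echo "Cache hits during the test: $((after - before))"
```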
### Test 4: Concurrent User Performance
```bash
# Install Apache Bench (if not installed)
# macOS: brew install httpd
# Linux: apt-get install apache2-utils

# Test with 10 concurrent users, 100 requests total
ab -n 100 -c 10 http://localhost:3000/api/admin/setup

# Key metrics in the output:
# Requests per second: [RPS]  <- Higher is better
# Time per request: [ms]      <- Lower is better
# Failed requests: 0          <- Should be zero

# Expected results with cache:
# Requests per second: 150-300 (vs 20-50 without cache)
# Time per request: 30-50ms (vs 200-500ms without cache)
```
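If Apache Bench isn't available, hey (https://github.com/rakyll/hey) takes the same request-count and concurrency flags and reports comparable numbers:

```bash
# 100 requests, 10 concurrent workers
hey -n 100 -c 10 http://localhost:3000/api/admin/setup
```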
## Cache Hit Rate Analysis

### Calculate Hit Rate
```bash
# Get stats
STATS=$(redis-cli INFO stats)

# Extract hit and miss counts (tr strips the \r that redis-cli emits)
HITS=$(echo "$STATS" | grep "keyspace_hits" | cut -d: -f2 | tr -d '\r')
MISSES=$(echo "$STATS" | grep "keyspace_misses" | cut -d: -f2 | tr -d '\r')

# Calculate rate (shell integer math)
TOTAL=$((HITS + MISSES))
if [ "$TOTAL" -gt 0 ]; then
  RATE=$((HITS * 100 / TOTAL))
  echo "Cache Hit Rate: ${RATE}%"
  echo "Hits: $HITS"
  echo "Misses: $MISSES"
fi
```
### Interpret Results
| Hit Rate | Performance | Action |
|---|---|---|
| 90%+ | Excellent | No action needed |
| 75-90% | Good | Monitor, consider increasing TTL |
| 50-75% | Fair | Review cache keys, optimize patterns |
| <50% | Poor | Check Redis connection, review cache strategy |
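To apply the table automatically, a small sketch that buckets the live hit rate (integer math, so rates truncate to whole percent):

```bash
HITS=$(redis-cli INFO stats | grep keyspace_hits | cut -d: -f2 | tr -d '\r')
MISSES=$(redis-cli INFO stats | grep keyspace_misses | cut -d: -f2 | tr -d '\r')
TOTAL=$((HITS + MISSES))
RATE=$(( TOTAL > 0 ? HITS * 100 / TOTAL : 0 ))
if   [ "$RATE" -ge 90 ]; then echo "Hit rate ${RATE}%: Excellent"
elif [ "$RATE" -ge 75 ]; then echo "Hit rate ${RATE}%: Good"
elif [ "$RATE" -ge 50 ]; then echo "Hit rate ${RATE}%: Fair"
else                          echo "Hit rate ${RATE}%: Poor"
fi
```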
## Memory Usage Analysis
```bash
# Check Redis memory
redis-cli INFO memory

# Key values:
# used_memory: Bytes used
# used_memory_human: Human-readable format
# used_memory_peak: Peak memory used
# maxmemory: Max allowed memory (0 = unlimited)
# mem_fragmentation_ratio: Should be < 1.5

# Set a memory limit (optional)
# redis-cli CONFIG SET maxmemory 512mb
# redis-cli CONFIG SET maxmemory-policy allkeys-lru
```
## Load Testing Scenarios

### Scenario 1: Peak Hour Traffic
```bash
# Simulate peak hour (1000 requests in ~10 seconds)
for i in {1..1000}; do
  curl -s -o /dev/null http://localhost:3000/api/admin/setup &
  if [ $((i % 50)) -eq 0 ]; then
    sleep 0.5
  fi
done
wait

# Monitor during the test (in a separate terminal; MONITOR blocks)
redis-cli MONITOR
redis-cli INFO stats

# Expected: high RPS, low latency, no errors
```
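The backgrounded loop above produces bursty, uncontrolled concurrency. If you want a steady worker count instead, `xargs -P` is a simple alternative:

```bash
# Hold a constant 50 concurrent requests until 1000 have been sent
seq 1 1000 | xargs -P 50 -I{} curl -s -o /dev/null http://localhost:3000/api/admin/setup
```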
### Scenario 2: User Login Spike
```bash
# Simulate a login spike
for i in {1..100}; do
  curl -X POST http://localhost:3000/api/auth/signin \
    -H "Content-Type: application/json" \
    -d "{\"email\":\"user$i@example.com\",\"password\":\"pass\"}" &
done
wait

# Check the session cache
redis-cli KEYS "session:*" | wc -l
# Should show ~100 cached sessions
```
### Scenario 3: Configuration Updates
```bash
# Terminal 1: Watch cache invalidation
redis-cli MONITOR

# Terminal 2: Update the admin setup
curl -X POST http://localhost:3000/api/admin/setup \
  -H "Content-Type: application/json" \
  -d '{"pagination":{"itemsPerPage":20}}'

# In the monitor you should see:
# DEL admin:setup (cache invalidation)
# then a fresh cache write on the next GET
```
## Performance Bottlenecks

### Identify Slow Operations
```bash
# Enable the Redis slowlog (threshold is in microseconds)
redis-cli CONFIG SET slowlog-log-slower-than 10000  # 10ms
redis-cli CONFIG SET slowlog-max-len 128

# View slow commands
redis-cli SLOWLOG GET 10

# Look for:
# - O(N) operations on large datasets
# - KEYS pattern matching
# - Large value sizes
```
### Find Memory Leaks
```bash
# Monitor memory growth
redis-cli INFO memory | grep used_memory_human

# Run for a while (e.g., an hour), then check again
redis-cli INFO memory | grep used_memory_human

# If memory grows without bound:
# 1. Check for keys missing a TTL
# 2. Verify cache invalidation is firing
# 3. Review cache key patterns
# 4. Use FLUSHALL to reset (dev only)
```
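Keys written without an expiry are the usual leak; `TTL` returns `-1` for a key that never expires, so you can sweep for them. A sketch (SCAN-based, so it won't block a busy instance, though it does issue one TTL call per key):

```bash
redis-cli --scan | while read -r key; do
  if [ "$(redis-cli TTL "$key")" = "-1" ]; then
    echo "no TTL: $key"
  fi
done
```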
## Optimization Recommendations

### Based on Observed Metrics

**If hit rate < 90%:**

- Increase TTL for frequently accessed data
- Check cache key patterns
- Verify cache invalidation isn't too aggressive

**If memory usage > 80% of the limit** (the sketch after this list shows how to check):

- Implement an eviction policy (LRU)
- Reduce TTL values
- Remove unused cache keys

**If response time > 50ms:**

- Verify Redis is on the same network/machine
- Check Redis memory pressure
- Monitor CPU usage
- Consider a Redis cluster at scale
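A quick way to run that memory check; a `maxmemory` of 0 means no limit is configured:

```bash
USED=$(redis-cli INFO memory | grep '^used_memory:' | cut -d: -f2 | tr -d '\r')
MAX=$(redis-cli INFO memory | grep '^maxmemory:' | cut -d: -f2 | tr -d '\r')
if [ "$MAX" -gt 0 ]; then
  echo "Memory usage: $((USED * 100 / MAX))% of maxmemory"
else
  echo "maxmemory is unset (unlimited)"
fi
```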
### Cache Key Strategy
```bash
# Good cache keys (organized by feature)
session:abc123
user:user-id-123
admin:setup
webinar:webinar-id-456
webinars:list:page-1

# Monitor the key space
redis-cli --bigkeys  # Find the largest keys
redis-cli --scan     # Iterate over all keys
redis-cli DBSIZE     # Total keys in the DB
```
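A rough per-feature breakdown of the keyspace, assuming the prefixes listed above (SCAN-based, so safe on a live instance):

```bash
for prefix in session user admin webinar webinars; do
  echo "$prefix: $(redis-cli --scan --pattern "${prefix}:*" | wc -l) keys"
done
```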
## Monitoring Commands Reference
```bash
# Real-time monitoring
redis-cli MONITOR         # All commands in real time
redis-cli INFO            # All stats and info
redis-cli INFO stats      # Stats only

# Performance metrics
redis-cli SLOWLOG GET 10           # 10 slowest commands
redis-cli LATENCY LATEST           # Latest latency samples
redis-cli LATENCY HISTORY command  # Latency history for an event
# (LATENCY commands need latency monitoring enabled, e.g.:
#  redis-cli CONFIG SET latency-monitor-threshold 100)

# Memory analysis
redis-cli INFO memory     # Memory breakdown
redis-cli --bigkeys       # Largest keys
redis-cli MEMORY STATS    # Memory by allocation

# Cache analysis
redis-cli KEYS "*"        # All cache keys (blocking; dev only)
redis-cli SCAN 0          # Scan keys incrementally (non-blocking)
redis-cli TTL key         # Check remaining TTL
redis-cli EXPIRE key 3600 # Set a new expiration

# Debugging
redis-cli PING            # Test connection
redis-cli ECHO "test"     # Echo test
redis-cli SELECT 0        # Select database
redis-cli FLUSHDB         # Clear current DB (dev only)
redis-cli FLUSHALL        # Clear all DBs (dev only)
```
## Troubleshooting Performance Issues

### Issue: Cache Not Improving Performance

**Diagnostics:**
```bash
# Check that Redis is being used
redis-cli MONITOR
curl http://localhost:3000/api/admin/setup
# You should see a GET admin:setup command

# Check cache hits
redis-cli INFO stats | grep keyspace
# Hits should be increasing
```
**Solutions:**

- Verify the Redis connection: `redis-cli PING`
- Check the TTL: `redis-cli TTL admin:setup`
- Review cache keys: `redis-cli KEYS "admin:*"`
- Check memory: `redis-cli INFO memory`
### Issue: High Memory Usage

**Diagnostics:**
```bash
redis-cli INFO memory
redis-cli --bigkeys       # Find large keys
redis-cli --scan | wc -l  # Count keys
```
**Solutions:**

- Implement a TTL on all keys
- Reduce TTL values
- Set a maxmemory policy: `redis-cli CONFIG SET maxmemory-policy allkeys-lru`
- Clear all keys (dev only; this Lua one-liner deletes every key in the current DB, like FLUSHDB, and fails on very large keyspaces):
  `redis-cli EVAL "return redis.call('del',unpack(redis.call('keys','*')))" 0`
### Issue: Slow Cache Operations

**Diagnostics:**
```bash
redis-cli SLOWLOG GET 10
redis-cli LATENCY LATEST
```
**Solutions:**

- Check network latency
- Verify Redis isn't CPU-bound
- Move Redis closer to the app (same machine/container)
- Review persistence settings (AOF rewrites and RDB saves can cause latency spikes)
## Baseline Metrics to Track
Keep these metrics for comparison:
```bash
# Run this periodically to build a history
DATE=$(date +%Y-%m-%d\ %H:%M:%S)
echo "=== $DATE ===" >> redis_metrics.log
redis-cli INFO stats >> redis_metrics.log
redis-cli INFO memory >> redis_metrics.log
redis-cli DBSIZE >> redis_metrics.log
echo "" >> redis_metrics.log

# Compare snapshots over time to identify trends
```
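To turn that log into a hit-rate trend, one awk pass will do; a sketch (awk's numeric conversion tolerates the `\r` that `redis-cli` leaves on each INFO line):

```bash
awk -F: '/^=== /            {ts = $0}
         /^keyspace_hits/   {h = $2}
         /^keyspace_misses/ {m = $2; if (h + m > 0)
                               printf "%s hit rate %.1f%%\n", ts, h * 100 / (h + m)}' \
    redis_metrics.log
```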
## Performance Report Example

```text
Performance Baseline Report
===========================
Date: 2025-02-03
Environment: Docker (Redis 7-alpine)

Metrics:
- Cache Hit Rate: 94.2%
- Avg Response Time: 12ms (with cache)
- DB Response Time: 150ms (without cache)
- Improvement: 92% faster
- Memory Usage: 45MB
- Concurrent Users Tested: 100
- Requests Per Second: 250

Cache Statistics:
- Total Commands: 5,432
- Cache Hits: 5,120
- Cache Misses: 312
- Session Keys: 87
- Admin Setup Hits: 1,543

System Health:
- Redis Memory Fragmentation: 1.1 (Good)
- Slowlog Commands: 0
- Connection Failures: 0
```
## Best Practices

- **Monitor Regularly**
  - Check metrics weekly
  - Alert on hit-rate drops
  - Track memory trends

- **Optimize TTLs**
  - Session cache: 7 days
  - User data: 1 hour
  - Config: 5 minutes
  - API responses: based on freshness needs

- **Cache Invalidation**
  - Clear on data updates
  - Use patterns: `invalidateCachePattern()` (a CLI equivalent is sketched after this list)
  - Verify in Redis: `KEYS pattern:*`

- **Production Monitoring**
  - Use CloudWatch, DataDog, or New Relic
  - Set up alerts for high memory usage
  - Monitor connection count
  - Track command latency

- **Scalability**
  - A single Redis instance for <1000 concurrent users
  - Redis Cluster for >1000 users
  - Redis Sentinel for high availability
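The `invalidateCachePattern()` helper is application code; assuming it deletes every key matching a glob, the CLI equivalent is a SCAN piped into DEL. A sketch:

```bash
# Delete all webinar keys without blocking the server;
# xargs -r (GNU) skips DEL entirely when nothing matches
redis-cli --scan --pattern "webinar:*" | xargs -r redis-cli DEL
```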