Migration Guide
This guide covers migrating from Redis or Memcached to Swytch, including step-by-step instructions, rollback strategies, and patterns for zero-downtime transitions.
Swytch is designed as a drop-in replacement, but migrations require careful planning:
| Phase | Duration | Risk Level | Rollback |
|---|---|---|---|
| Preparation | Days-weeks | None | N/A |
| Shadow testing | Days | Low | Instant |
| Dual-write | Hours-days | Low | Instant |
| Cutover | Minutes | Medium | Minutes |
| Validation | Hours-days | Low | Minutes |
| Cleanup | Days | None | N/A |
Review your application's Redis usage against Swytch's supported command set. Common blockers:
| Feature | Swytch Support | Workaround |
|---|---|---|
| Cluster mode | Not supported | Vertical scaling only |
| Replication | Not supported | Use external HA (load balancer) |
| Lua cjson/cmsgpack | Not supported | Use application-side serialization |
| ACLs | Not supported | Use `--requirepass` for simple auth |
| TLS | Not supported | Use TLS-terminating proxy |
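One low-effort way to audit command usage is to capture a short `redis-cli MONITOR` sample from production and count the commands your application actually issues. A minimal sketch; the `SUPPORTED` set below is illustrative only, not Swytch's actual command list:

```python
from collections import Counter

# Illustrative subset; check Swytch's documentation for the real list
SUPPORTED = {"GET", "SET", "DEL", "EXPIRE", "TTL", "INCR", "MGET", "MSET"}

def audit_monitor_log(lines):
    """Count commands in MONITOR output and flag unsupported ones.

    MONITOR lines look like:
    1700000000.123456 [0 10.0.0.5:54321] "SET" "user:1" "alice"
    """
    counts = Counter()
    for line in lines:
        parts = line.split('"')
        if len(parts) >= 2:
            counts[parts[1].upper()] += 1
    unsupported = {cmd: n for cmd, n in counts.items() if cmd not in SUPPORTED}
    return counts, unsupported

sample = [
    '1700000000.1 [0 10.0.0.5:54321] "SET" "user:1" "alice"',
    '1700000000.2 [0 10.0.0.5:54321] "GET" "user:1"',
    '1700000000.3 [0 10.0.0.5:54321] "EVAL" "return 1" "0"',
]
counts, unsupported = audit_monitor_log(sample)
```

Run MONITOR only briefly; it has measurable overhead on a busy Redis server.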
Deploy Swytch alongside your existing Redis:
```bash
# Start Swytch on a different port
swytch redis --port 6380 --maxmemory 4gb --metrics-port 9090
```
Modify your application to write to both Redis and Swytch:
```python
import logging

import redis

logger = logging.getLogger(__name__)


class DualWriteClient:
    def __init__(self, primary_host, shadow_host):
        self.primary = redis.Redis(host=primary_host, port=6379)
        self.shadow = redis.Redis(host=shadow_host, port=6380)
        self.shadow_enabled = True

    def set(self, key, value, **kwargs):
        # Always write to primary
        result = self.primary.set(key, value, **kwargs)
        # Shadow write (fire-and-forget; don't block on errors)
        if self.shadow_enabled:
            try:
                self.shadow.set(key, value, **kwargs)
            except Exception as e:
                # Log but don't fail the request
                logger.warning(f"Shadow write failed: {e}")
        return result

    def get(self, key):
        # Read from primary only during the shadow phase
        return self.primary.get(key)
```
After the cache warms up, add shadow reads to compare results:
```python
    def get_with_validation(self, key):
        primary_result = self.primary.get(key)
        if self.shadow_enabled:
            try:
                shadow_result = self.shadow.get(key)
                if primary_result != shadow_result:
                    logger.error(
                        f"Mismatch for {key}: "
                        f"primary={primary_result}, shadow={shadow_result}"
                    )
                    metrics.increment("shadow_mismatch")
                else:
                    metrics.increment("shadow_match")
            except Exception as e:
                logger.warning(f"Shadow read failed: {e}")
        return primary_result
```
Run this for at least one full cache TTL cycle to validate consistency.
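When deciding whether the shadow phase has passed, a simple check over the match/mismatch counters is enough. A sketch; the 0.1% threshold here is an arbitrary example, not a Swytch recommendation:

```python
def shadow_validation_passed(matches, mismatches, max_mismatch_rate=0.001):
    """Return True when the observed mismatch rate is at or below the threshold."""
    total = matches + mismatches
    if total == 0:
        # No data yet; don't declare victory on an empty sample
        return False
    return mismatches / total <= max_mismatch_rate
```

Remember that some mismatches are benign: a key written to Redis before dual-write started will exist only in the primary until it expires or is rewritten.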
Once validation passes:
- Update connection strings to point to Swytch
- Keep Redis running as fallback
- Monitor closely for the first hour
```python
# Feature flag cutover
if feature_flags.get("use_swytch"):
    cache = redis.Redis(host="swytch-host", port=6379)
else:
    cache = redis.Redis(host="redis-host", port=6379)
```
If issues arise:
- Immediate: Flip a feature flag back to Redis
- Data sync: If Swytch had writes Redis doesn’t have, replay from application logs or accept data loss for cache
- Post-mortem: Analyse what went wrong before retrying
After running successfully for 1–2 weeks:
- Remove dual-write code
- Decommission old Redis instance
- Update monitoring and runbooks
Swytch's memcached implementation differs from stock memcached in a few ways:
| Feature | Memcached | Swytch |
|---|---|---|
| Binary protocol | Supported | Not supported (ASCII only) |
| Meta commands | Supported | Not supported |
| Slab allocator | Yes | No (Go allocator) |
| Eviction | LRU | Frequency-based |
If you use binary protocol, you’ll need to switch clients to ASCII mode.
Most memcached clients support ASCII protocol:
```python
# Python - pymemcache
from pymemcache.client import base

# The ASCII protocol is pymemcache's default
client = base.Client(('swytch-host', 11211))
```

```javascript
// Node.js - memcached
const Memcached = require('memcached');

// ASCII protocol is the default
const memcached = new Memcached('swytch-host:11211');
```
Apply the same dual-write pattern as in the Redis migration:
```python
class DualMemcachedClient:
    def __init__(self, primary_host, shadow_host):
        self.primary = base.Client((primary_host, 11211))
        self.shadow = base.Client((shadow_host, 11211))

    def set(self, key, value, expire=0):
        result = self.primary.set(key, value, expire=expire)
        try:
            self.shadow.set(key, value, expire=expire)
        except Exception as e:
            # Log and continue; never fail the request on a shadow error
            logger.warning(f"Shadow write failed: {e}")
        return result
```
Follow the same cutover process as for Redis: use feature flags for instant rollback.
Use a load balancer to gradually shift traffic:
```
Week 1: 100% Redis, 0% Swytch (shadow writes only)
Week 2: 90% Redis, 10% Swytch (canary)
Week 3: 50% Redis, 50% Swytch
Week 4: 0% Redis, 100% Swytch
```
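If you route in the application rather than at a load balancer, a deterministic per-key split keeps every key pinned to one backend for the duration of a canary step. A sketch, assuming `swytch_percent` comes from your config or feature-flag system:

```python
import zlib

def pick_backend(key, swytch_percent):
    """Deterministically route a key to a backend.

    The same key always maps to the same bucket, so a given key's reads
    and writes stay on one cache instead of flapping between the two.
    """
    bucket = zlib.crc32(key.encode()) % 100
    return "swytch" if bucket < swytch_percent else "redis"
```

A random per-request split would also shift the right percentage of traffic, but it sends the same key to both backends, which inflates miss rates on both sides during the canary.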
To keep reads consistent during migration, check Swytch first and fall back to the old cache, backfilling on the way:
```python
def get(self, key):
    # Check Swytch first (it has the latest writes)
    result = self.swytch.get(key)
    if result is not None:
        return result
    # Fall back to the old cache
    result = self.redis.get(key)
    if result is not None:
        # Backfill to Swytch. Note: this sets no TTL; if expiry matters,
        # copy the remaining TTL from Redis (e.g. via PTTL) when backfilling.
        self.swytch.set(key, result)
    return result
```
Don't pre-warm; let traffic populate the cache naturally:
```python
def get(self, key):
    result = self.swytch.get(key)
    if result is None:
        # Cache miss: fetch from the source of truth and populate Swytch only
        result = self.database.get(key)
        if result is not None:
            self.swytch.set(key, result, ex=3600)
    return result
```
This is simplest but causes a temporary cache miss spike during transition.
For persistent mode, you can dump and restore:
```bash
# Export from Redis
redis-cli --rdb dump.rdb

# Unfortunately, RDB import is not supported.
# Use RIOT or write a migration script instead.
```
RIOT can replicate data between Redis instances:
```bash
# Note: Swytch doesn't support replication commands,
# so run RIOT in scan mode instead
riot-redis -u redis://old-redis:6379 replicate -u redis://swytch:6379 --mode scan
```
Let your application repopulate the cache naturally:
- Deploy Swytch with empty database
- Point application to Swytch
- Cache misses hit your database and populate Swytch
- After one TTL cycle, cache is fully warm
This is often the safest approach for cache workloads.
Before declaring migration complete:
- All application endpoints tested
- Hit rate matches or exceeds previous cache
- Latency p50/p99 within acceptable range
- No error rate increase
- Memory usage stable
- Metrics and alerts configured
- Runbooks updated
- On-call team briefed
- Rollback tested and documented
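Hit rate for the checklist above can be computed from each server's stats counters; with Redis-style `INFO` output, the relevant fields are `keyspace_hits` and `keyspace_misses`. A sketch (the 2% margin is an example threshold, not a Swytch recommendation):

```python
def hit_rate(keyspace_hits, keyspace_misses):
    """Cache hit rate as a fraction in [0, 1] from INFO-style counters."""
    total = keyspace_hits + keyspace_misses
    return keyspace_hits / total if total else 0.0

def hit_rate_ok(old_hits, old_misses, new_hits, new_misses, margin=0.02):
    """True when the new cache's hit rate is within `margin` of the old one."""
    return hit_rate(new_hits, new_misses) >= hit_rate(old_hits, old_misses) - margin
```

Compare counters gathered over the same traffic window; lifetime totals from a server that has been up for months will swamp the post-cutover signal.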
| Issue | Cause | Solution |
|---|---|---|
| Higher miss rate after cutover | Cache not warmed | Pre-warm, or accept temporary misses |
| Latency spike | Cold L1 cache (persistent mode) | Increase maxmemory or wait for warm-up |
| Connection errors | Client timeout too aggressive | Increase the client timeout |
| Memory growing unexpectedly | Per-key overhead differs from Redis | Adjust maxmemory based on observed usage |
| Command not supported | Using an unsupported Redis feature | Check compatibility; modify the application |