Migrating from Redis
This guide covers migrating from Redis to Swytch, including step-by-step instructions, rollback strategies, and patterns for zero-downtime transitions.
Swytch’s Redis mode is command-compatible with Redis for the commands it supports, speaks RESP2 and RESP3, and accepts the same ACL file format Redis uses. Migration is largely about moving traffic safely, not rewriting application code.
| Phase | Duration | Risk level | Rollback |
|---|---|---|---|
| Preparation | Days-weeks | None | N/A |
| Shadow testing | Days | Low | Instant |
| Dual-write | Hours-days | Low | Instant |
| Cutover | Minutes | Medium | Minutes |
| Validation | Hours-days | Low | Minutes |
| Cleanup | Days | None | N/A |
Review your application’s Redis usage against the supported command set. A few Redis features map to architectural choices in Swytch rather than to commands:
| Feature | Swytch | Notes |
|---|---|---|
| Client-side cluster mode | Not needed | Swytch uses its own leaderless clustering |
| Primary-replica replication | Not used | Swytch replicates synchronously to subscribers; cluster membership via --join |
| Lua cjson/cmsgpack | Not supported | Use application-side serialization |
| ACLs | Supported | Use --aclfile (Redis-compatible ACL format) |
| TLS | Supported | Native TLS and mTLS via --tls-* flags |
The first two rows aren’t missing features — they’re places where Redis’s single-region model and Swytch’s leaderless model differ at the architecture level. Your application’s Redis client connects to Swytch the same way it connects to Redis; what happens behind the wire protocol is what’s different.
Deploy Swytch as a sidecar next to your application, using a different port from your existing Redis:
```bash
# Sidecar deployment on the same host as the app
swytch redis --port 6380 --maxmemory 4gb --metrics-port 9090
```
The application will connect to both Redis and Swytch over localhost during the shadow and dual-write phases.
Modify your application to write to both Redis and Swytch:

```python
import logging

import redis

logger = logging.getLogger(__name__)


class DualWriteClient:
    def __init__(self, primary_host, shadow_host):
        self.primary = redis.Redis(host=primary_host, port=6379)
        self.shadow = redis.Redis(host=shadow_host, port=6380)
        self.shadow_enabled = True

    def set(self, key, value, **kwargs):
        # Always write to primary
        result = self.primary.set(key, value, **kwargs)
        # Shadow write (fire-and-forget, don't block on errors)
        if self.shadow_enabled:
            try:
                self.shadow.set(key, value, **kwargs)
            except Exception as e:
                # Log but don't fail the request
                logger.warning(f"Shadow write failed: {e}")
        return result

    def get(self, key):
        # Read from primary only during shadow phase
        return self.primary.get(key)
```
After the cache warms up, add shadow reads to compare results:

```python
    def get_with_validation(self, key):
        primary_result = self.primary.get(key)
        if self.shadow_enabled:
            try:
                shadow_result = self.shadow.get(key)
                if primary_result != shadow_result:
                    logger.error(
                        f"Mismatch for {key}: primary={primary_result}, shadow={shadow_result}"
                    )
                    metrics.increment("shadow_mismatch")  # metrics: your app's metrics client
                else:
                    metrics.increment("shadow_match")
            except Exception as e:
                logger.warning(f"Shadow read failed: {e}")
        return primary_result
```
Run this for at least one full cache TTL cycle to validate consistency.
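When deciding whether the shadow phase has passed, a simple mismatch-rate threshold over the counters works well. A minimal sketch; the function name and the 0.1% budget are illustrative choices, not part of Swytch:

```python
# Hypothetical helper: decide whether shadow validation passed.
# The 0.1% mismatch budget is an illustrative default -- tune it to your workload.
def validation_passed(matches: int, mismatches: int, max_mismatch_rate: float = 0.001) -> bool:
    """Return True if the observed mismatch rate is within the budget."""
    total = matches + mismatches
    if total == 0:
        return False  # no traffic observed yet; keep validating
    return mismatches / total <= max_mismatch_rate


print(validation_passed(matches=99_990, mismatches=10))    # → True  (0.01% mismatch rate)
print(validation_passed(matches=9_000, mismatches=1_000))  # → False (10% mismatch rate)
```

Expired-but-not-yet-evicted keys can produce benign mismatches, so a small nonzero budget is usually more practical than demanding zero.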
Once validation passes:
- Update connection strings to point to Swytch.
- Keep Redis running as fallback.
- Monitor closely for the first hour.
```python
# Feature flag cutover
if feature_flags.get("use_swytch"):
    cache = redis.Redis(host="swytch-host", port=6379)
else:
    cache = redis.Redis(host="redis-host", port=6379)
```
If issues arise:
- Immediate. Flip a feature flag back to Redis.
- Data sync. If Swytch received writes that Redis doesn’t have, replay them from application logs or accept the loss for cache data.
- Post-mortem. Analyze what went wrong before retrying.
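If you need the replay path, the mechanics are straightforward as long as the application logs its writes. A hedged sketch, assuming a line-oriented JSON write log of your own design (neither Redis nor Swytch produces one for you); `FakeCache` is an in-memory stand-in for a real Redis client:

```python
import json


def replay_writes(log_lines, client):
    """Replay SET operations from an application write log into a cache client.

    Each log line is assumed to be a JSON object like
    {"op": "set", "key": "...", "value": "..."} -- a format the application
    defines itself.
    """
    replayed = 0
    for line in log_lines:
        entry = json.loads(line)
        if entry.get("op") == "set":
            client.set(entry["key"], entry["value"])
            replayed += 1
    return replayed


# Demo with an in-memory stand-in for a Redis client:
class FakeCache(dict):
    def set(self, key, value):
        self[key] = value


log = [
    '{"op": "set", "key": "user:1", "value": "alice"}',
    '{"op": "set", "key": "user:2", "value": "bob"}',
]
cache = FakeCache()
print(replay_writes(log, cache))  # → 2
print(cache["user:1"])            # → alice
```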
After running successfully for 1–2 weeks:
- Remove dual-write code.
- Decommission the old Redis instance.
- Update monitoring and runbooks.
RIOT can replicate data between Redis instances. Since Swytch doesn’t respond to Redis replication commands, use RIOT’s scan mode:
```bash
riot-redis -u redis://old-redis:6379 replicate -u redis://swytch:6380 --mode scan
```
For cache workloads, the simplest path is to let the application repopulate Swytch naturally:
- Deploy Swytch with an empty database.
- Point the application at Swytch.
- Cache misses hit the source of truth and populate Swytch.
- After one TTL cycle, the cache is fully warm.
This is often the safest approach for cache workloads, since there’s no data-transfer step that could fail mid-migration.
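The warm-up loop above is the standard cache-aside pattern. A minimal sketch using redis-py’s `get`/`set` signature (`ex` sets the TTL in seconds); `FakeCache` is an in-memory stand-in so the example runs without a server:

```python
def get_or_load(cache, key, load_from_source, ttl_seconds=300):
    """Cache-aside read: return the cached value, or load and cache it on a miss."""
    value = cache.get(key)
    if value is not None:
        return value
    value = load_from_source(key)          # miss: hit the source of truth
    cache.set(key, value, ex=ttl_seconds)  # repopulate the cache with a TTL
    return value


# In-memory stand-in for a Redis client (ignores the TTL for brevity):
class FakeCache:
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value, ex=None):
        self._data[key] = value


calls = []

def load_user(key):
    calls.append(key)
    return f"user-record-for-{key}"


cache = FakeCache()
print(get_or_load(cache, "user:1", load_user))  # miss: loads from source
print(get_or_load(cache, "user:1", load_user))  # hit: served from cache
print(len(calls))                               # → 1 (source hit only once)
```

With a real client, `cache` would be `redis.Redis(host="swytch-host", port=6379)` and the rest of the code is unchanged.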
Before declaring migration complete:
- All application endpoints tested
- Hit rate matches or exceeds the previous cache
- Latency p50/p99 within acceptable range
- No error rate increase
- Memory usage stable
- Metrics and alerts configured
- Runbooks updated
- On-call team briefed
- Rollback tested and documented
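For the latency checklist item, Python’s `statistics` module is enough to compute p50/p99 from raw samples. A small sketch; the sample lists are made up for illustration:

```python
import statistics


def latency_summary(samples_ms):
    """Compute p50/p99 from a list of request latencies in milliseconds."""
    cuts = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": cuts[49], "p99": cuts[98]}


# Compare before/after cutover using your own collected samples:
before = [1.2, 1.3, 1.1, 1.4, 1.2, 9.8, 1.3, 1.2, 1.1, 1.5]
after = [1.1, 1.2, 1.0, 1.3, 1.1, 8.9, 1.2, 1.1, 1.0, 1.4]
print(latency_summary(before))
print(latency_summary(after))
```

In practice you would pull these samples from your metrics system rather than hand-collected lists.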
| Issue | Cause | Solution |
|---|---|---|
| Higher miss rate after cutover | Cache not warmed | Pre-warm, or accept temporary misses |
| Latency spike | Cold cache | Increase --maxmemory or wait for warm-up |
| Connection errors | Client timeout too aggressive | Increase the client timeout |
| Memory growing unexpectedly | Per-key overhead differs from Redis | Adjust --maxmemory based on observed usage |
| Command not supported | Using an unsupported Redis feature | Check compatibility, modify app |