Deployment
This guide covers deploying Swytch in production, from a single node to a multi-node cluster.
The simplest deployment runs one Swytch instance as an in-memory cache:
```shell
swytch redis --maxmemory=4gb --bind=0.0.0.0 --metrics-port=9090
```
For applications on the same host, a unix socket avoids TCP overhead entirely:
```shell
swytch redis --maxmemory=4gb --unixsocket=/var/run/swytch.sock --bind=""
```
| Flag | Recommendation |
|---|---|
| `--maxmemory` | Set to your target memory budget (e.g., `4gb`). The default of 64MB is for testing only |
| `--bind` | `0.0.0.0` for network access; `""` with `--unixsocket` for local-only |
| `--metrics-port` | Enable for monitoring (e.g., `9090`) |
| `--log-format` | `json` for structured log ingestion |
| `--requirepass` or `--aclfile` | Always set authentication on network-exposed deployments |
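If you opt for `--aclfile`, the file is assumed here to follow the Redis ACL file format (one `user` rule per line); the file name and users below are hypothetical:

```
# users.acl — hypothetical example, Redis ACL file syntax assumed
user default off
user app on >s3cretpassword ~app:* +@read +@write -@admin
```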
Swytch supports native TLS for client connections:
```shell
# TLS (server certificate only)
swytch redis --tls-cert-file=server.crt --tls-key-file=server.key

# mTLS (require client certificates)
swytch redis \
  --tls-cert-file=server.crt \
  --tls-key-file=server.key \
  --tls-ca-cert-file=ca.crt
```
Both --tls-cert-file and --tls-key-file must be provided together. Adding --tls-ca-cert-file enables mutual TLS
where clients must present a certificate signed by the given CA.
The minimum TLS version defaults to 1.2. Override with --tls-min-version=1.3 if required.
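If you just need throwaway certificates to try TLS locally, a self-signed pair can be generated with openssl (file names match the example above; production deployments should use certificates from a real CA):

```shell
# Create a self-signed server certificate and key for local TLS testing only.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
  -keyout server.key -out server.crt -days 30 -nodes \
  -subj "/CN=localhost"
```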
Connect with redis-cli over TLS:
```shell
redis-cli --tls --cert client.crt --key client.key --cacert ca.crt
```
Swytch nodes form a leaderless cluster over QUIC+mTLS. Every node accepts reads and writes. No leader election, no quorum configuration.
Generate a shared passphrase. This is the sole trust root for the cluster:
```shell
swytch gen-passphrase
# outputs: YUa6WXJDsloKgx4BQWV2edOiH3U2Ym4O5VLR1jrVvO4
```
All nodes in the cluster must use the same passphrase. Each node derives a shared CA from it and generates ephemeral leaf certificates on startup. No manual certificate distribution is needed.
Point every node at a DNS name that resolves to all peers:
```shell
# Node 1
swytch redis --bind=0.0.0.0 --cluster-passphrase="YUa6..." --join=cache.local

# Node 2
swytch redis --bind=0.0.0.0 --cluster-passphrase="YUa6..." --join=cache.local
```
Nodes discover each other via DNS, connect over QUIC, and replicate automatically.
The --join flag accepts any DNS name. SRV records are tried first, falling back to A/AAAA records combined with
--cluster-port.
| Environment | What DNS returns | How it works |
|---|---|---|
| Consul | SRV with host:port per service | SRV |
| K8s headless service | A record per pod IP | A + cluster port |
| K8s headless + named port | SRV with pod IP + port | SRV |
| Docker Compose | A record per container | A + cluster port |
| Manual /etc/hosts | A record | A + cluster port |
DNS is only used for bootstrap. Once joined, nodes discover peers through membership effects. DNS is re-resolved only if a node loses all peers and needs to re-bootstrap.
By default the cluster port is the Redis port + 1000 (e.g., port 6379 uses cluster port 7379). Override with
--cluster-port.
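The default can be sanity-checked with shell arithmetic (using the standard Redis port as an example):

```shell
# Default cluster port = Redis port + 1000
redis_port=6379
cluster_port=$((redis_port + 1000))
echo "cluster port: $cluster_port"
```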
If your node is behind NAT or a load balancer, use --cluster-advertise=<public-addr:port> to set the address
other nodes use to reach it. By default this is auto-detected.
Membership is not a special subsystem — it uses the same effects engine as all other data. Each node periodically writes a heartbeat effect on an internal key. Crashed nodes expire after 30 seconds. Graceful shutdowns remove the entry immediately.
Inspect membership from any Redis client:
```shell
redis-cli HGETALL __swytch:members
```
A minimal Docker Compose deployment:

```yaml
services:
  cache:
    image: swytch
    command: >
      swytch redis
      --bind 0.0.0.0
      --maxmemory 1gb
      --cluster-passphrase ${CLUSTER_PASSPHRASE}
      --join cache
      --metrics-port 9090
    deploy:
      replicas: 3
    ports:
      - "6379:6379"
      - "9090:9090"
```
Docker Compose’s built-in DNS resolves the service name cache to all container IPs.
Use a headless Service so each pod gets a DNS A record:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: swytch
spec:
  clusterIP: None
  selector:
    app: swytch
  ports:
    - name: redis
      port: 6379
    - name: cluster
      port: 7379
    - name: metrics
      port: 9090
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: swytch
spec:
  serviceName: swytch
  replicas: 3
  selector:
    matchLabels:
      app: swytch
  template:
    metadata:
      labels:
        app: swytch
    spec:
      containers:
        - name: swytch
          image: swytch:latest
          args:
            - redis
            - --bind=0.0.0.0
            - --maxmemory=4gb
            - --cluster-passphrase=$(CLUSTER_PASSPHRASE)
            - --join=swytch
            - --metrics-port=9090
          env:
            - name: CLUSTER_PASSPHRASE
              valueFrom:
                secretKeyRef:
                  name: swytch-secret
                  key: cluster-passphrase
          ports:
            - containerPort: 6379
              name: redis
            - containerPort: 7379
              name: cluster
              protocol: UDP
            - containerPort: 7379
              name: cluster-quic
              protocol: TCP
            - containerPort: 9090
              name: metrics
          livenessProbe:
            httpGet:
              path: /health
              port: metrics
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: metrics
            initialDelaySeconds: 2
            periodSeconds: 5
          resources:
            requests:
              memory: "4.5Gi"
              cpu: "2"
            limits:
              memory: "5Gi"
```
Store the cluster passphrase in a Kubernetes secret:
```shell
kubectl create secret generic swytch-secret \
  --from-literal=cluster-passphrase="$(swytch gen-passphrase)"
```
When --metrics-port is set, Swytch exposes:
| Endpoint | Description |
|---|---|
| `/health` | Returns HTTP 200 with body `ok`. Use for liveness and readiness probes |
| `/metrics` | Prometheus metrics scrape endpoint |
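A Prometheus scrape job for the metrics endpoint might look like the following sketch; the job name and target host are placeholders:

```yaml
# prometheus.yml fragment — "swytch" job name and "cache-host" target are examples
scrape_configs:
  - job_name: swytch
    static_configs:
      - targets: ["cache-host:9090"]   # host running Swytch with --metrics-port=9090
```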
Swytch handles POSIX signals for graceful shutdown:
| Signal | Behavior |
|---|---|
| SIGTERM | Graceful shutdown: stop accepting connections, complete in-flight commands, shut down cluster, exit |
| SIGINT | Same as SIGTERM (Ctrl+C) |
For container orchestration, send SIGTERM to stop the server gracefully. There is no drain period configuration;
shutdown completes as fast as pending work allows.
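On a plain Linux host this maps naturally onto a systemd unit, since systemd sends SIGTERM on stop by default. The sketch below is illustrative; the binary path and flags are examples:

```ini
# /etc/systemd/system/swytch.service — minimal sketch
[Unit]
Description=Swytch cache
After=network-online.target

[Service]
ExecStart=/usr/local/bin/swytch redis --maxmemory=4gb --bind=0.0.0.0
# systemd's default stop signal (SIGTERM) triggers graceful shutdown
TimeoutStopSec=30
Restart=on-failure

[Install]
WantedBy=multi-user.target
```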
Set the container memory limit higher than --maxmemory to account for connection buffers, Lua scripting overhead,
and Go runtime:
```
container_memory = maxmemory + max(512MB, maxmemory * 0.2)
```
For example, --maxmemory=4gb needs approximately 4.5-5 GB container memory.
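The rule of thumb can be checked with shell arithmetic, working in MB (4gb = 4096 MB):

```shell
# Container memory = maxmemory + max(512 MB, 20% of maxmemory)
maxmemory_mb=4096                    # --maxmemory=4gb
overhead_mb=$((maxmemory_mb / 5))    # 20% overhead
if [ "$overhead_mb" -lt 512 ]; then overhead_mb=512; fi
echo "container memory: $((maxmemory_mb + overhead_mb)) MB"   # 4915 MB, ~4.8 GB
```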
Swytch uses all available CPUs by default (--threads=0). Override with --threads=N to limit.
See the Sizing and Capacity Planning guide for detailed memory estimation and scaling guidance.