Swytch Documentation

Deployment

This guide covers deploying Swytch in production, from a single node to a multi-node cluster.

Single Node

The simplest deployment runs one Swytch instance as an in-memory cache:

swytch redis --maxmemory=4gb --bind=0.0.0.0 --metrics-port=9090

For applications on the same host, a Unix domain socket avoids TCP overhead entirely:

swytch redis --maxmemory=4gb --unixsocket=/var/run/swytch.sock --bind=""

| Flag | Recommendation |
|------|----------------|
| --maxmemory | Set to your target memory budget (e.g., 4gb). The default of 64MB is for testing only |
| --bind | 0.0.0.0 for network access, "" with --unixsocket for local-only |
| --metrics-port | Enable for monitoring (e.g., 9090) |
| --log-format | json for structured log ingestion |
| --requirepass or --aclfile | Always set authentication in network-exposed deployments |

TLS

Swytch supports native TLS for client connections:

# TLS (server certificate only)
swytch redis --tls-cert-file=server.crt --tls-key-file=server.key

# mTLS (require client certificates)
swytch redis \
  --tls-cert-file=server.crt \
  --tls-key-file=server.key \
  --tls-ca-cert-file=ca.crt

Both --tls-cert-file and --tls-key-file must be provided together. Adding --tls-ca-cert-file enables mutual TLS where clients must present a certificate signed by the given CA.

The minimum TLS version defaults to 1.2. Override with --tls-min-version=1.3 if required.
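
Certificate provisioning is outside Swytch itself. For a quick test setup, a self-signed pair matching the file names used above can be generated with openssl (the CN is a placeholder; use a real CA-signed certificate in production):

```shell
# Generate a self-signed server certificate and key for testing only.
# File names match the --tls-cert-file / --tls-key-file flags above.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt \
  -days 365 -subj "/CN=localhost"
```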

Connect with redis-cli over TLS:

redis-cli --tls --cert client.crt --key client.key --cacert ca.crt

Clustering

Swytch nodes form a leaderless cluster over QUIC+mTLS. Every node accepts reads and writes. No leader election, no quorum configuration.

Prerequisites

Generate a shared passphrase. This is the sole trust root for the cluster:

swytch gen-passphrase
# outputs: YUa6WXJDsloKgx4BQWV2edOiH3U2Ym4O5VLR1jrVvO4

All nodes in the cluster must use the same passphrase. Each node derives a shared CA from it and generates ephemeral leaf certificates on startup. No manual certificate distribution is needed.

Starting a Cluster

Point every node at a DNS name that resolves to all peers:

# Node 1
swytch redis --bind=0.0.0.0 --cluster-passphrase="YUa6..." --join=cache.local

# Node 2
swytch redis --bind=0.0.0.0 --cluster-passphrase="YUa6..." --join=cache.local

Nodes discover each other via DNS, connect over QUIC, and replicate automatically.

DNS Discovery

The --join flag accepts any DNS name. SRV records are tried first, falling back to A/AAAA records combined with --cluster-port.

| Environment | What DNS returns | How it works |
|---|---|---|
| Consul | SRV with host:port per service | SRV |
| K8s headless service | A record per pod IP | A + cluster port |
| K8s headless + named port | SRV with pod IP + port | SRV |
| Docker Compose | A record per container | A + cluster port |
| Manual /etc/hosts | A record | A + cluster port |

DNS is only used for bootstrap. Once joined, nodes discover peers through membership effects. DNS is re-resolved only if a node loses all peers and needs to re-bootstrap.
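
The A/AAAA half of this fallback can be sketched with the standard library (the function name is illustrative; SRV lookups, which Swytch tries first, need a resolver library and are omitted here):

```python
import socket

def bootstrap_peers(join_name: str, cluster_port: int) -> set[tuple[str, int]]:
    """Resolve a --join name to candidate peer addresses.

    Mirrors the A/AAAA fallback described above: every address the
    name resolves to is combined with the cluster port.
    """
    infos = socket.getaddrinfo(join_name, cluster_port, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr starts with (host, port) for both IPv4 and IPv6.
    return {(addr[0], addr[1]) for _, _, _, _, addr in infos}

print(bootstrap_peers("localhost", 7379))
```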

Cluster Port

By default the cluster port is the Redis port + 1000 (e.g., port 6379 uses cluster port 7379). Override with --cluster-port.

If your node is behind NAT or a load balancer, use --cluster-advertise=<public-addr:port> to set the address other nodes use to reach it. By default this is auto-detected.
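
The default derivation is simply the Redis port plus 1000 (function name illustrative):

```python
def default_cluster_port(redis_port: int) -> int:
    """Default cluster port is the Redis port + 1000 (override with --cluster-port)."""
    return redis_port + 1000

print(default_cluster_port(6379))  # 7379
```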

Membership

Membership is not a special subsystem — it uses the same effects engine as all other data. Each node periodically writes a heartbeat effect on an internal key. Crashed nodes expire after 30 seconds. Graceful shutdowns remove the entry immediately.

Inspect membership from any Redis client:

redis-cli HGETALL __swytch:members
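
The 30-second expiry rule can be sketched as follows. The actual value format of the membership hash is not documented here, so assume for illustration that each field maps a node ID to its last heartbeat timestamp:

```python
import time

HEARTBEAT_TTL = 30.0  # crashed nodes expire after 30 seconds (see above)

def live_members(members: dict[str, float], now: float) -> list[str]:
    """Filter a membership map down to nodes with a recent heartbeat.

    `members` maps node ID -> last heartbeat time; hypothetical format,
    for illustration only.
    """
    return [node for node, ts in members.items() if now - ts < HEARTBEAT_TTL]

now = time.time()
print(live_members({"node-a": now - 5, "node-b": now - 45}, now))  # only node-a survives
```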

Docker Compose

services:
  cache:
    image: swytch
    command: >
      swytch redis
        --bind 0.0.0.0
        --maxmemory 1gb
        --cluster-passphrase ${CLUSTER_PASSPHRASE}
        --join cache
        --metrics-port 9090
    deploy:
      replicas: 3
    ports:
      - "6379:6379"
      - "9090:9090"

Docker Compose’s built-in DNS resolves the service name cache to all container IPs.

Kubernetes

Use a headless Service so each pod gets a DNS A record:

apiVersion: v1
kind: Service
metadata:
  name: swytch
spec:
  clusterIP: None
  selector:
    app: swytch
  ports:
    - name: redis
      port: 6379
    - name: cluster
      port: 7379
    - name: metrics
      port: 9090
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: swytch
spec:
  serviceName: swytch
  replicas: 3
  selector:
    matchLabels:
      app: swytch
  template:
    metadata:
      labels:
        app: swytch
    spec:
      containers:
        - name: swytch
          image: swytch:latest
          args:
            - redis
            - --bind=0.0.0.0
            - --maxmemory=4gb
            - --cluster-passphrase=$(CLUSTER_PASSPHRASE)
            - --join=swytch
            - --metrics-port=9090
          env:
            - name: CLUSTER_PASSPHRASE
              valueFrom:
                secretKeyRef:
                  name: swytch-secret
                  key: cluster-passphrase
          ports:
            - containerPort: 6379
              name: redis
            - containerPort: 7379
              name: cluster  # QUIC cluster traffic runs over UDP
              protocol: UDP
            - containerPort: 7379
              name: cluster-tcp
              protocol: TCP
            - containerPort: 9090
              name: metrics
          livenessProbe:
            httpGet:
              path: /health
              port: metrics
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: metrics
            initialDelaySeconds: 2
            periodSeconds: 5
          resources:
            requests:
              memory: "4.5Gi"
              cpu: "2"
            limits:
              memory: "5Gi"

Secret Management

Store the cluster passphrase in a Kubernetes secret:

kubectl create secret generic swytch-secret \
  --from-literal=cluster-passphrase="$(swytch gen-passphrase)"

Health Checks

When --metrics-port is set, Swytch exposes:

| Endpoint | Description |
|---|---|
| /health | Returns HTTP 200 with body ok. Use for liveness and readiness probes |
| /metrics | Prometheus metrics scrape endpoint |
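
A minimal Prometheus scrape job for the metrics endpoint might look like this (target hostnames are placeholders):

```yaml
scrape_configs:
  - job_name: swytch
    metrics_path: /metrics
    static_configs:
      - targets: ["cache-1:9090", "cache-2:9090"]
```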

Signal Handling

Swytch handles POSIX signals for graceful shutdown:

| Signal | Behavior |
|---|---|
| SIGTERM | Graceful shutdown: stop accepting connections, complete in-flight commands, shut down cluster, exit |
| SIGINT | Same as SIGTERM (Ctrl+C) |

For container orchestration, send SIGTERM to stop the server gracefully. There is no drain period configuration; shutdown completes as fast as pending work allows.
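
On Kubernetes, the effective upper bound on this graceful shutdown is the pod's termination grace period, after which the kubelet sends SIGKILL. A sketch of the relevant pod spec field:

```yaml
spec:
  # Time allowed between SIGTERM and SIGKILL; give in-flight work room to finish.
  terminationGracePeriodSeconds: 30
```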

Resource Sizing

Container Memory

Set the container memory limit higher than --maxmemory to account for connection buffers, Lua scripting overhead, and Go runtime:

container_memory = maxmemory + max(512MB, maxmemory * 0.2)

For example, --maxmemory=4gb needs approximately 4.5-5 GB container memory.
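
The sizing rule can be written out as a small helper (values in MB; function name illustrative):

```python
def container_memory_mb(maxmemory_mb: int) -> int:
    """Container limit = maxmemory + max(512 MB, 20% of maxmemory)."""
    return maxmemory_mb + max(512, int(maxmemory_mb * 0.2))

print(container_memory_mb(4096))  # 4915 MB, within the 4.5-5 GB guidance above
```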

CPU

Swytch uses all available CPUs by default (--threads=0). Override with --threads=N to limit.

See the Sizing and Capacity Planning guide for detailed memory estimation and scaling guidance.