What is Swytch?
Swytch is a distributed database without a leader.
No primary, no consensus protocol, no coordinator whose health the cluster depends on. Writes commit synchronously to the nodes subscribed to the data. Reads are answered locally, once a node has subscribed. Every node accepts writes; every node serves reads. The database is serializable (whether in Redis or SQL mode), and every node arrives at the same ordering decisions without coordinating to agree.
It’s one binary per node. At startup you choose whether to run it as a Redis node or a SQL node. Deploy it as a sidecar next to your application, using the RAM you already overprovision for your app (Swytch takes a memory budget as a percentage of available memory). Your app talks to its local Swytch over a Unix socket or localhost TCP. Nodes find each other through DNS and replicate over QUIC with mutual TLS. No central service sits in the middle; nodes talk to each other directly. Scaling the cluster is adding another binary to another host and pointing it at the same DNS name.
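To make the deployment shape concrete, startup configuration might look something like the sketch below. Every field name here is illustrative (this section doesn't document Swytch's actual configuration keys); the point is what a node needs to know: its mode, its local listener, its memory budget as a percentage, and the DNS name the cluster shares.

```toml
# swytch.toml — hypothetical sketch; field names are illustrative,
# not Swytch's documented configuration
mode = "redis"                            # or "sql"; fixed per binary at startup
listen = "unix:///var/run/swytch.sock"    # or a localhost TCP address
memory_budget_percent = 40                # share of available RAM Swytch may use
discovery_dns = "swytch.internal.example" # every node points at the same name
```

Scaling out is then exactly what the text describes: start another binary on another host with the same `discovery_dns` value.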
Swytch runs in one of two modes, chosen when you start a node:
- Redis mode speaks RESPv2 and RESPv3.
- SQL mode speaks the PostgreSQL wire protocol, with a SQLite query engine underneath.
A given Swytch binary runs in exactly one of those modes. You can run two binaries sharing the same underlying DAG if each mode operates on disjoint data: Redis mode owns its keys, SQL mode owns its tables, and the two modes don’t touch each other’s namespace. If you want full isolation, you run two clusters with separate DAGs.
⚠️ Sharing a DAG across modes is a footgun.
A Redis-mode binary sharing a DAG with a SQL-mode binary can address every key in the DAG — not just the data SQL exposes as rows, but also the keys SQL uses internally to represent schemas, tables, indexes, and system catalogs. Redis has no way to know which keys are safe to touch. A stray `HSET` against the wrong key can corrupt the SQL side permanently. The “two binaries, one DAG” pattern is real and supported, but only for disjoint data: let Redis mode own its keys, let SQL mode own its tables, and don’t have the two modes read or write each other’s namespace.
Swytch nodes form a leaderless cluster. Nodes discover each other through DNS and communicate over QUIC with mutual TLS.
Replication is subscription-driven. A node subscribes to the tables or keys its application actually touches, and only receives writes for data it’s subscribed to. When a write happens, it commits synchronously to the subscribed nodes (one round trip to the furthest subscriber, not to every replica in the cluster). Reads are local after the first one: the first read on a given node triggers a subscription; subsequent reads are answered from memory.
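The subscription mechanics above can be sketched as a toy in-memory model. This is an illustration of the behavior described (writes fan out only to subscribers, a first read triggers a subscription, later reads are local), not Swytch's implementation; the `Node` class and its methods are invented for this sketch.

```python
# Toy model of subscription-driven replication (illustrative only).
class Node:
    def __init__(self, name, cluster):
        self.name = name
        self.store = {}       # local in-memory copy of subscribed data
        self.subs = set()     # keys this node is subscribed to
        self.cluster = cluster
        cluster.append(self)

    def write(self, key, value):
        # Commit synchronously to every current subscriber of the key.
        self.subs.add(key)
        for node in self.cluster:
            if key in node.subs:
                node.store[key] = value

    def read(self, key):
        if key not in self.subs:
            # First read on this node: subscribe, then pull from a peer.
            self.subs.add(key)
            for node in self.cluster:
                if key in node.store:
                    self.store[key] = node.store[key]
                    break
        return self.store.get(key)  # subsequent reads are answered locally

cluster = []
a, b, c = Node("a", cluster), Node("b", cluster), Node("c", cluster)
a.write("user:1", "alice")   # only a holds it; b and c never subscribed
print(b.read("user:1"))      # b's first read subscribes it → alice
a.write("user:1", "alice2")  # fans out to a and b, but not c
print("user:1" in c.store)   # c pays no replication cost → False
```

Note that the second write reaches `b` without `b` asking: that is the "commit synchronously to the subscribed nodes" behavior, and `c` stays out of the replication path entirely.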
Under the hood, every write produces an effect (an immutable record carrying the operation and pointers to its causal dependencies). Effects form a causal DAG, and every node sees the same DAG structure for the data it subscribes to. When two transactions conflict, every node detects the conflict and orders them deterministically from the DAG. Same DAG at every node, same rules, same result. Exactly one commits; the other fails cleanly. No voting, no communication to decide.
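A minimal sketch of coordination-free conflict resolution follows. The tiebreak rule here (lowest effect hash wins) is an assumption for illustration, not Swytch's actual rule; the property it demonstrates is the one the text claims: given the same DAG and the same deterministic rule, every node reaches the same decision with no messages exchanged.

```python
# Sketch: deterministic ordering of conflicting effects in a causal DAG.
# The "lowest hash wins" tiebreak is illustrative, not Swytch's real rule.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Effect:
    op: str            # the operation this effect carries
    deps: tuple = ()   # ids of causal dependencies (the DAG edges)

    @property
    def id(self):
        return hashlib.sha256(repr((self.op, self.deps)).encode()).hexdigest()

def resolve(x, y):
    """Pick the committing effect from two conflicting concurrent effects.
    Pure function of the effects themselves, so every node agrees."""
    winner = min(x, y, key=lambda e: e.id)
    loser = y if winner is x else x
    return winner, loser   # exactly one commits; the other fails cleanly

base = Effect("SET x 0")
e1 = Effect("SET x 1", deps=(base.id,))  # same causal parent as e2,
e2 = Effect("SET x 2", deps=(base.id,))  # so e1 and e2 conflict

w1, _ = resolve(e1, e2)
w2, _ = resolve(e2, e1)      # argument order (node perspective) is irrelevant
print(w1.op == w2.op)        # → True: same DAG, same rule, same result
```

Because `resolve` depends only on the effects' immutable content, "no voting, no communication to decide" falls out for free: any node holding the same two effects computes the same winner.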
Read more about the architecture →
Most distributed databases treat network partitions as a failure mode. Writes stall. Somebody gets paged. When the partition heals, there’s a postmortem.
By default, Swytch does the same: writes to data that can’t reach its subscribers return an error, and the partition shows up as unavailability the same way it would anywhere else. That’s the conservative choice, and it’s what you get out of the box.
What Swytch adds is the option to do something different: holographic divergence, a mode where both sides of a partition keep accepting writes, diverge cleanly, and reconcile when the network heals. The divergence is first-class (recorded in the causal DAG as structure, not caught as an exception) and the causal history is preserved through reconciliation. Non-conflicting writes merge automatically. Conflicts produce two valid histories that can be walked back through.
Holographic divergence is a design tool, not a universal default. You reach for it when you know a partition is coming and want the disconnected side to keep working: field equipment on remote sites, nodes that drop uplink for expected windows, anywhere the network going down is a scheduled part of the workflow rather than an incident. Writing against holographic keys means designing for the reconciliation (knowing what “both sides made a valid decision” looks like for your data, and having a plan for when they disagree).
Consider a lottery run across a partitioned cluster. Side A announces Alice won. Side B announces Bob won. Both sides are internally consistent. When the partition heals, Swytch doesn’t pick a winner; it can’t, because that’s a business decision rather than a database decision. What it does give you is a causal record precise enough that you (or a reconciliation script) can decide which branch is canonical. The shape of the recovery is your application’s responsibility.
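The lottery scenario above can be sketched as a reconciliation pass. This is a toy: the branches are plain dicts rather than DAG walks, and the key names and the `policy` callback are hypothetical. What it shows is the division of labor the text describes: non-conflicting writes merge automatically, and conflicts are handed to an application-supplied rule.

```python
# Toy reconciliation over two diverged branches (illustrative only).
def reconcile(branch_a, branch_b, policy):
    merged, conflicts = {}, {}
    for key in branch_a.keys() | branch_b.keys():
        va, vb = branch_a.get(key), branch_b.get(key)
        if va is None or vb is None or va == vb:
            merged[key] = va if va is not None else vb  # merges automatically
        else:
            conflicts[key] = (va, vb)           # both sides wrote: two valid histories
            merged[key] = policy(key, va, vb)   # a business decision, not a DB decision
    return merged, conflicts

# Each side of the partition kept accepting writes:
side_a = {"lottery:winner": "alice", "tickets:sold": 100}
side_b = {"lottery:winner": "bob", "draw:closed": True}

# The database can't pick a winner; the application supplies the rule.
prefer_side_a = lambda key, va, vb: va
merged, conflicts = reconcile(side_a, side_b, prefer_side_a)
print(merged["lottery:winner"])   # → alice (the policy's choice)
print(merged["tickets:sold"])     # → 100 (merged automatically)
print(list(conflicts))            # → ['lottery:winner']
```

The `conflicts` map is the useful artifact: it is the "causal record precise enough" piece, scoped down to exactly the keys where a human or a reconciliation script has to choose a canonical branch.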
Read more about designing for partitions →
Swytch’s architecture isn’t ad hoc. It’s a direct implementation of Light Cone Consistency, a formal framework that describes every message-passing system as the same three-parameter function over a causal DAG. Pick your parameters and you get a consistency model.
Swytch picks serializable consistency for transactions and causal consistency for everything else, by default. Those are points on the LCC map; the framework describes the whole map.
By default, Swytch data lives in RAM across the nodes subscribed to it. If a node dies, the data is still on the other subscribers. If every subscriber for a given piece of data dies simultaneously, that data is gone.
Swytch Cloud adds distributed durable storage; writes land on disk, survive full-cluster restart, and the causal history is preserved. Your data still lives on your nodes; Swytch Cloud is the control plane that coordinates durability and cross-region reconciliation, not a hosted database service.
If your failure model requires data to survive “everything is on fire at the same time,” you want Swytch Cloud. If your failure model is about individual node failures and recovering via the rest of the cluster, the default model is what you want.