Architecture
Swytch is a distributed database without a leader, built around four architectural decisions that reinforce each other: a sidecar deployment model, effect-based replication, the causal DAG, and Light Cone Consistency.
This page walks through each in turn; later sections build on the ones before them.
Swytch runs next to your application (the same host, either as a sidecar process or embedded in the application runtime). Your application talks to its local Swytch node over a Unix socket or localhost TCP. No load balancer in the path, no remote cache server, no round-trip to another machine.
This is the recommended deployment. A Swytch binary can also listen on a network interface (port 5433 for SQL mode, 6379 for Redis mode) if you need that shape, but the design is tuned for local co-location with your app.
The network hop is between Swytch nodes, not between your app and Swytch. Nodes discover each other via DNS and communicate over QUIC with mutual TLS. When a write happens, it’s replicated to the other nodes that care about that data (more on this below). Your application doesn’t see that replication; it sees a synchronous local write to its sidecar.
The payoff: reads are local after subscription. Writes commit once they’ve reached subscribed nodes. The database is distributed, but from the application’s perspective it behaves like a very fast local store.
A Swytch binary runs in one of two modes, chosen at startup:
- Redis mode speaks RESPv2 and RESPv3 on port 6379.
- SQL mode speaks the PostgreSQL wire protocol on port 5433, with a SQLite query engine underneath.
One binary, one mode. If you want both protocols against the same data, run two binaries that share the same underlying DAG. One cluster, two wire-protocol faces.
The mode determines the wire protocol and the command semantics. Both modes share the clustering layer, the effect model, the causal DAG, and LCC. Everything below this section applies to both modes unless noted.
Every write in Swytch produces an effect (an immutable record of what changed, carrying enough information to replicate the change to other nodes and apply it correctly).
Effects aren’t diffs, aren’t write-ahead log entries, aren’t replication frames. They’re the primary unit of the database. The causal log is the sequence of effects. Replication is the process of effects flowing between nodes. There’s no separate replication stream.
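As a toy illustration of what an effect carries (illustrative Python only; the field names are hypothetical, not Swytch's actual wire format), note that the causal dependencies travel inside the record itself:

```python
from dataclasses import dataclass

# Hypothetical sketch of an effect record. Field names are illustrative,
# not Swytch's actual schema.
@dataclass(frozen=True)
class Effect:
    id: str            # globally unique effect id
    key: str           # the key or row this effect touches
    op: str            # operation kind: "set", "incr", "insert", ...
    payload: object    # what changed (e.g. new value, delta)
    deps: tuple = ()   # ids of causal dependencies -- always present

# The causal log is simply the sequence of effects:
log = [
    Effect("e1", "user:1", "set", "alice"),
    Effect("e2", "user:1", "set", "bob", deps=("e1",)),
]
```

Because the record is immutable and self-describing, replication is just moving these records between nodes; there is no second, separate replication format.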
Every effect carries pointers to its causal dependencies. Not “sometimes, for operations that need them” — every effect, unconditionally. That’s what makes the DAG a DAG rather than an unordered bag of operations, and it’s what lets nodes handle forks correctly when branches appear and merge. Without the dependency pointer, an LWW effect on one branch would erase the other branch’s history from a receiving node’s perspective; with it, the DAG preserves the full structure regardless of how resolution eventually plays out.
What differs between operation types is how much of the causal past a node needs to walk before the effect is meaningful:
- Last-writer-wins operations (SET in Redis, row UPDATE in SQL) sit at the shallow end. The effect says “the value is now X”; whoever arrives latest in DAG order wins. The dependency pointers are still there (they’re what makes fork handling and reconciliation possible) but apply-time doesn’t require them.
- Counter-style operations (INCR, DECR) need the prior value resolved before the effect is meaningful. “+5” means nothing without knowing what it’s adding to.
- Causal collection operations (list and set insertions, sorted-set updates, hash-field writes) need enough of the collection’s causal history to know where the new element belongs, whether it conflicts with concurrent operations on the same structure, and how to merge with other branches. A concurrent insert into the same sorted set on two partitioned nodes produces a deterministic merged result because the DAG tells both sides exactly what the other saw.
This is Swytch’s alternative to CRDTs. CRDTs restrict you to data types whose merge function is commutative and associative by construction — valuable, but narrow. Causal-collection operations get deterministic merges from the DAG itself, which means the full Redis command set (lists, sorted sets, hashes, streams) and SQL’s row operations all merge correctly without being re-engineered as CRDTs.
The shape of a subscription follows from the shape of its operations. An LWW-heavy dataset can often be served with current state plus incremental effects. A dataset full of counters or causal collections needs more of the DAG walked to reconstruct state correctly. The structure is the same either way; only the depth of the causal past a node has to look into differs.
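The depth difference can be sketched in a few lines (illustrative Python, not Swytch's implementation): an LWW effect resolves without any traversal, while a counter-style effect has to fold over its causal chain to find the base value.

```python
# Illustrative sketch: how much of the causal past each operation type
# needs at apply time. Effect ids and fields are hypothetical.
effects = {
    "e1": {"op": "set",  "value": 10,  "deps": []},      # LWW: value is simply 10
    "e2": {"op": "incr", "delta": 5,   "deps": ["e1"]},  # needs e1 resolved first
    "e3": {"op": "incr", "delta": -2,  "deps": ["e2"]},  # needs e2, hence e1
}

def resolve(eid):
    """Walk only as deep as the operation requires."""
    e = effects[eid]
    if e["op"] == "set":
        return e["value"]             # shallow: no traversal needed
    base = resolve(e["deps"][0])      # counter: recurse into the causal past
    return base + e["delta"]

assert resolve("e1") == 10   # one vertex
assert resolve("e3") == 13   # three vertices: 10 + 5 - 2
```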
In practice, “how deep” is bounded. Swytch’s implementation has extensive optimizations around compacting state, summarizing resolved sub-DAGs, and avoiding redundant traversal. The amount a subscription actually has to move across the network is a function of recent activity, not historical volume. The DAG is conceptually a growing structure; the operational footprint is not.
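Compaction is why the depth stays bounded. A hypothetical sketch (not Swytch's actual mechanism): once a counter chain is fully resolved, a single snapshot effect can stand in for the sub-DAG, so a new subscriber receives one vertex instead of the whole history.

```python
# Illustrative sketch of compacting a resolved counter chain into one
# snapshot effect. Structure and field names are hypothetical.
chain = [
    {"id": "e1", "op": "set",  "value": 10, "deps": []},
    {"id": "e2", "op": "incr", "delta": 5,  "deps": ["e1"]},
    {"id": "e3", "op": "incr", "delta": 2,  "deps": ["e2"]},
]

def compact(chain):
    value = 0
    for e in chain:
        value = e["value"] if e["op"] == "set" else value + e["delta"]
    # One summary vertex stands in for the resolved sub-DAG.
    return {"id": chain[-1]["id"], "op": "set", "value": value, "deps": []}

snapshot = compact(chain)
assert snapshot["value"] == 17
```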
The causal DAG is the structural record of what happened before what. Every effect is a vertex; every causal edge is a “this effect depended on that one” relationship. The DAG grows monotonically as writes happen.
Every node in a Swytch cluster eventually sees the same DAG structure for the data it subscribes to, bounded by propagation delay. There’s no way around propagation delay; information travels at the speed of the network. What Swytch guarantees is that once two nodes have the same DAG state, they agree on everything that can be derived from it (without needing to exchange additional messages to converge on an interpretation). Same DAG in, same answers out.
This is what makes the leaderless architecture work. When two transactions conflict, every node orders them deterministically from the DAG alone, without voting or communicating, because every node has the same DAG. Same input, same rules, same decision.
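“Same DAG in, same answers out” only requires a rule every node can evaluate locally. As a minimal sketch (illustrative, not Swytch's actual ordering rule): take a topological order of the DAG and break ties deterministically, here by effect id.

```python
from graphlib import TopologicalSorter

# Sketch of deterministic ordering derived from the DAG alone.
# The tie-break rule (sorted effect ids) is illustrative, not Swytch's.
def dag_order(dag):
    # dag maps effect id -> set of causal dependency ids
    ts = TopologicalSorter(dag)
    ts.prepare()
    order = []
    while ts.is_active():
        ready = sorted(ts.get_ready())   # deterministic tie-break
        order.extend(ready)
        for eid in ready:
            ts.done(eid)
    return order

# Two nodes holding the same DAG compute the same order -- no voting,
# no messages exchanged.
dag = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
assert dag_order(dag) == dag_order(dict(dag))
```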
Swytch nodes form a leaderless cluster. Every node accepts reads and writes. No primary. No leader election. No quorum.
Nodes discover each other through DNS; you point --join at a DNS name that resolves to cluster peers, and nodes find their way from there. Cluster traffic runs over QUIC with mutual TLS, authenticated by a shared passphrase configured at startup.
Replication is subscription-driven. A node subscribes to the tables or keys its application actually touches, and only receives effects for data it’s subscribed to. Writes commit synchronously: a write is acknowledged once the effect has reached the subscribed nodes. You pay one RTT to the furthest subscriber, not to every replica in the cluster.
The first read of a piece of data on a given node triggers the subscription (a one-time network hop to establish which nodes hold it). Subsequent reads are served locally.
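The subscription lifecycle can be sketched as follows (illustrative Python; class and method names are hypothetical, not Swytch's API): the first read pays one network hop to subscribe, later reads are local, and effects for unsubscribed data are never materialized on the node.

```python
# Illustrative sketch of subscription-driven replication.
class Node:
    def __init__(self):
        self.subscriptions = set()
        self.store = {}

    def read(self, key, cluster):
        if key not in self.subscriptions:        # first read: subscribe
            self.subscriptions.add(key)
            self.store[key] = cluster.get(key)   # one-time network hop
        return self.store[key]                   # later reads are local

    def receive(self, key, value):
        if key in self.subscriptions:            # only subscribed data lands
            self.store[key] = value

cluster = {"user:1": "alice", "user:2": "bob"}
n = Node()
assert n.read("user:1", cluster) == "alice"      # triggers subscription
n.receive("user:2", "carol")                     # dropped: not subscribed
assert "user:2" not in n.store
```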
When two nodes write to the same data concurrently (the classic distributed-systems fork), Swytch resolves the conflict deterministically from the DAG. Same DAG at every node, same rules, same decision. No voting. No coordination.
The mechanics differ by operation type:
- Non-transactional writes in Redis mode use commutative, merge-safe operations where the semantics allow. Two partitioned nodes each incrementing the same counter will agree on the total when they reunite.
- Non-transactional writes in SQL mode (row-level UPDATEs and INSERTs outside a transaction) resolve by last-writer-wins in DAG order when they conflict. Not wall-clock; the DAG has its own deterministic ordering that every node agrees on because every node has the same DAG.
- Transactional writes in either mode (BEGIN/COMMIT in SQL, MULTI/EXEC in Redis) that conflict across a partition don’t resolve automatically. Each side’s transaction was internally valid as a serializable ordering of its own side’s history, and neither can silently overwrite the other. That’s holographic divergence, and it’s the case that requires Swytch Cloud to arbitrate between the branches.
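The commutative case can be sketched concretely (illustrative Python, not Swytch's implementation): two partitioned nodes each append an increment effect with the same causal parent; when the partition heals, both sides hold the union of the two branches and derive the same total.

```python
# Illustrative sketch of fork reconciliation for a commutative counter.
# Effect ids and field names are hypothetical.
base  = {"e0": {"delta": 10, "deps": []}}
left  = {**base, "eL": {"delta": 3, "deps": ["e0"]}}   # written on node A
right = {**base, "eR": {"delta": 4, "deps": ["e0"]}}   # written on node B

def merge(a, b):
    return {**a, **b}   # union of effects; the DAG keeps both branches

def total(dag):
    return sum(e["delta"] for e in dag.values())  # increments commute

# After the partition heals, both nodes hold the same merged DAG and
# agree on the total without coordinating.
healed_a = merge(left, right)
healed_b = merge(right, left)
assert total(healed_a) == total(healed_b) == 17
```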
Swytch implements Light Cone Consistency (LCC), a framework that describes every message-passing system as the same three-parameter function over a causal DAG.
In practice, Swytch defaults to two consistency levels:
- Serializable consistency for transactions. Jepsen-tested. Achieved through an envelope-commit model: a transaction becomes a single vertex in the DAG whose causal edges point to every operation inside, and it commits atomically or not at all. When two transactions conflict, every node detects the conflict and orders them deterministically from the DAG. Same DAG at every node, same rules, same result. Exactly one transaction commits; the other fails cleanly. No voting, no communication between nodes to decide.
- Causal consistency for everything else. If you wrote A, and then wrote B after seeing A, any observer that sees B also sees A. No reversed-time anomalies.
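One common way to realize the causal guarantee is causal delivery: an effect becomes visible only once everything in its causal past is visible. A minimal sketch of that idea (illustrative, not necessarily Swytch's mechanism):

```python
# Illustrative sketch of causal delivery: B (written after seeing A)
# is never made visible before A, even if it arrives first.
effects = {
    "A": {"deps": []},
    "B": {"deps": ["A"]},   # B causally depends on A
}

def deliverable(delivered, eid):
    return all(d in delivered for d in effects[eid]["deps"])

delivered = set()
inbox = ["B", "A", "B"]   # B arrives first over the network
for eid in inbox:
    if deliverable(delivered, eid):
        delivered.add(eid)   # B is deferred until A has landed

assert delivered == {"A", "B"}   # any observer of B has also seen A
```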
For the formal treatment of LCC, see the Light Cone Consistency paper and the Leaderless Transactions paper.
By default, Swytch data lives in RAM across the nodes subscribed to it. Writes are durable in the sense that they exist on multiple nodes; if any individual node dies, the data is still on the other subscribers. What the default model does not give you is survival of a full-cluster restart: if every subscriber for a piece of data goes down simultaneously, that data is gone.
Swytch Cloud is the control plane that adds distributed durable storage. Writes land on disk, survive full-cluster restart, and the causal history is preserved across time. Cloud also arbitrates holographic divergence when transactional writes conflict across a partition.
Your data still lives on your nodes. Swytch Cloud coordinates durability and cross-region reconciliation; it isn’t a hosted database service.
┌─────────────────────┐ ┌─────────────────────┐
│ App Server (EU) │ │ App Server (US) │
│ │ │ │
│ ┌───────────────┐ │ │ ┌───────────────┐ │
│ │ Swytch │◄─┼───────┼──► Swytch │ │
│ └──────┬────────┘ │ │ └──────┬────────┘ │
│ │ unix sock │ │ │ unix sock │
│ ┌──────┴────────┐ │ │ ┌──────┴────────┐ │
│ │ Application │ │ │ │ Application │ │
│ └───────────────┘ │ │ └───────────────┘ │
└─────────────────────┘ └─────────────────────┘
│ │
│ QUIC + mTLS │
│ (effects replication) │
└──────────┬───────────────────┘
│
┌─────────┴──────────┐
│ Swytch Cloud │
│ (optional) │
│ │
│ • Discovery │
│ • Durable storage │
│ • Divergence │
│ arbitration │
└────────────────────┘
- Application → local Swytch node: Unix socket or localhost TCP. All reads served here.
- Swytch ↔ Swytch: QUIC + mTLS. Effects replicate synchronously to subscribed nodes.
- Swytch → Swytch Cloud: Optional. Provides discovery, durable storage, and divergence arbitration. Not in the read or write path for the default in-memory deployment.