Swytch vs CockroachDB
CockroachDB and Swytch are both serializable, distributed SQL databases that run active across regions. That’s the shared ground. If you stopped reading there and flipped a coin, you’d be fine either way for a lot of workloads.
The difference — the only difference that actually matters — is how they get to serializable.
CockroachDB gets there through consensus. Every write goes through a Raft group. A quorum of replicas has to agree before the write is committed. It’s a mature, well-understood approach, implemented carefully, and it works.
Swytch gets there through the absence of consensus. No Raft. No Paxos. No quorum vote per write. Writes commit synchronously to the nodes subscribed to the data, using the causal DAG to order events and Light Cone Consistency to guarantee serializability.
| | CockroachDB | Swytch |
|---|---|---|
| Deployment | Standalone binary, distributed cluster | Standalone binary, one or many nodes |
| Nodes | One or more, Raft-coordinated | One or more, active-active |
| Writers | Leaseholder per range, Raft quorum per write | Every node, concurrently |
| Network needed? | Yes, between nodes and clients | Yes, between nodes and clients |
| Wire protocol | PostgreSQL wire | PostgreSQL wire (SQL mode) or Redis RESP (Redis mode), per binary |
| SQL dialect | Postgres (broad subset) | SQLite |
| Replication | Raft consensus per range (quorum) | Synchronous to interested nodes (no vote) |
| Partition handling | Minority partition loses availability | First-class, holographic divergence |
| Durability | Disk, MVCC, replicated via Raft | In-memory across subscribed nodes by default; disk via Swytch Cloud |
| Write latency | Raft round-trip (quorum within group) | ~1 RTT to furthest subscriber |
| Read latency | Leaseholder RTT (stale reads available as follower reads) | ~microseconds (local node) |
| Operational surface | Mature, well-understood | One binary, one config |
| License | CockroachDB Software License (source-available) | AGPL (open source) + commercial |
Raft-based consensus is one of the great engineering achievements of the last fifteen years. It’s the reason you can trust a distributed database with your data. It solves a genuinely hard problem (getting a group of machines to agree on a sequence of events, even when some of them fail) and it solves it with proofs.
Here’s what consensus buys CockroachDB:
- A global ordering for every range. Every write to a range goes through the leaseholder and Raft log, in a defined order. That ordering is what makes serializability straightforward to reason about at the range level.
- Strong reads from the leaseholder. Because the leaseholder has committed every write it knows about, a read from the leaseholder is guaranteed up-to-date. You don’t need a second round of consensus to answer a query.
- Well-understood failure semantics. If a minority of replicas fails, the majority continues. If a majority fails, writes halt. The rules are clean, documented, and provable.
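The quorum rule behind all three of these properties is simple enough to sketch. This is a toy illustration of majority-ack commit, not CockroachDB’s actual Raft implementation (which also handles terms, elections, and log repair):

```python
# Toy sketch of quorum commit: a write counts as committed once a
# majority of replicas has appended it to its log. Illustrative only;
# not CockroachDB's actual Raft code.

def quorum_commit(replica_logs, entry, acks_received):
    """Append `entry` on the acking replicas; report whether a
    majority acknowledged it (the commit condition)."""
    quorum = len(replica_logs) // 2 + 1
    for i in acks_received:
        replica_logs[i].append(entry)
    return len(acks_received) >= quorum

logs = [[], [], []]  # a three-replica range
print(quorum_commit(logs, "w1", acks_received=[0, 1]))  # True: 2 of 3
print(quorum_commit(logs, "w2", acks_received=[0]))     # False: no quorum
```

The same rule explains the clean failure semantics: lose a minority and a quorum still exists; lose a majority and no set of acks can reach it.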
Here’s what it costs:
- Every write waits for a quorum. A CockroachDB write — even a trivial one — isn’t committed until a quorum of replicas has acknowledged it. In a three-replica range, that’s a round-trip to at least one other replica. If the range spans regions (because you want survivability), that’s a cross-region round-trip. Every write. No exceptions.
- The leaseholder is a hot spot. All reads and writes for a range funnel through one node (the leaseholder) at any given moment. CockroachDB balances leaseholders across nodes to spread load, but for any individual range there’s a single decision point.
- Leaseholder transfers cost availability. When a leaseholder fails or needs to move, there’s a brief window where writes stall while a new leaseholder is elected. CockroachDB handles this gracefully, but the window is not zero.
- Leader election is non-deterministic. Raft uses randomized election timeouts to avoid vote-splitting, and most of the time this resolves in milliseconds. It doesn’t always. Under adversarial network conditions (asymmetric partitions, specific packet-loss patterns, or a recovering region where hundreds of ranges are all electing new leaders simultaneously), an election can drag out into seconds or, rarely, minutes. It happens, it’s documented, and production CockroachDB operators have seen it. A consensus-free architecture doesn’t have an election to fail because it doesn’t have a leader to elect.
- Geo-distribution requires thought. Getting good write latency in a multi-region CockroachDB cluster means placing leaseholders carefully, using regional tables, or accepting the cross-region quorum cost. There’s significant documentation about how to do this well; it’s not automatic.
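The latency cost is easiest to see with numbers. The RTT figures below are hypothetical, chosen only to show the shape of the tradeoff, not measured from either system:

```python
# Hypothetical ack round-trip times (ms) as seen from the coordinating
# node. Illustrative values only, not benchmark data.
replica_rtt = [1, 60, 80]  # local replica, region B, region C

def quorum_latency(rtts):
    """Quorum commit waits for the fastest majority of replicas."""
    acks = sorted(rtts)
    quorum = len(acks) // 2 + 1
    return acks[quorum - 1]

def subscriber_latency(rtts):
    """Commit to subscribed nodes only waits for the slowest subscriber."""
    return max(rtts) if rtts else 0

print(quorum_latency(replica_rtt))  # 60: the majority must include region B
print(subscriber_latency([1]))      # 1: only a local node is subscribed
```

With replicas spread across regions for survivability, the majority necessarily includes a cross-region ack, so every write pays that round trip; a subscriber model pays only for the nodes that actually hold the data.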
None of this is a criticism of CockroachDB. It’s the shape of consensus-based distributed SQL. These are the tradeoffs Raft makes, done well.
Swytch’s core architectural claim is that you don’t need consensus to get serializability. That sounds like magic, so let’s be specific about what it actually is and isn’t.
Swytch uses the causal DAG (the structural record of what happened before what) to order events. When two transactions touch the same data concurrently, Swytch detects the conflict at each node independently and orders them deterministically from the DAG’s structure. No communication required. No voting. Every node arrives at the same decision because the input (the DAG itself) is the same. And exactly one of the two transactions succeeds. Not “maybe both retry” or “maybe both fail” — exactly one. Serializability falls out of this: every transaction either commits atomically into the DAG in a position consistent with everything around it, or loses cleanly to the transaction that got there first.
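The determinism claim is the load-bearing part, and it can be illustrated with a toy: two nodes, given the same pair of concurrent conflicting transactions, reach the same winner with no communication. The tiebreak below (a hash of the transaction ID) is a stand-in of my own; Swytch derives its ordering from the DAG’s structure, not from hashes:

```python
import hashlib

def winner(concurrent_txns):
    """Pick exactly one winner among concurrent conflicting txns.
    Any deterministic function of the txns' shared identity works:
    every node evaluates it locally and gets the same answer, so no
    vote is needed. (Stand-in tiebreak, not Swytch's real ordering.)"""
    return min(concurrent_txns,
               key=lambda t: hashlib.sha256(t.encode()).hexdigest())

# Two nodes resolve the same conflict independently:
node_a = winner(["txn-42", "txn-17"])
node_b = winner(["txn-17", "txn-42"])  # same set, different arrival order
assert node_a == node_b                # same decision on both nodes
```

The point is not the hash; it is that the decision is a pure function of shared input, which is why exactly one transaction wins everywhere.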
The consequences:
- No quorum round-trip per write. A write commits when it reaches the nodes subscribed to the data — which might be one node, might be five, depending on who actually needs to see the write. You don’t pay for replicas that didn’t care.
- Every node reads locally, once subscribed. The first read of a piece of data on a given node triggers a subscription (a one-time network hop to establish which nodes are interested). After that, the data stays local, kept current by synchronous replication. Every subsequent read is local, every time. No leaseholder redirection, no stale-versus-current distinction.
- Every node accepts writes. No leader, no leaseholder, no election when a node goes down. You don’t lose write availability when a coordinator fails, because there is no coordinator.
- Partitions are first-class. When the network splits, both sides keep accepting writes to whatever they’re subscribed to. When the partition heals, the causal DAG handles reconciliation — automatically for non-conflicting writes, with a preserved history for anything that did conflict.
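A sketch of what those reconciliation rules might look like at heal time. This flattens the model to a key-value map for brevity; the real causal DAG carries full history, and the deterministic pick here is again a stand-in:

```python
# Illustrative partition heal: both sides kept accepting writes.
# Non-conflicting keys merge cleanly; conflicting keys resolve to one
# deterministic winner, with the losing write preserved in history.
# A stand-in for the causal DAG, not Swytch's actual reconciliation.

def heal(side_a, side_b):
    merged, history = {}, {}
    for key in side_a.keys() | side_b.keys():
        a, b = side_a.get(key), side_b.get(key)
        if a is None or b is None or a == b:
            merged[key] = a if a is not None else b  # no conflict
        else:
            win, lose = sorted([a, b])               # deterministic pick
            merged[key] = win
            history[key] = [lose]                    # conflict preserved
    return merged, history

merged, history = heal({"x": "a1", "y": "a2"}, {"x": "b1", "z": "b3"})
print(merged)   # y and z merge automatically; x gets one winner
print(history)  # x's losing write survives in history
```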
Reach for CockroachDB when:
- You want a distributed SQL database with a mature, well-documented operational story.
- Your workload is cleanly shardable and the consensus-per-range cost is acceptable given your write latency budget.
- You’re comfortable with the leaseholder model and you’ve read enough CockroachDB documentation to place your tables and leaseholders intentionally.
Reach for Swytch when:
- You want writes that don’t wait for quorum, especially when your cluster spans regions.
- You want reads that are unconditionally local, with no leaseholder redirection and no “follower reads are stale, leaseholder reads are current” cognitive overhead.
- You have nodes that go offline (edge devices, ships, remote sites, anything with unreliable connectivity) and you need them to keep accepting writes while disconnected.
- You want to run active-active across regions without architecting carefully around leaseholder placement.
- You’re building something that leans into the consensus-free architecture rather than working around the consensus one.
CockroachDB and Swytch are both serializable distributed SQL databases, and they get to serializable along different paths. CockroachDB uses Raft consensus: a quorum of replicas acknowledges every write, coordinated by a leaseholder per range. Swytch uses the causal DAG: concurrent transactions are detected and ordered at each node independently, with every node deterministically reaching the same decision from the same DAG input. Both approaches are formally grounded. They produce different latency profiles, different failure modes, and different operational shapes. CockroachDB is the reference implementation of the consensus approach. Swytch is what a serializable distributed database looks like when consensus isn’t in the picture.