Swytch vs Cloudflare D1
D1 is Cloudflare’s SQLite-at-the-edge database. It runs as a Durable Object, it’s tightly integrated with Workers, and it ships with a free tier that will carry most hobby projects forever. It’s also, if you read its own documentation carefully, a single-writer database with asynchronous read replicas — which is a genuinely useful shape for a lot of workloads, and a terrible shape for others.
Swytch is a distributed SQL database you run yourself, where every node accepts writes and replicates synchronously to the nodes that care about the same data. Different shape. Different tradeoffs. This page exists because if you’re already happy with D1, you probably shouldn’t switch — and if D1’s model is biting you, you probably should.
Let’s get into it.
D1 has one writer. Swytch has as many writers as you have nodes.
Everything else is a consequence of that difference.
D1’s primary database lives in one region. All writes go there. Read replicas — asynchronous, read-only — live in other regions and serve reads with some lag. If your users are in Singapore and your primary is in Virginia, their writes make a round trip to Virginia. Every time.
Swytch has no primary. Every node accepts writes. A write commits when it reaches the nodes subscribed to that table — which might be one node, might be five, might be twenty, depending on who cares about the data. No leader election, no failover dance, no “the write must go to Virginia.”
This isn’t a performance optimization. It’s a different product category. If you need active-active writes across regions, D1 doesn’t do that. If you don’t, D1’s model might be exactly right and you don’t need Swytch.
| | Cloudflare D1 | Swytch |
|---|---|---|
| Writers | One primary, one region | Every node, any region |
| Replicas | Asynchronous, read-only | Synchronous, read/write |
| Per-database cap | 10 GB (architectural, non-negotiable) | Bounded by cluster memory; disk via Swytch Cloud |
| Concurrency | Single-threaded per database | Multi-node, concurrent per table |
| Consistency (default) | Read committed | Causal |
| Consistency (best) | Sequential, with Sessions API + bookmarks | Serializable transactions |
| Read-your-writes | Requires Sessions API + bookmark plumbing | Guaranteed locally, immediate |
| Horizontal scale pattern | Many small databases (per-tenant, per-user) | One database, many nodes |
| Deploy target | Cloudflare network only | Anywhere — cloud, on-prem, edge devices, offline |
| Operational surface | Zero. It’s serverless. | One binary per node |
| Access | Workers binding, HTTP API | PostgreSQL wire protocol |
| Backup model | Time Travel (30-day point-in-time restore) | Causal history via Swytch Cloud |
| Network partitions | Not applicable (one primary) | First-class, holographic divergence |
| SQL dialect | SQLite | SQLite |
Two databases that both speak SQLite, with almost nothing else in common.
Genuinely — for a lot of applications, D1 is the better choice. Here’s when.
You’re already on Workers. If your compute runs on Cloudflare, your database should probably also run on Cloudflare. The binding is one line of config. Latency between Worker and D1 is on the order of a few milliseconds. The operational integration is as good as it gets. Swytch can’t match that, because Swytch isn’t running inside Cloudflare’s network.
Your write rate is low and mostly comes from one region. D1’s single-primary model is fine if your writes are infrequent and your users are concentrated near the primary. A blog, a portfolio site, a low-traffic SaaS dashboard — all of these work great on D1.
You want database-per-tenant isolation without operational pain. D1 lets you spin up thousands of 10GB databases at no additional cost. For multi-tenant SaaS where tenants should be genuinely isolated — different schemas, different data locality, different blast radius — this pattern is excellent. Swytch can do multi-tenancy, but not in the way D1 makes effortless.
You want zero-ops. D1 has no nodes to provision, no passphrases to manage, no DNS to configure, no updates to apply. You create a database, you use it. When it breaks, Cloudflare fixes it. That’s a real feature. Swytch makes you run a binary.
Your read traffic is the bottleneck, not your write traffic. This is what D1’s read replication is for. Add replicas, reads get faster, the Sessions API keeps things consistent. If your workload is 95% reads, D1 scales it well.
You want point-in-time restore with no setup. Time Travel gives you 30 days of minute-level restore points. It’s a button. Swytch Cloud has a different durability story — full causal history, not snapshots — and it’s more powerful but also more work to think about.
For any of these, the answer is probably D1. Don’t switch to Swytch because someone on HN told you distributed databases are cool.
The places where D1’s model stops being a fit are also the places Swytch is built for.
You need active-active writes across regions. D1 can’t do this. One primary, full stop. Your European users writing to a Virginia-primary database pay the transatlantic round-trip on every write — often 80-100ms, sometimes more, every time. With Swytch, a node in Frankfurt accepts writes for your European users and commits them to whichever other nodes care, at whatever latency those specific nodes impose. No mandatory trip across the ocean.
Your writes come from multiple regions concurrently. A global game backend. A multi-region SaaS. A fleet of edge devices that all write at roughly the same rate. D1’s single-primary funnel becomes the system’s hot spot — everything queues behind it, and a 100ms query blocks the 1ms queries behind it because D1 is single-threaded per database. Swytch writes in parallel, on every node, to the subscribers of each table.
Your data doesn’t fit under 10 GB. D1’s 10 GB per-database cap is architectural. The recommended answer is “run many databases” — which works for multi-tenant isolation but not for a single logical database that grew. If you have a 50 GB dataset that logically wants to be one database, D1 asks you to rearchitect. Swytch doesn’t.
You need read-your-writes without plumbing. D1 gets you read-your-writes through the Sessions API — which works, but requires threading a bookmark through your application: every query takes a bookmark, returns a new bookmark, and you pass it along to the next query to guarantee consistency. Forget a bookmark somewhere, and a read can go backwards in time. Swytch gives you read-your-writes on the local node automatically. Your application doesn’t know there are other nodes.
You need to run outside Cloudflare. On-prem. On ships. On oil platforms. On devices that lose connectivity for hours. In a region Cloudflare doesn’t have a presence. On air-gapped networks. D1 doesn’t go those places. Swytch is a binary; it goes wherever you put it.
You need serializable transactions. D1’s consistency ceiling is sequential consistency, via the Sessions API. That’s strong, and it’s enough for most applications, but it’s not serializable. Swytch serves serializable transactions as the default for SQL mode — the full envelope-commit model, same guarantees you’d expect from Postgres’s SERIALIZABLE isolation level, but distributed.
You need a Postgres-shaped integration surface. D1 is accessed via the Workers binding or an HTTP API. No wire protocol, no psql, no DataGrip, no pgAdmin talking to it over a standard socket. Swytch speaks the PostgreSQL wire protocol, so any Postgres driver connects cleanly — psql, psycopg, pg, whatever your language uses. Tools that depend on the wire protocol alone (connection, prepared statements, auth, basic query execution) work without changes. Tools that depend on the Postgres dialect — pg_dump’s output, Postgres-specific DDL, introspection against pg_catalog, Postgres-flavored ORMs that emit CREATE TYPE or GENERATED ALWAYS AS IDENTITY — will connect and then hit the SQLite dialect underneath. See the SQL Dialect page for what crosses the line and what doesn’t.
If you’re nodding at any of these, keep reading. If you’re not, close this tab and go enjoy D1.
This deserves its own section because D1 users will recognize it, and readers new to D1 should see it before they commit.
D1’s read replicas are asynchronous. That means a replica you read from might be two seconds behind the primary. On its own, that’s fine — lots of systems run asynchronous read replicas. The problem is that Cloudflare’s global routing doesn’t guarantee your successive requests go to the same replica. Your first query might hit replica A (lagging 200ms). Your second query, 500ms later, might hit replica B (lagging 2 seconds). The second query would see older data than the first. Reads can go backwards in time.
D1’s answer is the Sessions API with bookmarks. Every query returns a bookmark — essentially “here’s the point in the log where this query was answered” — and you pass the bookmark to the next query, which guarantees it reads from a replica at least that current. This works. It is, by D1’s own description, the only way to get useful consistency properties out of a multi-replica D1 setup.
The cost is in your application code. Every data-access path has to thread bookmarks through. Async handlers have to propagate bookmarks across boundaries. Cache layers have to know about bookmarks. Third-party libraries have to be wrapped. It’s not catastrophic, but it’s omnipresent — and when you get it wrong, the bugs are the subtle kind that don’t show up until production.
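To make the failure mode concrete, here is a toy simulation of lagging replicas and non-sticky routing. This is not the real D1 Sessions API, just a sketch of the invariant a bookmark buys you:

```python
# Toy model: replicas lag by different amounts, and routing is not sticky.
class Replica:
    def __init__(self, applied_upto):
        self.applied_upto = applied_upto  # highest log position applied here

def read(replicas, route_to, bookmark=None):
    """Route a read; a bookmark means 'serve from a replica at least this current'."""
    replica = replicas[route_to]
    if bookmark is not None and replica.applied_upto < bookmark:
        # a real system would wait for catch-up; the toy model just fast-forwards
        replica.applied_upto = bookmark
    return replica.applied_upto  # the log position this read observes

replicas = [Replica(applied_upto=100), Replica(applied_upto=90)]

# Without bookmarks: first read hits the fresh replica, the next hits the
# stale one, and the application observes time running backwards.
first = read(replicas, route_to=0)    # observes position 100
second = read(replicas, route_to=1)   # observes position 90
assert second < first

# With bookmarks threaded forward, monotonicity is restored.
bookmark = read(replicas, route_to=0)
later = read(replicas, route_to=1, bookmark=bookmark)
assert later >= bookmark
```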
Swytch’s model doesn’t have this problem because Swytch’s replicas aren’t asynchronous. Writes commit synchronously to subscribed nodes. Your application connects to a local Swytch node, reads and writes against it, and read-your-writes is automatic. No bookmarks. No Sessions API. No threading.
Whether that’s worth switching for depends on how much Sessions-API plumbing you’re maintaining, and how much of your bug surface it explains.
Honest section. This is where Swytch loses.
Serverless operation. D1 is truly serverless. You do nothing except call it. Swytch is a binary you run, even if it’s a small, friendly one. You provision nodes, you manage the passphrase, you decide how many regions and where, you keep the processes running. For most ops-literate teams this is a minor cost; for teams that chose Cloudflare specifically to not deal with infrastructure, it’s a dealbreaker.
The free tier. D1 has a generous free tier — millions of rows read per day, tens of thousands of writes, 5 GB of storage, all free. Swytch is open source and free to run yourself, but that means you’re also paying for the machines it runs on. For a hobby project or a prototype, D1 is free in a way Swytch can’t match.
Integration with the rest of the Cloudflare platform. KV, R2, Queues, Durable Objects, Workers AI — if your application stitches these together, D1 is the database that lives in the same network and bills from the same place. Swytch doesn’t. You’d be running Swytch on some other infrastructure and reaching into it from your Workers.
Time Travel. D1’s 30-day point-in-time restore is a single button. Swytch Cloud has a more powerful durability story — full causal history of every row, every transaction, every related write — but it’s a different shape. If what you want is specifically “put the database back the way it was at 3:47pm on Tuesday,” D1 does that more simply than Swytch does.
Ecosystem familiarity on Workers. Every Workers tutorial, every template, every blog post about Cloudflare development assumes D1. Swytch is a new project; the Workers community hasn’t written those tutorials yet.
Multi-database patterns. D1’s “thousands of small databases” model is genuinely great for multi-tenant isolation. Swytch is one cluster serving many tenants, which means more care around tenant separation, access control, and resource isolation. Doable, but not as effortless as create_database(tenant_id).
Honest summary: If D1’s single-writer model fits your application, D1 is probably the better call. The Cloudflare integration, the free tier, the zero-ops story are real advantages that Swytch doesn’t match. Swytch wins when the single-writer model specifically doesn’t fit.
Reads are always local. Because Swytch replicates synchronously to subscribed nodes, a query against a local node is answered from local memory — no network, ever. Your Sydney user’s read hits a Sydney node. Your European user’s read hits a European node. No replica lag, because there are no read replicas: every node is a full participant, not a read-only copy chasing a primary.
Writes are the trickier story. A write commits when it reaches the nodes subscribed to that table. If the table is region-local, the write is region-local. If the table is shared across regions — which, in real applications, most interesting tables are — the write pays the cross-region RTT. Swytch doesn’t cheat that; when you need other regions to see a write, you wait for them. What Swytch doesn’t do is make you wait on regions that don’t need the data. Compare that to D1’s single-primary model, where every write pays the round-trip to Virginia (or wherever your primary lives), regardless of who actually needs to see it.
No 10 GB cap. A Swytch cluster’s capacity is bounded by the memory of the nodes subscribed to each table (or, with Swytch Cloud, by disk). Your database can be 10 MB or 100 GB. Architecturally, nothing cares.
Multi-threaded query execution. Swytch doesn’t have D1’s single-threaded-per-database constraint. A slow query on one table doesn’t block fast queries on other tables. A slow query on one node doesn’t block queries on other nodes.
Serializable transactions. D1 does not offer serializable. Swytch’s default for SQL-mode transactions is true serializable — Jepsen-tested — via the envelope-commit model. You build up operations, commit the envelope atomically, and if the envelope can’t be ordered consistently with the rest of the DAG, the commit fails.
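A rough sketch of the commit-or-fail behavior, using a deliberately simplified optimistic model. This is not Swytch's actual engine, and real serializability checking against a causal DAG is more involved than the per-key version check shown here:

```python
# Toy model of envelope-style optimistic commit: an envelope records what it
# read; if a concurrent commit changed any of it, the envelope can no longer
# be ordered consistently and is rejected.
class Store:
    def __init__(self):
        self.data = {}      # key -> value
        self.version = {}   # key -> version counter

    def begin(self):
        return {"reads": {}, "writes": {}}

    def read(self, env, key):
        env["reads"][key] = self.version.get(key, 0)
        return self.data.get(key)

    def write(self, env, key, value):
        env["writes"][key] = value

    def commit(self, env):
        # reject if anything this envelope read has since changed
        for key, seen in env["reads"].items():
            if self.version.get(key, 0) != seen:
                return False
        for key, value in env["writes"].items():
            self.data[key] = value
            self.version[key] = self.version.get(key, 0) + 1
        return True

store = Store()
a, b = store.begin(), store.begin()
store.read(a, "balance"); store.read(b, "balance")  # both read the same state
store.write(a, "balance", 100)
store.write(b, "balance", 200)
assert store.commit(a) is True   # first envelope commits
assert store.commit(b) is False  # second conflicts and is rejected
```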
No bookmark plumbing. Read-your-writes is automatic on the local node. Application code doesn’t know there are other nodes.
Runs anywhere. Including places Cloudflare doesn’t go and places that go offline sometimes.
Postgres-wire integration. Your Postgres driver works against Swytch — connection, auth, prepared statements, query execution, all of it. That’s a lot of ecosystem for free: every language has a Postgres driver, every CLI (psql, DataGrip, pgAdmin) speaks the protocol. What doesn’t come for free is the Postgres dialect. Queries have to be valid SQLite. If your application is already Postgres-dialect-native, migrating is real work. If your application has been writing portable SQL, the wire-level drop-in is close to painless.
Partition survival, not just replica lag. When D1’s primary becomes unreachable, your writes fail. When a Swytch node is partitioned from the cluster, both sides keep accepting writes, and the causal log handles reconciliation on heal. For some workloads that’s overkill; for others — the ones with nodes at the edge that genuinely do lose uplink — it’s the whole reason you’d pick Swytch.
Let’s do the math on a specific scenario, because abstract comparisons are easy to handwave.
Say you’re building a global SaaS. Users in the US, Europe, Asia. Median write rate is modest — a few hundred writes per second at peak, spread across all regions. You need read-your-writes, and your latency budget matters.
On D1: You pick a primary region, say Virginia. Your US East users are fine for writes — they land in 5-10ms. Your European users pay ~80-100ms to Virginia. Your Asian users pay ~180-200ms. Every write, every user, every time. Reads can go to replicas in other regions and get faster — if you’ve plumbed the Sessions API and bookmarks through your application correctly. Without that plumbing, reads are routed anywhere and can go backwards in time. With it, reads are fast-ish but come with the plumbing cost.
On Swytch: Every region has a local node, so reads are always local, always fast — single-digit milliseconds regardless of region, no Sessions API, no bookmarks, no backwards-in-time surprises. Writes are where the tradeoff actually lives. If a table is truly region-local (a user’s profile data that only their region cares about), writes are region-local and single-digit milliseconds. If a table is globally shared (shared config, a global leaderboard, anything every region reads), writes pay the cross-region RTT. That cost is real — there is no free lunch on globally coordinated writes — but it’s scoped to the tables that actually need global coordination, not applied as a blanket cost on everything.
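The scenario's arithmetic, with all RTT numbers hypothetical, might be sketched like this:

```python
# Illustrative per-region write latencies (all numbers made up for the sketch):
# D1 writes pay the RTT to the single primary; Swytch region-local tables
# commit locally, and globally shared tables pay the worst subscriber RTT.
rtt_to_virginia = {"us_east": 7, "europe": 90, "asia": 190}  # ms
local_commit_ms = 5                                          # ms
global_commit_ms = max(rtt_to_virginia.values())             # everyone subscribes

for region, rtt in rtt_to_virginia.items():
    d1 = rtt                          # every write funnels to the primary
    swytch_local = local_commit_ms    # region-local table
    swytch_global = global_commit_ms  # globally shared table
    print(f"{region}: D1={d1}ms, Swytch local={swytch_local}ms, "
          f"Swytch global={swytch_global}ms")
```

The point the numbers make: on D1 every region pays its primary RTT on every write, while on Swytch only the globally shared tables pay the worst-case number.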
The tradeoff Swytch asks of you: think about which tables are region-local and which are global. D1 doesn’t ask this, because it handles it for you by not offering the option. Whether that data-modeling work pays off depends on how much of your data is actually global versus how much just happens to live in a global primary by default. For most global SaaS applications, the answer is “most tables aren’t actually global” — and that’s where Swytch wins the latency comparison even with the modeling cost factored in.
If you’ve decided to switch, the migration has two parts: the schema and the access pattern.
Schema is the easy part — and the only case in the whole Swytch ecosystem where the dialect question doesn’t come up. D1’s schema is SQLite. Swytch’s schema is SQLite. Your DDL — CREATE TABLE, indexes, views — ports over directly. Export from D1 (wrangler d1 export), import via psql against Swytch. The wire protocol on the Swytch side is Postgres, but the SQL you’re sending is the same dialect D1 was already using, so nothing translates.
Access pattern is the interesting part. Your application is probably written to use the D1 Workers binding — either directly or through a wrapper — which means it’s talking to D1 via a proprietary API, not a wire protocol. Migrating to Swytch means connecting via a Postgres driver. In most languages that’s a one-line change (pg instead of @cloudflare/d1, psycopg instead of whatever), plus rewriting the query-execution calls, which is tedious but mechanical.
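A hedged sketch of what the post-migration data access can look like. Both psycopg and Python's stdlib sqlite3 implement the DB-API, and since the dialect on the Swytch side is SQLite, the same query code can be smoke-tested locally against sqlite3. The connection string and schema here are made up; one real caveat is that the two drivers use different placeholder styles (`?` for sqlite3, `%s` for psycopg), so parameterized queries need that one adjustment.

```python
import sqlite3

def recent_signups(conn):
    # Plain SQLite-dialect SQL; runs unchanged against a Swytch node
    # (via psycopg) or a local sqlite3 connection.
    cur = conn.cursor()
    cur.execute("SELECT email FROM users ORDER BY id DESC LIMIT 2")
    return [row[0] for row in cur.fetchall()]

# In production (assumption: a local Swytch node on the Postgres wire):
#   import psycopg
#   conn = psycopg.connect("host=localhost port=5432 dbname=app")
# For a local smoke test, sqlite3 exercises the same dialect:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [("a@x.com",), ("b@x.com",), ("c@x.com",)])
print(recent_signups(conn))  # → ['c@x.com', 'b@x.com']
```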
The bookmark removal. If your application threads D1 Sessions API bookmarks through every request, you get to rip that out. Read-your-writes on Swytch is automatic on the local node. Deleting bookmark-propagation code is the fun part of migrating.
Multi-database applications. If you’re running many D1 databases (say, per-tenant), migration is a bigger decision: do you consolidate into one Swytch cluster, or keep per-tenant isolation with other mechanisms? There’s no single right answer. The Migration Guide has more detail.
For most applications, a D1 → Swytch migration is an afternoon of work, most of it spent adapting the access layer to the Postgres wire protocol. The bookmark removal is a bonus.
D1 and Swytch both speak SQLite, and that’s about where the similarity ends. D1 is a single-writer serverless database tightly integrated with Cloudflare’s platform, with asynchronous read replicas and a 10 GB per-database cap. It’s excellent for applications where the single-writer model fits and the Cloudflare-only deployment is a feature. Swytch is a distributed database you run yourself, with every node accepting writes, synchronous replication to subscribed nodes, no size cap (and disk durability via Swytch Cloud), and the full Postgres wire protocol. It’s built for active-active multi-region workloads, data that doesn’t fit in one region, and deployment targets Cloudflare doesn’t reach. If you’re happy on D1, stay on D1. If D1’s single-writer model is the reason you’re here reading this, Swytch is the shape you’re looking for.
Pick the shape that matches your system. They’re both good tools for the jobs they’re built for.