In 2014, almost ten years ago, I wrote that SQL would:
Exist in 5 years,
Exist as legacy in 10 years, and
Become extinct in 15 years.
Arguably, the world is a bit too slow at making things happen. Time for me to step in personally and get the job done.
My focus is specifically on OLTP, online transaction processing: the real-time transactions where latency is critical, high throughput is required, and batching is undesired.
Online analytical processing (OLAP), the set of tasks that have to do with crunching the data in data warehouses, is out of the scope of this post.
Besides, I design data systems so that subscribing to updates and building materialized views downstream is a first-class-citizen task. Analytics, thus, is automagically solved.
Vision
The goal of this post is to outline the guiding principles for the language that can make SQL extinct for OLTP use cases. The language that will obsolete SQL and become its successor should be:
Simple: The code should be easy to build and maintain, including by large teams.
Safe: No surprises when coding and debugging. Also, designed for testability.
Forward-Looking: Schema evolution should be built into the language.
Events-First: Materialized views are first-class citizens.
Performant: Support millions of transactions per second.
Let me cover these five pillars one by one.
Simple
The code should be easy to build and maintain.
This includes the code written and maintained by large teams, where many, if not most, people are below the senior level. In fact, my approach is to ship solutions that both appear trivial and do not surprise users down the road.
What’s not simple in SQL, one may ask?
Well, for individual SELECT-s and UPDATE-s, SQL is most definitely simple enough.
End-to-end business logic rarely boils down to a single SQL statement though. And then we run into all sorts of problems:
Consistency. In the vast majority of business logic, an UPDATE to the database is preceded by a SELECT from it. And if they are two separate DB transactions, the data may have changed in between. To keep them in one transaction, one has to constrain themselves to writing SQL only. Alas, there are no for-loops in SQL to begin with, which often results in developers breaking their SQL statements down into multiple parts.
Strong schema. Ideally, if the schema of the stored data changes, the broader project should fail to build, at the stage where static build-time invariants are checked. In reality, the schema of the DB rarely makes it into the schema of the business logic that uses it. If one is lucky and has tests that involve a snapshot of a production database, possible problems can be caught early on. If one is less lucky, production issues are almost guaranteed.
Lack of best engineering practices when it comes to SQL. Simply put, code reviewers tend to focus too much on the “program code”, such as “Java code”, and too little on the “query code”, which is the “SQL code”. A dedicated reviewer for SQL code is a plus, but then this person is unlikely to be familiar with the business logic (“Java”) code surrounding it, which again is a potential source of error.
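The consistency bullet above can be made concrete with a minimal sketch. The "database" here is just a dict, and the field names are invented for illustration; the point is that a SELECT followed by a separate UPDATE loses writes under interleaving:

```python
# A minimal sketch of the read-modify-write hazard: two clients each
# "SELECT" a balance, compute, then "UPDATE" — and one update is lost,
# because both reads happened before either write.

db = {"balance": 100}

def read_balance():          # stands in for: SELECT balance FROM accounts ...
    return db["balance"]

def write_balance(value):    # stands in for: UPDATE accounts SET balance = ...
    db["balance"] = value

# Two clients interleave: both read the same snapshot, then both write.
a = read_balance()           # client A reads 100
b = read_balance()           # client B reads 100
write_balance(a + 10)        # client A writes 110
write_balance(b + 25)        # client B writes 125 — A's +10 is lost

assert db["balance"] == 125  # would be 135 if the two updates were atomic

# The fix is to make the read-modify-write a single atomic unit:
def atomic_add(delta):       # stands in for: UPDATE ... SET balance = balance + ?
    db["balance"] += delta
```

The atomic variant is what a single SQL `UPDATE accounts SET balance = balance + ?` gives you for free; the trouble starts once the logic between the read and the write no longer fits into one statement.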
Some of the problems above can be resolved by confining the logic to SQL stored procedures, which are designed to be code-reviewed separately, by the people who deliberately specialize in reviewing DB-level statements.
In my experience though, this is rare. Not only because most companies prefer to move fast and break things as needed. But also because features can indeed be shipped faster without the DB-level “gatekeeper” in the loop. So, there’s no malice; it’s just too unnatural for most developers to assume that data-related operations should involve a separate reviewer.
Besides, most of the time, your average developers would not talk to an RDBMS in a way that could mess up production. The trick is in knowing which of those seemingly benign SQL code changes (or benign changes to the code that is sandwiched between several SQL snippets) are actually dangerous.
Safe
In late 2023, I am comfortable betting that an essential trait of a programming language is that it is forgiving.
It is normal that most people who are reading and reviewing the code are not experts in the language in which this code is written. Therefore, if some non-trivial language construct can backfire, it will backfire.
This is not even a gamble; it’s an outright guarantee. Tools and instruments will be used and abused. And if instrument abuse is not obvious to the reader of the code, this abuse will eventually make it to production. And it will result in paging alerts, lost data, and other bad things down the road.
A safe programming language is a language designed specifically to limit the room for such abuse and misuse.
SQL itself, I have to admit, is not bad at all! In fact, if there were a clever way to merge SQL with for-loops, I personally would vouch for such a solution.
In a way, the language I am proposing is effectively an attempt to solve this very problem:
Make it easy to use SELECT-s as SELECT-s and UPDATE-s as UPDATE-s,
Make it easy to write IF-s and FOR-s,
And have the above two play well together.
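Here is a rough sketch, in Python with sqlite3 (not the proposed language itself), of what those three points could feel like in practice: the SELECT, the for-loop with an if, and the UPDATE-s all run inside one transaction, so the data cannot change mid-logic. The schema is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manual transaction control
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, 50), (2, 200), (3, 5)])

# One transaction wraps the SELECT, the for-loop, the if, and the UPDATE-s,
# so nothing can change between the read and the writes.
conn.execute("BEGIN")
rows = conn.execute("SELECT id, balance FROM accounts").fetchall()
for row_id, balance in rows:
    if balance < 100:  # a plain imperative IF, no SQL CASE gymnastics
        conn.execute("UPDATE accounts SET balance = balance + 10 WHERE id = ?",
                     (row_id,))
conn.execute("COMMIT")
```

In the envisioned language, the transaction boundary would be implicit and enforced, rather than something the developer has to remember to BEGIN and COMMIT.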
Forward-Looking
Schema evolution should be built into the language.
The sentence above pretty much explains it.
Consider a canonical example of splitting the “name” column in the database into “first_name” and “last_name”. One caveat: this change should happen with no downtime to the service.
How do you accomplish this?
Clearly, some backwards- and forward-compatible API contracts will have to co-exist for a non-trivial amount of time. Clearly, someone would have to implement and test these contracts. Clearly, eventually, the now-obsolete code would need to be removed.
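For illustration, here is roughly what such a backwards- and forward-compatible shim looks like when written by hand; the record shapes and helper names are assumptions, not a prescription:

```python
# A hand-written compatibility shim for the "name" -> "first_name"/"last_name"
# split, needed while old and new readers and writers co-exist.

def read_full_name(record):
    """Read path that tolerates both the old and the new schema."""
    if "first_name" in record and "last_name" in record:   # new schema
        return f"{record['first_name']} {record['last_name']}"
    if "name" in record:                                   # old schema
        return record["name"]
    raise ValueError("record matches neither schema version")

def write_name(record, full_name):
    """Write path that keeps both schemas populated during the migration."""
    first, _, last = full_name.partition(" ")
    record["name"] = full_name            # old readers keep working
    record["first_name"] = first          # new readers get the split fields
    record["last_name"] = last
    return record
```

The point of the Forward-Looking pillar is that this shim, its tests, and its eventual removal should be derived and enforced by the language, not hand-written.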
Also, do not forget about the data feeds consumed by the data warehouse, by your friendly machine learning models, by regulatory reporting, by audit logs, and so on and so forth.
Realistically, what would have been an hour of work in a startup can easily become a quarter of work for several seasoned engineers once the company and its data have grown into a large mess.
Schema evolution should be built into the language. This is what I keep coming back to. The data access language, especially on the OLTP side, should be designed such that when the schema of the data is changing, static type analysis kicks in right away.
The code that touches the fields that are changing should be highlighted,
Which conditions should be covered by the tests should be statically derivable from the schema of the data and the code of the OLTP transactions, and
The tests should cover 100% of the cases when the data in the new format is expected while only the data in the old format is available, or vice versa.
Think of this as a `match` construct in functional programming. If the list of considered cases is not exhaustive, it’s a compile-time error. There’s a dangling case, after all, which must be considered by the user, but was not!
At the very least, the OLTP transactions language should provide static invariants checking to this degree.
Further down the road, data evolution itself can happen in various ways. It can be happening in the background, while the service is up and running. It can happen on the go, in the copy-on-write fashion, such that the records that are accessed more often get migrated sooner, and then a batch job is run to migrate the rest.
Ideally, the language, in which the logic of data migration is implemented, should be agnostic as to when and how the migration takes place.
During regular operation, there should be the means to track the progress of this migration. There should be the means to stop and revert it if necessary. And there should be guardrails so that it is impossible to declare the migration done (and impossible to remove the “old” fields) if the migration is, in fact, not done.
But the developer who is writing the code that runs the business logic transactions should not have to worry about data migrations. Schema evolution should be built into the language, to the degree that the checks & balances of schema migration follow directly from the type system of the language.
If the user has to worry about additional conditional statements within the OLTP transactions code to assist with data migration, it can only mean one thing in my book: the language could have been better!
Events-First
Materialized views are first-class citizens.
I talk about this a lot lately. To reiterate on the most important bullet points:
The domain that owns the data should also own the contract that ensures the feed of updates to this data is streamed to the outer world.
I refer to the “minimum viable atomic contract” for data access as “get-and-track”.
The persistent [gRPC] bidirectional connection is established.
The request is “give me the contents for this key, and stream the updates to it”.
The response, which arrives asynchronously, contains the requested content.
This response contains the version of the content sent back.
If and when the data for this key changes, the new content is published to this [gRPC] stream.
The client can send the “untrack” command, to stop tracking a particular key.
The above “handshake protocol” guarantees no concurrency issues.
If the data was changed right as it was requested, …
… then either the client receives the latest version right away, …
… or the update with the latest version follows shortly after.
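Stripped of gRPC, the network, and asynchrony, the handshake above can be sketched in-process; all names here are illustrative. What survives the simplification is the invariant: after a track, the client sees the current version immediately and every later version, in order:

```python
# An in-process sketch of the "get-and-track" contract. A real implementation
# would be asynchronous and networked; this one keeps only the versioning and
# subscription logic.

class GetAndTrackStore:
    def __init__(self):
        self._data = {}        # key -> (version, value)
        self._trackers = {}    # key -> list of callbacks

    def put(self, key, value):
        version = self._data.get(key, (0, None))[0] + 1
        self._data[key] = (version, value)
        for callback in self._trackers.get(key, []):
            callback(key, version, value)       # publish the update

    def track(self, key, callback):
        # First response: the current content, with its version ...
        version, value = self._data.get(key, (0, None))
        callback(key, version, value)
        # ... then every subsequent update goes to the same callback.
        self._trackers.setdefault(key, []).append(callback)

    def untrack(self, key, callback):
        self._trackers.get(key, []).remove(callback)
```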
Now, the above contract is, in fact, quite tricky to implement. It should not be taken as an RFC, but rather as the North Star of what the protocol should support in order to be “100% there”. Realistically, some parts of the above get implemented, and the rest are dropped, since no high-priority clients need those parts, and the team is busy working on other things.
In practice, data update events are often sent via the PubSub bus, such as Kafka.
Without touching on the details of schema evolution, the very minimal contract to the events published to the PubSub bus is that:
(1) No data create/update/delete events are lost.
(2) Either versions of the data elements are available, or the order of updates for a given key reflects the lifecycle of these events.
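With those two guarantees in place, a consumer can maintain a local materialized view with a few lines of bookkeeping. A sketch, with an assumed event shape (a dict with "key", "version", and either "value" or "deleted"):

```python
def apply_event(view, event):
    """Apply one create/update/delete event to a local materialized view.

    `view` maps key -> (version, value). Per-key versions make application
    idempotent: duplicate or stale deliveries are simply dropped.
    """
    key, version = event["key"], event["version"]
    if version <= view.get(key, (0, None))[0]:
        return False                       # duplicate or stale: drop it
    if event.get("deleted"):
        view[key] = (version, None)        # tombstone keeps the version
    else:
        view[key] = (version, event["value"])
    return True
```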
Realistically, with the READ API on the domain owner side implemented, and with (1) and (2) established with respect to streaming events, a service that converts this READ API + (1) + (2) into the “minimum viable atomic contract” above is somewhat straightforward to build and ship.
This margin is too narrow to contain all the details of what is worth sharing, including snapshotting, transactionality, schema evolution, etc.
To add just one more thing: If you are finding yourself in a mature team that understands and appreciates the need for the above, consider taking it one more step further and pushing for the idea of materialized view template projects.
This idea takes it further, all the way to suggesting that if a team owns a certain domain of data (with all its OLTP mutations), this team should also be responsible for offering a one-click scaffolded project that would build and maintain the materialized view of this very data.
This scaffolded project may not have to reconstruct the original state of the data (i.e. it may not have to be able to restore from a snapshot out-of-the-box), but it should offer an effective way to implement a read-through-cache for the users outside this team’s domain to be able to reduce the “read latency” for hot (“tracked”) keys to effectively zero.
The scaffolded project should be agnostic with respect to what storage layer, if any, it should use. It can be PostgreSQL, or SQLite, or Redis, or just a collection of manually updated hash maps.
As a litmus test, consider this scaffolded materialized view built on top of Redis. Any non-trivial data access pattern can then be implemented in Lua, executed inside this Redis instance directly, offering a low-overhead way to perform various graph traversals, 100% in memory.
To properly end this section, let me return to the OLTP transactions language. The above section was deep, but, realistically, what it translates to is rather straightforward. The OLTP language should be designed so that all mutations that originate from a single transaction are easy to capture, in a linearizable way, so that they can be converted one-to-one into the stream of data mutation events, based on which the materialized views can be built.
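One way to picture this (the API is invented for illustration): run the business logic against a thin wrapper that records every mutation, and the ordered event stream for downstream materialized views falls out for free:

```python
# A sketch of capturing all mutations of one transaction as an ordered
# list of events, without the business logic having to know about it.

class RecordingTransaction:
    def __init__(self, store):
        self._store = store
        self.events = []           # ordered mutations from this transaction

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value
        self.events.append(("set", key, value))

    def delete(self, key):
        self._store.pop(key, None)
        self.events.append(("delete", key, None))

# The business logic stays plain imperative code:
store = {"stock": 3}
txn = RecordingTransaction(store)
if txn.get("stock") > 0:
    txn.set("stock", txn.get("stock") - 1)
    txn.set("last_order", "order-42")
# txn.events is now the linearized mutation stream for this transaction.
```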
Performant
Support millions of transactions per second.
There are at least two major ways to approach this number, and let me talk about both. One: optimizing the critical path. Two: sharding.
Critical Path
Redis and V8/node.js have taught us that a lot can be done by a single thread.
Surely, being single-threaded by design is, in many ways, a self-imposed handicap. However, the enormous value that single-threaded processing brings is guaranteed linearizability.
Simply put, when all mutations are applied by a single thread, the mutations form a totally ordered sequence: for each pair of mutations one can tell exactly which one came before and which one came after.
To make a stronger statement: if the data of a certain domain is updated in a linearized fashion, each data mutation can be assigned a strictly increasing epoch index, and it is perfectly reasonable to refer to the data, or to a slice of the data, as “the data seen as of epoch N”.
Contrast this with the multi-threaded, or multi-node / multi-sharded world, where instead of using just one number, the epoch index, to refer to the state of the data, we need to utilize vector clocks, where each logically independent component can potentially be looking at the data of different versions.
For sure, the RDBMS industry has solved all (or most of) the problems this introduces in practice. Still, the developer has to be aware of rather sophisticated “failure modes” such as phantom reads, and of rather sophisticated mitigation techniques such as SELECT FOR UPDATE.
My goal here is not to advocate for single-threaded-ness, or for linearizability from day one. But it always is the first question I ask: what number of transactions per second should the system handle, and how sophisticated are the transactions?
Because when it’s about two- or three-digit TPS, when each transaction only requires a dozen or so if-statements of logic, and when on-write fanout factors are low, designing the data model of this domain to be linearizable is often the best solution in the long run.
Redis, after all, can handle nearly a million trivial operations per second, all on a single thread. And even with nontrivial writes, when implemented in Lua, the number of updates per second is north of ten thousand.
Besides, the “tricks” to make a single-threaded linearizable solution work in production are all well understood these days. Effectively, they boil down to taking work off the critical path:
Don’t parse input request bodies in the “critical path” thread.
Don’t construct response bodies in the “critical path” thread.
Don’t perform string manipulations (split or join) in the “critical path” thread.
Just use a lock-free queue and a thread pool for these operations. (Three thread pools, one per task, would do too.) In fact, even the order in which the transactions are executed can be preserved.
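A runnable sketch of this division of labor, with queue shapes and request formats assumed: a small pool parses request bodies, while a single dedicated thread applies all mutations, which yields a totally ordered mutation log by construction:

```python
import json
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

mutations = queue.Queue()     # parsed mutations, consumed by one thread
data = {}
applied = []                  # the totally ordered mutation log

def parse(raw):               # off the critical path: string/JSON work
    request = json.loads(raw)
    mutations.put((request["key"], request["value"]))

def apply_loop():             # THE critical path: the single mutator thread
    while True:
        item = mutations.get()
        if item is None:
            break             # sentinel: shut down
        key, value = item
        data[key] = value
        applied.append((key, value))

applier = threading.Thread(target=apply_loop)
applier.start()
with ThreadPoolExecutor(max_workers=3) as pool:   # the parsing pool
    for raw in ['{"key": "a", "value": 1}', '{"key": "b", "value": 2}']:
        pool.submit(parse, raw)
mutations.put(None)           # the pool has drained; stop the applier
applier.join()
```

Preserving the original request order, as mentioned above, would additionally require stamping requests with a sequence number at intake; the sketch omits that.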
What the above translates to in practice, with respect to the OLTP transactions language, is that it should be safe to implement each transaction as if it has exclusive access to the data.
Of course, this is indeed what SQL (mostly) offers. But this guarantee no longer holds if the business logic is implemented using if-statements and for-loops between SELECT-s and UPDATE-s. And don’t get me started on keeping RDBMS transactions open for an extended period of time.
Sharding
Of course, not every use case can be covered by the single-threaded approach.
We then enter the dreaded world of sharding: when different parts of data live in different places, and synchronizing them becomes a pain.
Again, without going into too much detail, a good OLTP language should take care of the most common problems that originate from sharding. Specifically, the language should be designed such that the end user is oblivious to:
Two-phase commits,
Distributed locking,
Leader elections / distributed consensus,
Rebalancing / resharding.
Personally, I am convinced that it is possible to have the OLTP transactions language that abstracts the above away from the developer.
In fact, I would go further and say that it can be done in a clever way, similar to how Linux turned slow kernel-level mutexes into fast user-space futexes:
If a transaction requires accessing multiple shards …
… it first attempts to execute itself a few times with optimistic locking …
(i.e. it attempts the transaction under the assumption that there will be no conflicts)
… and only if that fails does it actually acquire the lock and execute itself in exclusive mode.
(The lock, of course, is managed by the framework automatically, not manually by the user.)
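A sketch of this escalation for a single versioned shard; in a real engine the validate-and-commit step would itself be atomic, which this toy single-threaded version glosses over. All structures are invented for illustration:

```python
import threading

class VersionedShard:
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

def run_transaction(shard, logic, max_optimistic_attempts=3):
    # Optimistic attempts: read a snapshot, compute, and commit only if
    # the version is unchanged (i.e. assume there were no conflicts).
    for _ in range(max_optimistic_attempts):
        seen_version, snapshot = shard.version, shard.value
        new_value = logic(snapshot)
        if shard.version == seen_version:     # no conflicting write observed
            shard.value, shard.version = new_value, shard.version + 1
            return new_value
    # Escalation: acquire the lock and run in exclusive mode.
    with shard.lock:
        shard.value = logic(shard.value)
        shard.version += 1
        return shard.value
```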
Recap
A good OLTP language:
Abstracts away sharding.
Including re-sharding, DB scaling, migrations, etc.
Makes it natural to avoid distributed locking.
And manages it organically, under the hood, if and when needed.
And focuses on optimizing the performance on the critical path.
So that whatever logic fits a single thread is encouraged to be run in a single thread.
The business logic itself is already hard. Developers are busy enough thinking of how to ensure the business invariants are held.
It is bad enough if they have to concern themselves with which shards of which databases can be affected by their transaction. It is horrible if possible data inconsistencies, stemming from various sharding schemes, are also something they should be responsible for.
This is how write-only code emerges. This is what takes years to refactor. And this is what the successor to SQL, when it comes to OLTP transactions, should strive to resolve once and for all.
Let me know what you think in the comments. If there is enough interest, I would love to continue researching this domain semi-publicly.
By the way, here's an early peek at what my friends and I are thinking about: https://github.com/SysDesignMeetup/oltplang