It’s well known that many distributed consistency models come from the literature on shared memory multiprocessor systems. Intuitively, both kinds of systems need to present a consistent view of asynchronously replicated data to external clients (in shared memory, the “replicas” are caches), and therefore share the same underpinning theory. However, subtle differences between the two lead them to adopt distinct consistency models. This post is my attempt to explain the popular models for each kind of system, with some observations on where the two diverge.
“General-Purpose” Consistency Models
Informally, a consistency model describes the guarantees a system provides on the order in which writes become visible to reads across replicas. The more uniform and intuitive this ordering is, the stronger the model.
Many consistency models are universally applicable to any replicated system. Sequential consistency (SC), possibly the earliest proposed consistency model, is one such example. Recall its definition:
… the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program [1].
Lamport defined SC as a property of multiprocessor systems, but this definition can be extended to “objects” in a concurrent system, encompassing distributed message-passing systems as well as shared-memory architectures. We can say an object is sequentially consistent if it permits only sequentially consistent histories, where the object can be a single memory location, a class in an OOP language, or a database row, etc.
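To make the definition concrete, below is the classic store-buffering litmus test sketched in C++ (an illustration of mine, not from Lamport’s paper). Under SC, the outcome r1 == 0 && r2 == 0 is forbidden: in any single interleaving that preserves program order, at least one of the loads must follow the other thread’s store.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Store-buffering litmus test. std::atomic operations default to
// memory_order_seq_cst, which gives these accesses SC semantics.
std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

int main() {
    std::thread a([] { x.store(1); r1 = y.load(); });
    std::thread b([] { y.store(1); r2 = x.load(); });
    a.join();
    b.join();
    // Under SC at least one load observes the other thread's store;
    // weaker memory orders would allow both loads to return 0.
    assert(!(r1 == 0 && r2 == 0));
}
```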
The object definition of SC additionally reveals the caveat that SC does not compose - a system consisting of two SC objects is not itself necessarily SC. This property makes SC objects difficult to reason about. A counter-example, based on FIFO queues, is found in The Art of Multiprocessor Programming [2]:
```
A: p.enq(x)            q.enq(x)            y = p.deq()
B:           q.enq(y)            p.enq(y)               x = q.deq()
   ------------------------ real time ------------------------->
```
Two threads, A and B, call enqueue and dequeue methods on two queue objects, p and q. It is not hard to see that p and q are each sequentially consistent, yet the execution as a whole is not sequentially consistent.
Linearizability is a stronger, compositional consistency model. On top of SC’s guarantees, linearizability imposes a “real-time” order - any new operation must observe all operations that completed before it in real time [3]. In other words, every operation must appear to take effect instantaneously at some point between its call and return, dubbed the linearization point. In multi-threaded programming, when we say a class is thread-safe, we usually mean it is linearizable.
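As a minimal sketch of what this means in practice (my example, not from the paper), consider a thread-safe counter in C++: the mutex forces every operation to take effect atomically at a single point between its call and return.

```cpp
#include <mutex>

// A linearizable counter: the critical section makes each operation
// appear to happen instantaneously at its linearization point.
class Counter {
    std::mutex m;
    long value = 0;

public:
    long increment() {
        std::lock_guard<std::mutex> guard(m);
        return ++value;  // linearization point
    }

    long read() {
        std::lock_guard<std::mutex> guard(m);
        return value;  // linearization point
    }
};
```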
On the weaker end of the consistency spectrum, causal consistency is a representative model. It is based on the happens-before relation [4] - the transitive closure of program order and message-passing order (or, equivalently, the order of synchronizing operations in shared memory systems). Unlike the stronger models described above, causal consistency does not guarantee a total order of operations - different threads may observe concurrent writes (i.e. writes unrelated by happens-before) in different orders. Informally, a history is causal if, for each process, there is a serial ordering of the events it observes that respects the happens-before order. An object is causal if it admits only causal histories [5].
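Happens-before is commonly tracked with vector clocks. The sketch below is an implementation device of mine, not part of the causal memory paper: each process keeps one counter per process, and two events are concurrent exactly when neither clock happens-before the other.

```cpp
#include <cstddef>
#include <vector>

// A vector clock holds one logical counter per process (both clocks
// are assumed to have the same length). Clock a happens-before clock b
// iff a <= b component-wise and a != b.
using VClock = std::vector<unsigned>;

bool happens_before(const VClock& a, const VClock& b) {
    bool strictly_less = false;
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] > b[i]) return false;
        if (a[i] < b[i]) strictly_less = true;
    }
    return strictly_less;
}

// Concurrent events are unrelated by happens-before; causal
// consistency allows replicas to apply such writes in any order.
bool concurrent(const VClock& a, const VClock& b) {
    return !happens_before(a, b) && !happens_before(b, a);
}
```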
What’s also interesting about causal consistency is that it was introduced in a paper on shared memory, but, to my knowledge, it never caught on in shared memory systems. In contrast, it is studied and applied extensively in distributed databases, perhaps because it is the strongest achievable consistency model that does not force processes to coordinate, affording the system lower latency and availability under network partitions. Anna [6] is an example of the performance made possible by causal consistency.
Some other examples of general-purpose consistency models, which we will not expand on, include Pipelined Random Access Memory (PRAM) [7] and session guarantees [8]. See the referenced first-hand materials and the Jepsen page for a good introduction.
Specialized Consistency Models
Some consistency models only make sense in the context of particular systems. We go through a few examples below.
Shared Memory
Consistency models in shared memory systems are also known as memory (consistency) models. In an earlier post, we introduced SC-DRF0, which offers a conditional consistency guarantee: it places restrictions on how a program should be written, and guarantees sequential consistency in return. It is defined as follows:
A program obeys the synchronization model Data-Race-Free-0 (DRF0), if and only if
- all synchronization operations are recognizable by the hardware and each accesses exactly one memory location, and
- for any execution …, all conflicting accesses are ordered by the happens-before relation corresponding to the execution [9].
Unlike general-purpose consistency models, SC-DRF0 divides memory operations into synchronization operations and “ordinary” operations, where the former are language-specific constructs like C++ atomics, Java volatile, etc. With this division, SC-DRF0 is able to give additional information to shared-memory hardware, enabling further optimizations, while still offering sequential consistency semantics to programmers. As mentioned in the memory model post, most high-level programming languages today adopt some variation of SC-DRF0.
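As a concrete sketch (the names are mine), the following C++ program is data-race-free in the DRF0 sense: the conflicting accesses to data are ordered by happens-before through the synchronization variable ready, so the model guarantees the assert never fires even though data itself is a plain, “ordinary” variable.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int data = 0;                    // ordinary accesses
std::atomic<bool> ready{false};  // synchronization accesses

int main() {
    std::thread producer([] {
        data = 42;          // ordinary write
        ready.store(true);  // synchronization write, publishes data
    });
    std::thread consumer([] {
        while (!ready.load()) {  // synchronization read
        }
        assert(data == 42);  // ordered after the write via happens-before
    });
    producer.join();
    consumer.join();
}
```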
Release consistency, like SC-DRF0, offers a conditional guarantee of sequential consistency, but its condition is more stringent - it is more difficult to program under this model than under SC-DRF0:
Synchronization accesses can further be partitioned into acquire and release accesses. An acquire synchronization access (e.g., a lock operation or a process spinning for a flag to be set) is performed to gain access to a set of shared locations. A release synchronization access (e.g., an unlock operation or a process setting a flag) grants this permission.
(In release consistency), enough competing accesses (must) be labeled as acquire/release to ensure that the ordinary accesses are indeed non-competing (i.e. the program is data-race-free) [10].
As their names suggest, the acquire/release memory orders in C++ are based on release consistency.
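Continuing the earlier sketch, the same message-passing pattern can label its synchronization accesses explicitly: the store is a release (granting access to payload) and the spinning load is an acquire (gaining it). These orderings are weaker, and often cheaper on hardware, than the seq_cst default, yet the program remains data-race-free.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;
std::atomic<bool> flag{false};

int main() {
    std::thread producer([] {
        payload = 42;                                 // ordinary write
        flag.store(true, std::memory_order_release);  // release access
    });
    std::thread consumer([] {
        while (!flag.load(std::memory_order_acquire)) {  // acquire access
        }
        assert(payload == 42);  // the release/acquire pair orders the write
    });
    producer.join();
    consumer.join();
}
```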
Distributed Databases
In databases, transaction isolation levels are similar to, and frequently confused with, consistency models, though the two concepts are ultimately orthogonal. Isolation is the “I” in ACID, and refers to the guarantees a database provides so that concurrent transactions execute without exposing intermediate states to each other. Consistency, in contrast, is about guarantees on the order of single operations on single objects - there is no intermediate state to observe. In some contexts, “consistency model” is used in a broader sense to describe a mix of consistency and isolation properties of a distributed database system.
The strongest isolation level is serializability, which states:
A serializable execution is defined to be an execution of the operations of concurrently executing SQL-transactions that produces the same effect as some serial execution of those same SQL-transactions. A serial execution is one in which each SQL-transaction executes to completion before the next SQL-transaction begins [11].
Serializability bears a lot of resemblance to sequential consistency. Both require that the execution of operations from multiple processes be equivalent to some serial execution. There are two notable differences:
- Serializability is about multiple operations on multiple objects, whereas SC orders single operations.
- Serializability does not promise anything about the order of the serial execution. Unlike sequential consistency, which at least preserves program order, serializability allows “time-travel” anomalies, where later transactions are ordered before earlier ones in the equivalent serial order, as illustrated below.
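For example (an illustration of mine, not from the standard), suppose transaction T1 writes x and completes before T2 even starts, yet T2 reads the initial value:

```
T1: |-- write(x, 1) --|
T2:                       |-- read(x) = 0 --|
    ------------------ real time ------------------>
```

This history is serializable - the serial order T2, T1 explains it - even though T2 “travels back in time” past a transaction that finished before it began.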
Strict serializability is a representative example of a guarantee that mixes consistency and isolation, addressing the “time-travel” caveat of serializability. Strict serializability is essentially a multi-object version of linearizability:
A (partial) precedence order can be defined on non-overlapping pairs of transactions in the obvious way. A history is strictly serializable if the transactions’ order in the sequential history is compatible with their precedence order.
Linearizability can be viewed as a special case of strict serializability where transactions are restricted to consist of a single operation applied to a single object [3].
Here, “the obvious way” refers to real-time order of transactions, just like in the definition of linearizability. If transaction Y starts after transaction X completes, strict serializability guarantees X must be before Y in the equivalent serial order of transactions. Spanner implements a version of this dubbed external consistency.
Many weaker isolation levels below serializability exist, but they are less interesting from a consistency perspective. See this blog post or “A Critique of ANSI SQL Isolation Levels” [12] for a good introduction. Interestingly, the database literature associates “anomalies” with isolation levels to exemplify which undesirable executions each level (dis)allows - not unlike the litmus tests of the memory model literature.
Comparison
What leads to the adoption of different consistency models for shared memory systems and distributed databases?
Let’s consider the goals of a consistency model within the context of these systems. In an ideal world:
1. A consistency model should make the behaviors of the system easy for programmers to reason about.
2. A consistency model should enable high performance (by limiting coordination among processes).
3. In distributed databases, a consistency model should allow the database to remain available during network partitions.
4. In distributed databases, a consistency model should enable clients to submit arbitrary new transactions during the lifetime of the database instance (cf. a statically compiled program).
(3) and (4) highlight that shared memory places fewer constraints on the consistency model - there are no concerns about network partitions or ever-changing data access patterns.
(1) is also worth expanding on. Multi-object transactions are a powerful high-level API to program against - programmers are relieved of the burden of carefully reasoning about intermediate states during a multi-operation, multi-object transaction, crash recovery, maintaining application-specific invariants on data, etc. Transactions are especially necessary in multi-tenant systems like distributed databases, where the operations on data constantly evolve, making it infeasible to track all conflicting operations and carefully coordinate their accesses. Transactions would also be nice to have in shared memory, but the performance and complexity overheads they carry have kept them an active research area with limited industry adoption [13].
With differing goals, consistency models naturally diverge:
- For distributed databases, the focus is to provide a combination of consistency and multi-object transaction isolation, while allowing for trade-offs between strong consistency and low latency / availability in network partitions. This leads to the popularity of models like strict serializability (for the strongest guarantees) and causal consistency (for reasonable guarantees while avoiding expensive coordination).
- For shared memory systems, there are fewer requirements than for distributed databases, but the performance demands are more stringent. Because programmers have full control over data accesses in statically compiled programs, consistency models distinguish between synchronizing and “ordinary” accesses to relax the ordering constraints on memory operations, achieving better performance. Models like SC-DRF0 and release consistency have no counterparts in distributed databases.
References

1. L. Lamport, “How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs,” Sep. 1979.
2. M. Herlihy and N. Shavit, The Art of Multiprocessor Programming, Revised Reprint, § 3.3.4, 2012.
3. M. P. Herlihy and J. M. Wing, “Linearizability: A Correctness Condition for Concurrent Objects,” Jul. 1990.
4. L. Lamport, “Time, Clocks, and the Ordering of Events in a Distributed System,” 1978.
5. M. Ahamad, G. Neiger, J. E. Burns, P. Kohli, and P. W. Hutto, “Causal Memory: Definitions, Implementation, and Programming,” Mar. 1995.
6. C. Wu, J. M. Faleiro, Y. Lin, and J. M. Hellerstein, “Anna: A KVS For Any Scale,” Oct. 2018.
7. R. Lipton and J. Sandberg, “PRAM: A Scalable Shared Memory.”
8. D. Terry, A. Demers, K. Petersen, M. Spreitzer, M. Theimer, and B. Welch, “Session Guarantees for Weakly Consistent Replicated Data,” Oct. 1994.
9. S. V. Adve and M. D. Hill, “Weak Ordering - A New Definition.”
10. K. Gharachorloo, D. Lenoski, P. Gibbons, A. Gupta, and J. Hennessy, “Memory Consistency and Event Ordering in Scalable Shared-Memory Multiprocessors.”
11. ANSI/ISO/IEC International Standard (IS) Database Language SQL, Part 2: Foundation, Sep. 1999.
12. H. Berenson et al., “A Critique of ANSI SQL Isolation Levels,” Jun. 1995.
13. W. Hasenplaugh, A. Nguyen, and N. Shavit, “Quantifying the Capacity Limitations of Hardware Transactional Memory.”