composition.al

An example run of the Chandy-Lamport snapshot algorithm

In the undergrad distributed systems course I’m teaching this spring, I decided I wanted to discuss the Chandy-Lamport algorithm for snapshotting the global state of a distributed system in some detail. When I looked around to see how other people were teaching this algorithm, I kept noticing the exact same example being used in multiple people’s lecture notes, for different courses taught at different institutions.

From what I can tell, the original source of this example is Indranil Gupta’s lecture notes for his CS425 distributed systems course at Illinois. (If I’m wrong about the source, someone please let me know.) Since then, it seems like other people have been borrowing it (with attribution), and for good reason — it’s a nice example! But I found it to be a bit hard to follow, even when I watched Gupta’s own video lecture from his Coursera “Cloud Computing” course that shows him walking through the example.

Also, there’s a typo in the original example that seems to have propagated to all the other copies — they all have two events labeled “E”! So, this post is my own take on this ubiquitous example, cleaned up a bit and explained in detail. Credit goes to Indy Gupta for coming up with the example in the first place.
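
Since the example itself lives in the full post, here is a minimal, hedged sketch of the marker-passing rules that the example exercises, written in Python. The Process class, its fields, and the send callback are illustrative names of my own, not anything from Gupta's notes; the sketch assumes reliable FIFO channels, as the algorithm requires.

```python
# Minimal sketch of the Chandy-Lamport marker rules (illustrative only).
# Assumes reliable, FIFO, unidirectional channels; `send` is a stand-in
# for whatever messaging layer the system actually uses.

MARKER = object()  # sentinel distinguishing markers from application messages

class Process:
    def __init__(self, pid, incoming, outgoing, state, send):
        self.pid = pid
        self.incoming = incoming      # names of incoming channels
        self.outgoing = outgoing      # names of outgoing channels
        self.state = state            # current local state
        self.send = send              # send(channel, msg) callback
        self.recorded_state = None    # local state captured at snapshot time
        self.channel_logs = {}        # channel -> in-transit messages recorded
        self.recording = set()        # incoming channels still being recorded

    def start_snapshot(self):
        """Initiator: record own state, then send markers on every outgoing channel."""
        self.recorded_state = self.state
        self.recording = set(self.incoming)
        self.channel_logs = {c: [] for c in self.incoming}
        for c in self.outgoing:
            self.send(c, MARKER)

    def on_message(self, channel, msg):
        if msg is MARKER:
            if self.recorded_state is None:
                # First marker seen: record own state, record this channel as
                # empty, start recording the other incoming channels, and
                # propagate markers on every outgoing channel.
                self.recorded_state = self.state
                self.recording = set(self.incoming) - {channel}
                self.channel_logs = {c: [] for c in self.incoming}
                for c in self.outgoing:
                    self.send(c, MARKER)
            else:
                # Later marker: stop recording that channel.
                self.recording.discard(channel)
        else:
            # Application message: log it if it arrived on a channel
            # we are still recording, then handle it as usual.
            if channel in self.recording:
                self.channel_logs[channel].append(msg)
            # ... normal application handling of msg goes here ...
```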

“Toward Domain-Specific Solvers for Distributed Consistency” will appear at SNAPL 2019

This year at SNAPL, I’ll be presenting a new paper, co-authored with my colleague Peter Alvaro, that collects our recent thinking on consistency-aware solvers. Here’s the abstract:

To guard against machine failures, modern internet services store multiple replicas of the same application data within and across data centers, which introduces the problem of keeping geo-distributed replicas consistent with one another in the face of network partitions and unpredictable message latency. To avoid costly and conservative synchronization protocols, many real-world systems provide only weak consistency guarantees (e.g., eventual, causal, or PRAM consistency), which permit certain kinds of disagreement among replicas.

There has been much recent interest in language support for specifying and verifying such consistency properties. Although these properties are usually beyond the scope of what traditional type checkers or compiler analyses can guarantee, solver-aided languages are up to the task. Inspired by systems like Liquid Haskell and Rosette, we believe that close integration between a language and a solver is the right path to consistent-by-construction distributed applications. Unfortunately, verifying distributed consistency properties requires reasoning about transitive relations (e.g., causality or happens-before), partial orders (e.g., the lattice of replica states under a convergent merge operation), and properties relevant to message processing or API invocation (e.g., commutativity and idempotence) that cannot be easily or efficiently carried out by general-purpose SMT solvers that lack native support for this kind of reasoning.

We argue that domain-specific SMT-based tools that exploit the mathematical foundations of distributed consistency would enable both more efficient verification and improved ease of use for domain experts. The principle of exploiting domain knowledge for efficiency and expressivity that has borne fruit elsewhere — such as in the development of high-performance domain-specific languages that trade off generality to gain both performance and productivity — also applies here. Languages augmented with domain-specific, consistency-aware solvers would support the rapid implementation of formally verified programming abstractions that guarantee distributed consistency. In the long run, we aim to democratize the development of such domain-specific solvers by creating a framework for domain-specific solver development that brings new theory solver implementation within the reach of programmers who are not necessarily SMT solver internals experts.
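
To make the kind of property the abstract mentions concrete, here is a small, hedged sketch using off-the-shelf Z3 through its Python bindings (deliberately not the domain-specific solvers the paper argues for): it checks that a max-based merge of integer replica states is commutative, associative, and idempotent, i.e., that it gives replicas a join-semilattice structure and so lets them converge regardless of delivery order. The encoding is mine, for illustration only.

```python
# Sketch: checking lattice-merge properties of a max-based replica merge
# with a general-purpose SMT solver (Z3's Python bindings).
from z3 import Ints, If, prove

x, y, z = Ints('x y z')

def merge(a, b):
    # Merge two replica states by taking the maximum.
    return If(a >= b, a, b)

prove(merge(x, y) == merge(y, x))                      # commutativity
prove(merge(merge(x, y), z) == merge(x, merge(y, z)))  # associativity
prove(merge(x, x) == x)                                # idempotence
```

Properties like these are easy for Z3 on toy integer states; the paper's argument is that realistic replica state types, ordering relations, and message-processing properties quickly outgrow what a general-purpose solver can handle efficiently or conveniently.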

Way number eight of looking at the correlation coefficient

This post has an accompanying Jupyter Notebook!

Back in August, I wrote about how, while taking the Data 8X series of online courses, I had learned about standard units and about how the correlation coefficient of two (one-dimensional) data sets can be thought of as either

  • the slope of the linear regression line through a two-dimensional scatter plot of the two data sets when in standard units, or
  • the average cross product of the two data sets when in standard units.
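
As a quick numerical check of those two interpretations, here is a hedged NumPy sketch; the toy arrays are made up purely for illustration.

```python
import numpy as np

def standard_units(v):
    """Convert a data set to standard units (mean 0, SD 1, population SD)."""
    return (v - v.mean()) / v.std()

# Toy data, purely for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

xs, ys = standard_units(x), standard_units(y)

r_cross_product = np.mean(xs * ys)        # average cross product in standard units
r_slope = np.polyfit(xs, ys, 1)[0]        # slope of the regression line in standard units
r_numpy = np.corrcoef(x, y)[0, 1]         # NumPy's own correlation coefficient

print(r_cross_product, r_slope, r_numpy)  # all three agree (up to floating-point error)
```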

In fact, there are lots more ways to interpret the correlation coefficient, as Rodgers and Nicewander observed in their 1988 paper “Thirteen Ways to Look at the Correlation Coefficient”. The above two ways of interpreting it are number three (“Correlation as Standardized Slope of the Regression Line”) and number six (“Correlation as the Mean Cross-Product of Standardized Variables”) on Rodgers and Nicewander’s list, respectively.

But that still leaves eleven whole other ways of looking at the correlation coefficient! What about them?

Call for talk proposals: !!Con West 2019!

!!Con (pronounced “bang bang con”) West is a two-day conference of ten-minute talks about the joy, excitement, and surprise of computing, and the west-coast offshoot of !!Con, the annual independent and volunteer-run conference that I co-founded in New York in 2014. !!Con is a radically eclectic computing conference, and we’re excited to be bringing a version of it to the west coast for the first time! The conference will be held at UC Santa Cruz on February 23 and 24, 2019, and our call for talk proposals is open until tomorrow. We’ve already gotten nearly a hundred talk proposals, but we want more! We want yours!