Last week at POPL, I presented “Freeze After Writing: Quasi-Deterministic Parallel Programming with LVars”. In this paper, my co-authors Aaron Turon, Neel Krishnaswami, Ryan Newton, and I added three new features to the LVars programming model: handlers, quiescence, and freezing. I’ve posted the slides from the talk, and sooner or later there should be a video, too! Meanwhile, here’s some of the same material in written form.
I’m happy to announce a new draft paper, “Joining Forces: Toward a Unified Account of LVars and Convergent Replicated Data Types”, that my advisor and I just submitted to WoDet 2014. This short paper covers our work in progress on bringing together LVars and CRDTs, CvRDTs in particular.
Update (February 19, 2014): Our paper was accepted at WoDet, and the above link now points to the final version.
Today on Twitter, an informal survey is making the rounds on the experiences of software developers who came to the field from “non-traditional” (that is, non-CS) backgrounds. I’m not sure if I should consider myself the target audience for the survey: on one hand, I do have a couple of “traditional” CS degrees by now, and I’m in mainstream CS academia; on the other hand, I didn’t set out to study computer science when I started college (and certainly didn’t consider doing so prior to college, either), and I was a music major for two years before I added a second undergrad major in CS. I went ahead and filled out the survey anyway, and I thought I’d also share my answers here. I’m curious to hear from others whose experiences are like mine.
Suppose we want to write a program in which two threads, say, t1 and t2, each contribute a Boolean value, either True or False, to a shared result. If both threads write True, we want the result to be True; if either writes False (or if both do), then we want the result to be False. That is, we want the shared result to be the outcome of a logical “and” operation.
Let’s further suppose that we want the threads to be able to run in parallel, with their writes arriving in arbitrary order, and we want to be able to guarantee a deterministic result in spite of parallel execution. Sounds like a job for LVars!
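To see why this works out deterministically, here is a toy model of the idea in Python. This is not the real LVish API (LVish is a Haskell library); it just sketches the lattice underlying the parallel-and LVar: the state is a pair of slots, one per thread, writes are combined with a least-upper-bound (“join”) operation, and the result is obtained by a threshold read. The names (`join`, `read_and`, `write_t1`, and so on) are invented for this sketch.

```python
# Toy model of the "parallel and" LVar (not the real LVish API).
# The LVar's state is a pair of slots, one per thread; each slot is
# None (no write yet) or that thread's Boolean contribution.

BOT = (None, None)  # bottom: neither thread has written yet

def join_slot(x, y):
    """Least upper bound of one slot. Writes are idempotent;
    conflicting writes to the same slot would send the state to an
    error state ("top"), modeled here by raising an exception."""
    if x is None:
        return y
    if y is None or x == y:
        return x
    raise ValueError("conflicting writes: state went to top")

def join(s1, s2):
    """Pointwise lub of two states. Join is commutative and
    associative, so the final state is the same no matter what
    order the threads' writes arrive in."""
    return (join_slot(s1[0], s2[0]), join_slot(s1[1], s2[1]))

def read_and(state):
    """A threshold read of the logical-and result: unblocks with
    False once either slot is False, with True once both slots are
    True, and otherwise stays blocked (returns None here)."""
    a, b = state
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None  # still blocked

# Thread t1 writes into the first slot, t2 into the second.
def write_t1(b):
    return (b, None)

def write_t2(b):
    return (None, b)
```

Because `join` is commutative, `read_and(join(write_t1(True), write_t2(False)))` and `read_and(join(write_t2(False), write_t1(True)))` both come out False: the writes can land in either order without changing the answer the threshold read eventually returns.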
Update (February 19, 2014): I’ve updated the links in this post to point to code that runs against the 1.1.2 release of LVish.
I’ve just posted a draft of my thesis proposal, which, barring disaster, I’ll be presenting to my research committee next Friday. To spill the beans, here’s my thesis statement:
Lattice-based data structures are a general and practical foundation for deterministic and quasi-deterministic parallel and distributed programming.
Update (November 30, 2013): I updated this post to reflect some of my current thinking on threshold reads for CRDTs and why I think high-water-mark reads aren’t necessarily the way to go.
One way in which CRDTs differ from LVars is that some CRDTs support non-monotonic operations. For instance, OR-Sets and 2P-Sets support removal of elements, while PN-Counters can count down as well as up. Combining threshold reads with non-monotonic operations like removal of elements from a set would seem to pose a problem for determinism. If only monotonic operations are allowed, as is the case for LVars, then the answer to the question of whether or not a given threshold read will unblock is the same from run to run (although it may unblock at different times, depending on scheduling). But if the state of the data structure being queried were allowed to move both up and down in a lattice, then that would no longer be the case. For instance, if a read were thresholded on an element that was added to and later removed from a set, then it might block forever, or not, depending on whether the read ran after or before that element’s removal. Hence some schedule nondeterminism would be exposed.
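The schedule dependence described above can be made concrete with a tiny Python sketch (again a toy model, not a real CRDT implementation; the names here are invented for illustration). A threshold read on set membership is deterministic over a grow-only history, but not over a history that removes elements:

```python
# Toy illustration of why a threshold read on a non-monotonic set
# is schedule-dependent. The "threshold read" unblocks once a chosen
# element is present in the set.

def threshold_read(s, elem):
    """Returns True (unblocked) once elem is in the set, else None
    (still blocked). Deterministic only if membership, once gained,
    can never be revoked."""
    return True if elem in s else None

# A monotonic (grow-only) history of states: once the read would
# unblock, it stays unblocked in every later state, so every
# schedule observes the same eventual answer.
grow_only = [set(), {"a"}, {"a", "b"}]

# A non-monotonic history: "a" is added and later removed, so
# whether the read unblocks depends on when it runs relative to
# the removal.
with_removal = [set(), {"a"}, set()]
```

Probing each state of `with_removal` gives blocked, unblocked, blocked again: a read scheduled between the add and the remove returns, while one scheduled after the remove blocks forever, which is exactly the schedule nondeterminism the paragraph above describes.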