Dissertation draft readers wanted!

Inspired by Brent Yorgey, I’m finally going public with a draft of my dissertation!

My thesis is that a certain class of data structures, which I call “lattice-based data structures” or “LVars” for short, lends itself well to guaranteed-deterministic parallel programming. My dissertation combines material from various previously published papers, making it a three-papers-stapled-together dissertation in some sense, but I’m also retconning a lot of my work to make it tell the story I want to tell now.

When people ask what the best introduction to LVars is, I have trouble recommending the first LVars paper; even though it was only published a year ago, my thinking has changed quite a lot as my collaborators and I have figured things out since then, and the paper doesn’t match the way I like to present things now. So I’m hoping that my dissertation will be something I can point to as the definitive introduction to LVars. I’m also generalizing some of our previously published results, now that we know it’s possible.
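To give a flavor of the idea: an LVar’s states form a lattice, writes take the least upper bound of the old and new states (so they commute), and “threshold” reads block until the state passes a given point in the lattice and then return the threshold itself rather than the exact state. Here’s a minimal sketch in Python, using a counter ordered by ≤ whose join is `max` — this is an illustration of the concept, not the LVish API, and the names (`MaxLVar`, `put`, `get_threshold`) are mine:

```python
import threading

class MaxLVar:
    """A toy LVar whose states are integers ordered by <=.

    Writes take the least upper bound (here, max), so concurrent
    writes commute; a threshold read blocks until the state reaches
    the threshold and then returns the threshold itself, hiding the
    exact (possibly race-dependent) state.
    """
    def __init__(self):
        self._state = 0
        self._cond = threading.Condition()

    def put(self, v):
        with self._cond:
            self._state = max(self._state, v)  # join = least upper bound
            self._cond.notify_all()

    def get_threshold(self, t):
        with self._cond:
            self._cond.wait_for(lambda: self._state >= t)
            return t  # return the threshold, not the exact state

lv = MaxLVar()
writers = [threading.Thread(target=lv.put, args=(v,)) for v in (3, 7, 5)]
for w in writers:
    w.start()
print(lv.get_threshold(5))  # prints 5, regardless of write order
for w in writers:
    w.join()
```

The point of returning the threshold rather than the current state is what makes the read deterministic: no matter how the three writes interleave, every run prints the same answer.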

Call for papers: IFL 2014

This year, I’m on the program committee for IFL, the annual Symposium on Implementation and Application of Functional Languages. It’ll be held at Northeastern University in Boston this October, and the call for papers is open!

Draft: “Deterministic Threshold Queries of Distributed Data Structures”

I’m happy to announce a draft paper on my work (in collaboration with my advisor, Ryan Newton) on bringing LVar-style threshold reads to the setting of convergent replicated data types (CvRDTs). In this paper, we define what it means for a CvRDT to support threshold reads, and we show that threshold reads of CvRDTs behave deterministically.

Determinism means something a little different in the distributed setting than it does in the shared-memory setting that we’re used to with LVars. The determinism property we show in the paper is: if a threshold query on a replica returns a particular result, then (1) subsequent runs of that query on that replica will always return that same result, and (2) any run of that query on any replica will eventually return that same result, and will block until it does so. (All this is under certain assumptions, of course, which we spell out in more detail in the paper.)
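As a concrete illustration of that property, consider a grow-only set, one of the simplest CvRDTs: its state is a set, updates only add elements, and replicas merge by set union. A threshold query like “is x in the set?” can only move from “not yet” to “yes,” and once any replica answers “yes,” every replica eventually will. This sketch is mine, not the paper’s formalism; blocking is simulated by returning `None` for “not yet,” and the names (`GSetReplica`, `threshold_contains`) are hypothetical:

```python
class GSetReplica:
    """A grow-only set CvRDT replica: state is a set, join is union."""
    def __init__(self):
        self.state = set()

    def add(self, x):
        """Local update: only adds, so the state grows monotonically."""
        self.state.add(x)

    def merge(self, other):
        """Join = set union: commutative, associative, idempotent."""
        self.state |= other.state

    def threshold_contains(self, x):
        """Threshold query: True once x is present; None means 'not yet'
        (a real implementation would block instead of returning None).
        Once this returns True, it returns True on every later run."""
        return True if x in self.state else None

a, b = GSetReplica(), GSetReplica()
a.add("apple")
print(b.threshold_contains("apple"))  # None: b hasn't seen the write yet
b.merge(a)                            # anti-entropy delivers a's state
print(b.threshold_contains("apple"))  # True, and stays True forever
```

Because elements are never removed and merges only grow the state, a `True` answer can never be invalidated later, which is exactly properties (1) and (2) above in miniature.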

A one-minute talk about “Taming the Parallel Effect Zoo”

At PLDI a few weeks ago, my advisor Ryan Newton presented our paper “Taming the Parallel Effect Zoo: Extensible Deterministic Parallelism with LVish”. This year, PLDI asked the authors of each paper to give a short, one-minute talk, accompanied by a single slide, in the morning on the day of their main, twenty-minute presentation. The goal of the one-minute talks was to encourage people to attend the main talks and help them decide which ones to see. On each morning of the conference, all of that day’s one-minute talks were presented one after another, right after the keynote presentation.

Things you can do to get ready for PL grad school

I recently got an email from someone who was about to graduate with an undergrad CS degree and was interested in pursuing programming languages research. They were planning on going to grad school, but wanted to know what they could do between now and when grad school would start (in fall 2015 at the earliest, since that’s how application cycles for Ph.D. programs work) to keep their head in the PL research game. Here are some thoughts on that topic, based on my own experience. My advice is targeted toward people who are graduating from undergrad CS programs soon, or who graduated recently and are working in industry.

Your next conference should have real-time captioning

I was one of the organizers of !!Con, a free conference about the joy, excitement and surprise of programming that happened two weeks ago in New York.

We did a number of things that I think helped set the conference apart — for instance, we had an anonymous talk review process. But one thing we did that I’m particularly glad of was having real-time captioning of the talks at the conference. As each presenter spoke, Mirabai Knight transcribed the text of their talk on her steno machine, in real time, at up to 260 words per minute, and projected it on a screen that the whole room could see.