This year’s International Conference on Functional Programming, or ICFP ’17 to its friends, will be getting under way in Oxford, UK in just a couple of days. ICFP is one of my favorite conferences, and this year I’m serving as the publicity chair and as a member of the program committee, so I have even more reasons than usual to want to take part in it. I also have a new baby, so intercontinental travel is rather difficult for me at the moment, and I won’t be going to Oxford. Yet, I’ll still be able to watch ICFP talks in real time, ask questions in Q&A sessions, and banter with other ICFP attendees. All this will be possible because this year we made some changes to enable remote participation at the conference.
In my last post, I wrote about a few ways that people use the word “transpiler”. In this post, I’ll offer a more personal take on the topic, based on my own experience of learning compiler development.
The first compiler I ever worked on was the one I wrote in the spring of 2009 for Kent Dybvig’s graduate compilers course at Indiana University. Actually, I didn’t write just one compiler for Kent’s course that semester; I wrote fifteen compilers, one for each week of the course. The first one had an input language that was more or less just parenthesized assembly language; its target language was x86-64 assembly. Each week, we added more passes to the front of the previous week’s compiler, resulting in a new compiler with the same target language as the compiler of the previous week, but a slightly higher-level input language.1 By the end of the course, I had a compiler that compiled a substantial subset of Scheme to x86-64, structured as forty small passes. Each pass translated from its input language to a slightly lower-level language, or had the same input and output language but performed some analysis or optimization on it.
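The pass-pipeline structure described above can be sketched in a few lines. This is a hypothetical toy in Python, not the actual course compiler; the pass names and the tiny intermediate languages are invented for illustration. The point is just that each pass either lowers the program to a slightly simpler language or transforms it within the same language:

```python
# Toy sketch of a multi-pass compiler pipeline (hypothetical example,
# not the compiler from the course). Expressions are nested tuples.

def expand_let(expr):
    # Lowering pass: rewrite (let x e body) into ((lambda (x) body) e),
    # producing a slightly lower-level language without `let`.
    if isinstance(expr, tuple) and expr[0] == "let":
        _, var, val, body = expr
        return (("lambda", var, expand_let(body)), expand_let(val))
    return expr

def constant_fold(expr):
    # Same-language pass: fold (+ n m) when both operands are integers.
    if isinstance(expr, tuple) and expr[0] == "+":
        a, b = constant_fold(expr[1]), constant_fold(expr[2])
        if isinstance(a, int) and isinstance(b, int):
            return a + b
        return ("+", a, b)
    return expr

PASSES = [expand_let, constant_fold]

def compile_expr(expr):
    # Run every pass in order; each one lowers or cleans up the program.
    for p in PASSES:
        expr = p(expr)
    return expr

print(compile_expr(("+", 1, 2)))           # 3
print(compile_expr(("let", "x", 5, "x")))  # (('lambda', 'x', 'x'), 5)
```

A real nanopass-style compiler would have many more passes and would define the grammar of each intermediate language explicitly, but the overall shape, a list of small functions threaded together, is the same.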
Around 2013 or so, it started to become fashionable to use the word “transpiler” for certain kinds of compilers. When people say “transpiler”, what kind of compiler do they mean?
Every spring, I help review talk proposals for !!Con, a conference of ten-minute talks about the joy, excitement, and surprise of computing. Because distilling an interesting topic into ten minutes of material is hard, we ask prospective speakers to provide a timeline as part of their talk proposal, explaining how they plan to use their ten minutes of stage time. The timeline helps us make sure that the speaker understands the talk format, and that they’ve put some thought into what they’ll cover in their talk and how they’re going to fit the material into the allotted time.
We get a lot of talk proposals, so in order for a proposal to be competitive for acceptance, it needs to have a decent timeline, and a really good timeline can push a borderline proposal into the “accept” category. Conversely, a bad timeline can kill an otherwise promising talk proposal. So, this post is advice for people submitting talk proposals to !!Con (or, potentially, other conferences that also ask for timelines) about how to write a timeline that will make your talk proposal the best it can be.
Update (July 19, 2017): As it turns out, someone just told me that they used the advice in this post to improve a talk proposal that they had submitted to another conference and had been asked to revise and resubmit. They said that writing a timeline turned out to be a good way to rethink the proposal (even though the conference hadn’t explicitly asked for one), and that the proposal was subsequently accepted! So, the advice in this post might be more widely applicable than I thought.
I’m very happy to announce that “Parallelizing Julia with a Non-Invasive DSL”, by Todd Anderson, Paul Liu, Ehsan Totoni, Jan Vitek, Tatiana Shpeisman, and me, will appear at ECOOP 2017 in Barcelona a couple of weeks from now. This paper presents ParallelAccelerator, an open-source library and compiler for high-level, high-performance scientific computing in Julia. ECOOP is an open-access conference, and the paper will be permanently available for free as part of a LIPIcs volume; there’ll be an accompanying open-access artifact as well.
A few months ago, my group at Intel Labs began sponsoring and collaborating with a team of researchers at Stanford who are extending the capabilities of automated verification tools to formally verify properties of deep neural networks used in safety-critical systems. This is the second in a series of two posts about that work. In the previous post, we looked at what “verification” means, a particular system that we want to verify properties of, and what some of those properties are. In this post, we’ll dig into the verification process itself, as discussed in the Stanford team’s paper, “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”.
If you haven’t yet read the previous post, you may want to read it for background before continuing with this one.