I just got back from SPLASH 2015, held in beautiful and topologically interesting Pittsburgh!1 While there, I gave a talk in the SPLASH-I track, and thanks to the efforts of Michael, the SPLASH video chair, a recording of the talk is already up on YouTube.
In this talk, I give a brief introduction to the work that my colleagues at Intel Labs and I have been doing on Prospect, a system for high-performance scientific computing with a productivity language2, and ParallelAccelerator.jl, our implementation of Prospect in Julia.
At a high level, the idea is to identify implicit parallel patterns, like map, reduce, array comprehension, and stencil, that for the most part already appear in user programs — especially if the programmer is writing code using high-level array operations that are common in scientific computing anyway. We compile these implicit parallel patterns to explicit parallel for loops, and we do some aggressive optimizations along the way so that we can avoid the runtime overhead of things like array bounds checks and allocation of intermediate arrays.
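To make this concrete, here's a small sketch of the kind of code involved (the @acc macro is ParallelAccelerator's entry point; the function itself is my own illustrative example, not one from the talk). The element-wise operations are implicit maps, and the call to sum is an implicit reduce:

```julia
using ParallelAccelerator

# `.-` and `.*` are implicit map patterns over the arrays, and `sum`
# is an implicit reduce. Under @acc, these compile to explicit
# parallel for loops, with the operations fused so that no
# intermediate arrays need to be allocated.
@acc function mean_squared_error(xs, ys)
    return sum((xs .- ys) .* (xs .- ys)) / length(xs)
end
```

Written this way, the code reads like ordinary high-level array-style Julia, which is the point: the parallel patterns are already there; the compiler just makes them explicit.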
Most excitingly, our team has just released ParallelAccelerator.jl as open source. It's up on GitHub, and you can install it from the Julia 0.4 REPL with Pkg.add("ParallelAccelerator"). (Complete instructions are in the README.) All the code that I present in the talk is also available in the "examples" directory of our project on GitHub, so you can try it out yourself.
Thanks to Tijs van der Storm and Jan Vitek for inviting me to speak at SPLASH-I, and to everyone who was a friendly and engaged audience for this talk. I only wish that I hadn’t forgotten the stroopwafels!
In the talk, I define a "productivity language" as one that lets you get stuff done at a level of abstraction that matches your domain expertise. Catanzaro et al. used "productivity-level language" in a similar way.