composition.al

How to write a timeline for a !!Con talk proposal

Every spring, I help review talk proposals for !!Con, a conference of ten-minute talks about the joy, excitement, and surprise of computing. Because distilling an interesting topic into ten minutes of material is hard, we ask prospective speakers to provide a timeline as part of their talk proposal, explaining how they plan to use their ten minutes of stage time. The timeline helps us make sure that the speaker understands the talk format, and that they’ve put some thought into what they’ll cover in their talk and how they’re going to fit the material into the allotted time.

We get a lot of talk proposals, so in order for a proposal to be competitive for acceptance, it needs to have a decent timeline, and a really good timeline can push a borderline proposal into the “accept” category. Conversely, a bad timeline can kill an otherwise promising talk proposal. So, this post is advice for people submitting talk proposals to !!Con (or, potentially, other conferences that also ask for timelines) about how to write a timeline that will make your talk proposal the best it can be.

Update (July 19, 2017): As it turns out, someone just told me that they used the advice in this post to improve a talk proposal that they had submitted to another conference and had been asked to revise and resubmit. They said that writing a timeline turned out to be a good way to rethink the proposal (even though the conference hadn’t explicitly asked for one), and that the proposal was subsequently accepted! So, the advice in this post might be more widely applicable than I thought.

“Parallelizing Julia with a Non-Invasive DSL” will appear at ECOOP 2017

I’m very happy to announce that “Parallelizing Julia with a Non-Invasive DSL”, by Todd Anderson, Hai Liu, Ehsan Totoni, Jan Vitek, Tatiana Shpeisman, and me, will appear at ECOOP 2017 in Barcelona a couple of weeks from now. This paper presents ParallelAccelerator, an open-source library and compiler for high-level, high-performance scientific computing in Julia. ECOOP is an open-access conference, and the paper will be permanently available for free as part of a LIPIcs volume; there’ll be an accompanying open-access artifact as well.

Update (June 16, 2017): The ECOOP 2017 LIPIcs volume is now available, under a Creative Commons Attribution license!

Proving that safety-critical neural networks do what they’re supposed to: where we are, where we’re going (part 2 of 2)

A few months ago, my group at Intel Labs began sponsoring and collaborating with a team of researchers at Stanford who are extending the capabilities of automated verification tools to formally verify properties of deep neural networks used in safety-critical systems. This is the second in a series of two posts about that work. In the previous post, we looked at what “verification” means, a particular system that we want to verify properties of, and what some of those properties are. In this post, we’ll dig into the verification process itself, as discussed in the Stanford team’s paper, “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”.

If you haven’t yet read the previous post, you may want to read it for background before continuing with this one.

Proving that safety-critical neural networks do what they’re supposed to: where we are, where we’re going (part 1 of 2)

Neural networks are turning up everywhere these days, including in safety-critical systems, such as autonomous driving and flight control systems. When these systems fail, human lives are at risk. But it’s hard to provide formal guarantees about the behavior of neural networks — how can we know for sure that they won’t steer us the wrong way?

A few months ago, my group at Intel Labs (including, among others, Tatiana Shpeisman, Justin Gottschlich, and me) began sponsoring and collaborating with a team of researchers at Stanford who are extending the capabilities of automated verification tools, such as SMT solvers, to formally verify properties of deep neural networks used in safety-critical systems. The team, which includes Guy Katz, Kyle Julian, and faculty members Clark Barrett and Mykel Kochenderfer, has developed a prototype SMT solver tool that can verify safety properties of a neural network that gives aircraft collision avoidance advice. For instance, we can use it to show that if another plane is near our plane and approaching from the left, the network will always advise us to turn right. The Stanford team’s first paper on this project, “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”, will appear at CAV 2017 this July.
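
To make the kind of property we’re checking a bit more concrete, here’s a minimal sketch of the idea in Python, using the Z3 SMT solver’s bindings. To be clear, this is not Reluplex and not the real collision avoidance network: the two-neuron “network”, its weights, the input encoding, and the bounds below are all made up for illustration. The shape of the query is the same, though: encode the network symbolically, assert the negation of the property over a region of inputs, and ask the solver for a counterexample. If the solver reports “unsat”, the property holds everywhere in that region.

```python
# A toy version of the verification problem described above. This is NOT the
# real collision avoidance network and NOT the Reluplex tool; the inputs,
# weights, and bounds here are invented purely to illustrate the encoding.
from z3 import Real, Solver, If, sat

def relu(x):
    # A ReLU becomes a symbolic if-then-else term in the solver.
    return If(x > 0, x, 0)

# Symbolic inputs (a hypothetical encoding): distance to the intruder
# aircraft, and its bearing in radians, negative meaning "off to our left".
distance = Real('distance')
bearing = Real('bearing')

# A tiny two-neuron ReLU "network" with hand-picked weights.
h1 = relu(-1.0 * bearing)       # activates when the intruder is on our left
h2 = relu(5.0 - distance)       # activates when the intruder is close
score_right = 2.0 * h1 + 1.0 * h2   # network's score for advising "turn right"
score_left = 0.5 * h2               # network's score for advising "turn left"

s = Solver()
# The input region we care about: the intruder is nearby and off to our left.
s.add(distance >= 0, distance <= 5, bearing >= -1, bearing <= -0.1)
# Assert the NEGATION of the property: the network fails to prefer "turn right".
s.add(score_right <= score_left)

if s.check() == sat:
    print("Counterexample found:", s.model())   # the property does not hold
else:
    print("Unsat: the network always advises 'turn right' in this region.")
```

For a network of realistic size, every ReLU introduces a case split like the one in relu above, and that combinatorial blowup is exactly what Reluplex is designed to handle lazily; this toy query is only meant to show what a safety property looks like once it’s in the solver’s hands.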

In this two-part series of posts, I’ll give a 30,000-foot (…so to speak) view of the aircraft collision avoidance system we want to verify properties of, how the verification tool works, some of the interesting insights that went into developing that tool, and where I’d like to see this project go next. This first post will cover what it means to verify properties of safety-critical neural networks and some detail about the collision avoidance system in question. In the second post, we’ll dig into SMT solving and the verification process itself.

What isn’t a high-performance DSL?

At work, I’ve been participating in a series of long-running, broad-ranging discussions about the role that domain-specific languages, or DSLs, can play in helping programmers exploit high-performance parallel hardware. One thing that’s interesting about these discussions is that they reveal people’s assumptions about what “DSL” means.