composition.al

“Parallelizing Julia with a Non-invasive DSL” will appear at ECOOP 2017

I’m very happy to announce that “Parallelizing Julia with a Non-invasive DSL”, by Todd Anderson, Paul Liu, Ehsan Totoni, Jan Vitek, Tatiana Shpeisman, and me, will appear at ECOOP 2017 in Barcelona a couple of weeks from now. This paper presents ParallelAccelerator, an open-source library and compiler for high-level, high-performance scientific computing in Julia. ECOOP is an open-access conference, and the paper will be permanently available for free as part of a LIPIcs volume; there’ll be an accompanying open-access artifact as well.
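To give a taste of the programming model, here’s a minimal sketch along the lines of the example in the ParallelAccelerator.jl README: you annotate ordinary array-style Julia code with the library’s `@acc` macro, and the annotated function gets compiled with its array operations parallelized, while the rest of the program stays plain Julia.

```julia
using ParallelAccelerator

# Annotating a function with @acc asks ParallelAccelerator to compile it,
# parallelizing array operations such as the element-wise .+ and .* below.
# The function body is unmodified, idiomatic Julia: the DSL is non-invasive.
@acc f(x) = x .+ x .* x

f([1.0, 2.0, 3.0, 4.0, 5.0])
```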

Update (June 16, 2017): The ECOOP 2017 LIPIcs volume is now available, under a Creative Commons Attribution license!

Proving that safety-critical neural networks do what they’re supposed to: where we are, where we’re going (part 2 of 2)

A few months ago, my group at Intel Labs began sponsoring and collaborating with a team of researchers at Stanford who are extending the capabilities of automated verification tools to formally verify properties of deep neural networks used in safety-critical systems. This is the second in a series of two posts about that work. In the previous post, we looked at what “verification” means, a particular system that we want to verify properties of, and what some of those properties are. In this post, we’ll dig into the verification process itself, as discussed in the Stanford team’s paper, “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”.

If you haven’t yet read the previous post, you may want to read it for background before continuing with this one.

Proving that safety-critical neural networks do what they’re supposed to: where we are, where we’re going (part 1 of 2)

Neural networks are turning up everywhere these days, including in safety-critical systems, such as autonomous driving and flight control systems. When these systems fail, human lives are at risk. But it’s hard to provide formal guarantees about the behavior of neural networks — how can we know for sure that they won’t steer us the wrong way?

A few months ago, my group at Intel Labs (including, among others, Tatiana Shpeisman, Justin Gottschlich, and me) began sponsoring and collaborating with a team of researchers at Stanford who are extending the capabilities of automated verification tools, such as SMT solvers, to formally verify properties of deep neural networks used in safety-critical systems. The team, which includes Guy Katz, Kyle Julian, and faculty members Clark Barrett and Mykel Kochenderfer, has developed a prototype SMT solver tool that can verify safety properties of a neural network that gives aircraft collision avoidance advice. For instance, we can use it to show that if another plane is near our plane and approaching from the left, the network will always advise us to turn right. The Stanford team’s first paper on this project, “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”, will appear at CAV 2017 this July.
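To make the flavor of such a property concrete, here’s a hypothetical toy sketch in Julia. The stand-in “network”, the advisory names, and the numeric thresholds are all invented for illustration (the real ACAS Xu networks are deep ReLU networks with several sensor inputs and five advisory outputs); the point is just the shape of the problem: a safety property is a universally quantified claim about the network’s advice, and testing on samples can falsify such a claim but never prove it, which is why an SMT solver is needed.

```julia
# Hypothetical toy stand-in for a collision avoidance network.
# Inputs: angle to the intruder in radians (negative = on our left)
# and distance in feet; output: one score per advisory.
const ADVISORIES = ["clear-of-conflict", "turn left", "turn right"]

toy_scores(angle, distance) =
    distance >= 1000 ? [1.0, 0.0, 0.0] :
    angle < 0        ? [0.1, 0.0, 1.0] :
                       [0.1, 1.0, 0.0]

advice(angle, distance) = ADVISORIES[argmax(toy_scores(angle, distance))]

# The safety property as a predicate: a nearby intruder approaching
# from the left always yields the advice "turn right".
property(angle, distance) =
    !(angle < 0 && distance < 1000) || advice(angle, distance) == "turn right"

# Random testing can only ever falsify a property like this one; the
# SMT-based approach proves that it holds for *every* input in the region.
@assert all(property(2 * rand() - 1, 2000 * rand()) for _ in 1:100_000)
```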

In this two-part series of posts, I’ll give a 30,000-foot (…so to speak) view of the aircraft collision avoidance system we want to verify properties of, how the verification tool works, some of the interesting insights that went into developing that tool, and where I’d like to see this project go next. This first post will cover what it means to verify properties of safety-critical neural networks and some detail about the collision avoidance system in question. In the second post, we’ll dig into SMT solving and the verification process itself.

What isn’t a high-performance DSL?

At work, I’ve been participating in a series of long-running, broad-ranging discussions about the role that domain-specific languages, or DSLs, can play in helping programmers exploit high-performance parallel hardware. One thing that’s interesting about these discussions is that they reveal what people’s assumptions are about what “DSL” means.

Scaling !!Con

As regular readers of this blog are probably tired of hearing by now, !!Con (“bang bang con”) is a weekend-long conference of ten-minute talks about experiencing computing viscerally, held annually in New York since 2014. I’ve been helping organize !!Con since the beginning, and the rest of the organizing team and I are now preparing to put on our 2017 event.

!!Con has been a success. In fact, year after year we find that the demand for what we’re doing is greater than what we’re able to meet: we get a lot more strong talk proposals than we have room to accept, and we have way more potential attendees than we have room for. It feels good to be wanted, but it also means that we’re constantly disappointing people, and that sucks.

So, how can we scale !!Con to meet demand?