composition.al

Proving that safety-critical neural networks do what they’re supposed to: where we are, where we’re going (part 1 of 2)

Neural networks are turning up everywhere these days, including in safety-critical systems such as autonomous driving and flight control. When these systems fail, human lives are at risk. But it’s hard to provide formal guarantees about the behavior of neural networks — how can we know for sure that they won’t steer us the wrong way?

A few months ago, my group at Intel Labs (including, among others, Tatiana Shpeisman, Justin Gottschlich, and me) began sponsoring and collaborating with a team of researchers at Stanford who are extending the capabilities of automated verification tools, such as SMT solvers, to formally verify properties of deep neural networks used in safety-critical systems. The team, which includes Guy Katz, Kyle Julian, and faculty members Clark Barrett and Mykel Kochenderfer, has developed a prototype SMT solver tool that can verify safety properties of a neural network that gives aircraft collision avoidance advice. For instance, we can use it to show that if another plane is near our plane and approaching from the left, the network will always advise us to turn right. The Stanford team’s first paper on this project, “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”, will appear at CAV 2017 this July.
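To make the idea of “verifying a property of a neural network” a little more concrete, here’s a minimal sketch of how a question like “does the network always advise turning right when the intruder is close and to our left?” could be posed to an off-the-shelf SMT solver, using Z3’s Python bindings. To be clear, this is not the Reluplex tool or the real collision avoidance network: the tiny two-neuron network, its weights, the input ranges, and all the variable names here are made up purely for illustration.

```python
# A minimal, made-up sketch of posing a safety property to an SMT solver.
# Not Reluplex and not the real collision avoidance network.
from z3 import Real, Solver, If, unsat

def relu(x):
    # Piecewise-linear ReLU, encoded as an if-then-else term for the solver.
    return If(x > 0, x, 0)

# Hypothetical inputs: distance to the intruder and its bearing
# (negative bearing = approaching from the left), already scaled.
distance = Real('distance')
bearing = Real('bearing')

# A made-up two-neuron network; real collision avoidance networks are far larger.
h1 = relu(0.5 * distance - 1.0 * bearing + 0.1)
h2 = relu(-0.3 * distance + 0.8 * bearing - 0.2)
score_right = 1.2 * h1 - 0.7 * h2   # score for the "turn right" advisory
score_left = -0.4 * h1 + 0.9 * h2   # score for the "turn left" advisory

s = Solver()
# Input region of interest: the intruder is close and approaching from the left.
s.add(distance >= 0, distance <= 1)
s.add(bearing >= -1, bearing <= -0.5)
# Assert the NEGATION of the property we hope holds: somewhere in that region,
# the network prefers "turn left" over "turn right".
s.add(score_left >= score_right)

# unsat means no counterexample exists, i.e., "turn right" wins everywhere
# in the region; sat would come with a concrete counterexample input.
print("property holds" if s.check() == unsat else "counterexample found")
```

If the solver answers unsat, then no input anywhere in the specified region makes the network prefer the “left” advisory, which is exactly the shape of guarantee we want. What makes the problem hard in practice is doing this kind of reasoning efficiently for networks of realistic size, which is what the Stanford team’s work is about.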

In this two-part series of posts, I’ll give a 30,000-foot (…so to speak) view of the aircraft collision avoidance system we want to verify properties of, how the verification tool works, some of the interesting insights that went into developing that tool, and where I’d like to see this project go next. This first post will cover what it means to verify properties of safety-critical neural networks and some detail about the collision avoidance system in question. In the second post, we’ll dig into SMT solving and the verification process itself.

What isn’t a high-performance DSL?

At work, I’ve been participating in a series of broad-ranging discussions about the role that domain-specific languages, or DSLs, can play in helping programmers exploit high-performance parallel hardware. One thing that’s interesting about these discussions is that they reveal what people’s assumptions are about what “DSL” means.

Scaling !!Con

As regular readers of this blog are probably tired of hearing by now, !!Con (“bang bang con”) is a weekend-long conference of ten-minute talks about experiencing computing viscerally, held annually in New York since 2014. I’ve been helping organize !!Con since the beginning, and the rest of the organizing team and I are now preparing to put on our 2017 event.

!!Con has been a success. In fact, year after year we find that the demand for what we’re doing is greater than what we’re able to meet: we get a lot more strong talk proposals than we have room to accept, and we have way more potential attendees than we have room for. It feels good to be wanted, but it also means that we’re constantly disappointing people, and that sucks.

So, how can we scale !!Con to meet demand?

Should you do an MS before doing a Ph.D.?

On the Recurse Center’s Zulip community, someone posted this request for advice recently:

I’m currently in the early stages of preparing my application to Ph.D. programs in machine learning. I have unrelated/tangentially-related research experience in economics and a more recent stint in computational biology that used standard ML algorithms. Additionally, my undergrad was in econ and math, so I’m a little light on CS and feel that it would take me at least a semester to get up to speed for research in the field. Currently, I anticipate having my rec letters come from my two former PIs and an old math professor. Is it plausible to jump straight to a Ph.D. in CS, or should I be looking to do an MS in CS first?

With their permission, I’m publicly sharing a version of the advice I gave them, which seems to be common knowledge among academics but less well known outside the bubble.

Call for talk proposals: !!Con 2017

!!Con (pronounced “bang bang con”) is a conference of ten-minute talks about the joy, excitement, and surprise of computing. I co-founded !!Con with a group of friends from the Recurse Center back in 2014, and it’s been held in New York each May since then. Right now, we’re preparing for our fourth conference, !!Con 2017, to be held in New York this May 6-7. We’ve just announced this year’s keynote speakers, Karen Sandler and Limor Fried, and opened our call for talk proposals, which will be open until March 20. Quoting from the call for talk proposals:

Over the last three years, !!Con talks have featured everything from poetry generation to Pokémon; from machine knitting to electroencephalography; from quantum computing to old DOS games. Do you have a favorite algorithm or data structure? A great story about that time you found a super-weird bug? A tool that you learned about and now you’re telling everyone and their cat?

We want to hear from tinkerers and practical types, scientists and artists, teachers and students, ordinary programmers and out-of-the-ordinary ones. We don’t care if what you talk about is “not smart enough” or “done before”; if you think it’s cool, we want to hear from you.