Neural networks are turning up everywhere these days, including in safety-critical applications such as autonomous driving and flight control. When these systems fail, human lives are at risk. But it’s hard to provide formal guarantees about the behavior of neural networks — how can we know for sure that they won’t steer us the wrong way?
A few months ago, my group at Intel Labs (including, among others, Tatiana Shpeisman, Justin Gottschlich, and me) began sponsoring and collaborating with a team of researchers at Stanford who are extending the capabilities of automated verification tools, such as SMT solvers, to formally verify properties of deep neural networks used in safety-critical systems. The team, which includes Guy Katz, Kyle Julian, and faculty members Clark Barrett and Mykel Kochenderfer, has developed a prototype SMT solver tool that can verify safety properties of a neural network that gives aircraft collision avoidance advice. For instance, we can use it to show that if another plane is near our plane and approaching from the left, the network will always advise us to turn right. The Stanford team’s first paper on this project, “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”, will appear at CAV 2017 this July.
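To make the idea of "verifying a property" concrete, here's a minimal sketch in plain Python. The two-input toy network, its weights, and the property below are all invented for illustration — the real ACAS Xu networks are far larger — but the shape of the claim is the same: *if* the inputs satisfy some precondition, *then* the network's output satisfies some postcondition. Note that evaluating the property at a few points, as below, can only find counterexamples; an SMT-based tool like Reluplex instead proves the implication for every input in the region.

```python
# Hypothetical toy example: a tiny ReLU network and a safety property.
# The architecture, weights, and property are made up for illustration;
# they are not the real ACAS Xu networks or properties.

def relu(x):
    return max(0.0, x)

def toy_network(intruder_distance, intruder_angle):
    # One hidden layer with two ReLU units, then a single "advisory" score.
    h1 = relu(0.5 * intruder_distance - 1.0 * intruder_angle)
    h2 = relu(-0.3 * intruder_distance + 2.0 * intruder_angle)
    return 1.5 * h1 - 0.8 * h2  # > 0 means "turn right" in this toy encoding

def property_holds(intruder_distance, intruder_angle):
    # Property: if the intruder is close (distance < 1.0) and on our left
    # (angle < 0), the network advises turning right (score > 0).
    if intruder_distance < 1.0 and intruder_angle < 0.0:
        return toy_network(intruder_distance, intruder_angle) > 0.0
    return True  # vacuously true outside the precondition

# Spot-checking one input in the region of interest:
print(property_holds(0.5, -0.2))  # → True
```

A verifier treats the precondition, the network's layers, and the negated postcondition as constraints, and asks a solver whether any input can satisfy them all at once; "unsatisfiable" means the property holds everywhere.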
In this two-part series of posts, I’ll give a 30,000-foot (…so to speak) view of the aircraft collision avoidance system whose properties we want to verify, how the verification tool works, some of the interesting insights that went into developing that tool, and where I’d like to see this project go next. This first post will cover what it means to verify properties of safety-critical neural networks, along with some detail about the collision avoidance system in question. In the second post, we’ll dig into SMT solving and the verification process itself.