composition.al

A simple but difficult arithmetic puzzle, and the rabbit hole it took me down

A while back, Mark Dominus proposed an arithmetic puzzle: combine the numbers 6, 6, 5, and 2 with arithmetic operations (addition, subtraction, multiplication, and division) to get 17. After fiddling with the problem for a bit on my own and not being able to solve it, I decided to write a solver, and I ended up falling down an unexpected rabbit hole and dragging a few friends down with me.

(If you want to try to solve the puzzle on your own, or if you want to write a solver without having seen someone else’s, you may want to go do that before continuing to read this post.)
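For a taste of what a solver involves, here is a minimal brute-force sketch in Python. It shows one straightforward approach, not the solver developed in this post: repeatedly pick two of the remaining numbers, combine them with each operation in each order, and recurse. It uses exact Fraction arithmetic, since the target may be reachable only through non-integer intermediate values, and floating-point equality tests would be unreliable.

```python
from fractions import Fraction
from itertools import combinations

def solve(terms, target):
    """terms is a list of (value, expression) pairs. Returns an
    expression string combining all of them into target, or None."""
    if len(terms) == 1:
        value, expr = terms[0]
        return expr if value == target else None
    # Pick two terms, combine them every possible way, and recurse
    # on the shortened list.
    for (i, (a, ea)), (j, (b, eb)) in combinations(enumerate(terms), 2):
        rest = [terms[k] for k in range(len(terms)) if k not in (i, j)]
        candidates = [(a + b, f"({ea} + {eb})"),
                      (a - b, f"({ea} - {eb})"),
                      (b - a, f"({eb} - {ea})"),
                      (a * b, f"({ea} * {eb})")]
        if b != 0:
            candidates.append((a / b, f"({ea} / {eb})"))
        if a != 0:
            candidates.append((b / a, f"({eb} / {ea})"))
        for value, expr in candidates:
            found = solve(rest + [(value, expr)], target)
            if found is not None:
                return found
    return None

print(solve([(Fraction(n), str(n)) for n in (6, 6, 5, 2)], 17))
```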

My adventures with git-filter-branch

Not too long ago, I found myself in a tricky situation with git. I had an enormous git repo containing a few things I wanted to keep and many, many things I didn’t, including some large files that probably never should have been checked in. I wanted to create a fresh copy of the repo that included only a small subset of the files from the original repo’s master branch (and only the history pertaining to those files).

git-subtree wasn’t the right tool for the job, because I wanted to include some but not all of the files in certain directories. I would have liked to use the BFG Repo-Cleaner, but it wasn’t the right tool either, because it has a “your current files are sacred” assumption, and I didn’t want to keep everything in HEAD. In fact, in the original repo, HEAD had about 5000 files (with many more files that weren’t in HEAD but had existed in history at various points), and there were only about 100 files that I wanted to keep. So it was time to take the ol’ git-filter-branch chainsaw out for a spin.
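To give a flavor of the chainsaw in action, here is a sketch of the kind of --index-filter invocation this calls for. It is not the exact command from this repo surgery: keep-list.txt is a hypothetical file listing one repo-relative path per line, and the sketch assumes no filename contains whitespace.

```bash
# For every commit, delete everything from the index except the files
# named in keep-list.txt. The keep list must be given by absolute path,
# because filter-branch runs its filters from a temporary directory.
# --prune-empty drops commits that no longer touch anything.
git filter-branch --prune-empty --index-filter '
  git ls-files |
    grep -v -x -F -f /absolute/path/to/keep-list.txt |
    xargs -r git rm --cached -q --ignore-unmatch
' master
```

Working on the index rather than the tree (--index-filter instead of --tree-filter) avoids checking out every file of every commit, which matters when the history is as big as this one was.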

Thoughts on “Safety Verification of Deep Neural Networks”

A couple of months back, I wrote about Pulina and Tacchella’s 2010 paper on verification of safety properties of neural networks. That paper is the earliest that I’ve found in the literature that makes a serious attempt at verifying neural networks using an SMT solver, and I found it really interesting.

In that post, I also mentioned a number of limitations of Pulina and Tacchella’s approach. First, the network used in the paper’s case study was tiny, and it wasn’t clear that the technique would scale to realistically sized neural networks. Second, the property being verified (ensuring that the network’s output fell within a specified range) seemed a bit boring; I was interested in whether it was possible to verify more interesting properties. Third, the function the network was learning — the position of the end of a robot arm, given its joint angles — seemed like something that could be worked out analytically, without the need for a neural network (or any machine learning at all). And finally, it wasn’t clear that the proposed verification approach would work for neural networks that did classification tasks, rather than regression tasks like the one in the case study. A 2011 follow-up paper by the authors addressed some of those concerns, but not all of them.
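To make the kind of property involved concrete, here is a toy output-range check in Python using the z3 SMT solver (pip install z3-solver). It is not the paper’s encoding: the weights are made up, and it uses ReLU activations in place of the paper’s sigmoids, because a ReLU translates exactly into an If term while a sigmoid has to be approximated.

```python
from z3 import Real, RealVal, If, And, Or, Solver, sat

def relu(x):
    return If(x > 0, x, RealVal(0))

x1, x2 = Real("x1"), Real("x2")

# A tiny one-hidden-layer network with arbitrary illustrative weights.
h1 = relu(2 * x1 - x2 + 1)
h2 = relu(-x1 + 3 * x2)
y = h1 - h2 + RealVal("1/2")

s = Solver()
s.add(And(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1))  # the input domain
s.add(Or(y < -5, y > 5))  # negation of the property "y stays in [-5, 5]"

if s.check() == sat:
    print("property violated at", s.model())
else:
    print("property holds on the whole input domain")
```

Even this toy encoding hints at the scaling problem: each ReLU becomes a case split, so the number of cases the solver may have to explore can grow exponentially with the number of activations.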

An economics analogy for why adversarial examples work

One of the most interesting results from “Explaining and Harnessing Adversarial Examples” is the idea that adversarial examples for a machine learning model do not arise because of the supposed complexity or nonlinearity of the model, but rather because of the high dimensionality of the input space. I want to take a stab at explaining “Explaining”’s result with an economics analogy. Take it with a grain of salt, since I have little to no formal training in either machine learning or economics. Let’s go!

The one-sentence version of this post: There are economies of scale with dimensions when constructing adversarial examples.
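As a warm-up before the analogy, here is a back-of-the-envelope numpy sketch of that sentence (random weights chosen purely for illustration, not anything from the paper). For a linear model w · x, nudging each input coordinate by a tiny ε in the direction of sign(w) shifts the output by ε times the L1 norm of w, and that norm grows with the number of dimensions even though no single coordinate moves by more than ε.

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.01  # a tiny, individually negligible per-coordinate nudge

for n in (10, 1_000, 100_000):
    w = rng.normal(size=n)              # weights of a linear model
    x = rng.normal(size=n)              # some input
    x_adv = x + epsilon * np.sign(w)    # nudge every coordinate by epsilon
    shift = w @ x_adv - w @ x           # equals epsilon * ||w||_1
    print(f"n = {n:>7}: output shift = {shift:.2f}")
```

Each added dimension is one more place to bank a tiny contribution to the output, which is exactly the economies-of-scale flavor of the analogy below.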