Should you do an MS before doing a Ph.D.?

On the Recurse Center’s Zulip community, someone posted this request for advice recently:

I’m currently in the early stages of preparing my application to Ph.D. programs in machine learning. I have unrelated/tangentially-related research experience in economics and a more recent stint in computational biology that used standard ML algorithms. Additionally, my undergrad was in econ and math, so I’m a little light on CS and feel that it would take me at least a semester to get up to speed for research in the field. Currently, I anticipate having my rec letters come from my two former PIs and an old math professor. Is it plausible to jump straight to a Ph.D. in CS, or should I be looking to do an MS in CS first?

With their permission, I’m publicly sharing a version of the advice I gave them, which seems to be common knowledge among academics but is less well known outside the bubble.

Call for talk proposals: !!Con 2017

!!Con (pronounced “bang bang con”) is a conference of ten-minute talks about the joy, excitement, and surprise of computing. I co-founded !!Con with a group of friends from the Recurse Center back in 2014, and it’s been held in New York each May since then. Right now, we’re preparing for our fourth conference, !!Con 2017, to be held in New York this May 6-7. We’ve just announced this year’s keynote speakers, Karen Sandler and Limor Fried, and opened our call for talk proposals, which will be open until March 20. Quoting from the call for talk proposals:

Over the last three years, !!Con talks have featured everything from poetry generation to Pokémon; from machine knitting to electroencephalography; from quantum computing to old DOS games. Do you have a favorite algorithm or data structure? A great story about that time you found a super-weird bug? A tool that you learned about and now you’re telling everyone and their cat?

We want to hear from tinkerers and practical types, scientists and artists, teachers and students, ordinary programmers and out-of-the-ordinary ones. We don’t care if what you talk about is “not smart enough” or “done before”; if you think it’s cool, we want to hear from you.

Why does a Turing-complete type system make type checking undecidable?

Last May, I wrote about an article by Radu Grigore that claimed that “all Java type checkers have bugs”, and what that claim might have meant. Later, Grigore expanded his original short article into a full-length paper that appeared at POPL 2017 a few weeks ago. The appearance of “Java Generics are Turing Complete” at POPL made me go back and reconsider what I wrote last May, given the new version of the paper. It turns out that the new version revises the sentence I quoted at the beginning of my May 2016 post from

For Java, Theorem 1 implies that there cannot exist a formally verified type checker: All Java type checkers have bugs.

to

For Java, Theorem 1 implies that a formally verified type checker that guarantees partial correctness cannot also guarantee termination.
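To see why partial correctness and termination can be in tension, here is a toy sketch in Python — this is an invented illustration, not Grigore’s actual Java construction. The idea is that if a subtyping rule can make the query *grow* at each step, a naive checker that faithfully follows the rules never terminates on some inputs, so any terminating checker must give up (or be wrong) somewhere:

```python
from typing import Optional, Union

# A "type" is either a name like 'N', or ('C', t) standing for a generic C<t>.
Type = Union[str, tuple]

def subtype(sub: Type, sup: Type, fuel: int = 500) -> Optional[bool]:
    """Decide sub <: sup; return None when the fuel limit is hit,
    i.e. where the naive (fuel-less) checker would recurse forever."""
    if fuel == 0:
        return None
    if sub == sup:
        return True
    # Invented rule for illustration: deciding C<t> <: N requires first
    # deciding the strictly *larger* query C<C<t>> <: N. Each recursive
    # step grows the problem instead of shrinking it.
    if isinstance(sub, tuple) and sub[0] == 'C' and sup == 'N':
        return subtype(('C', sub), sup, fuel - 1)
    return False
```

With expansive rules like this, a checker that only answers when it is right (partial correctness) cannot also promise an answer on every input (termination) — the trade-off the revised sentence describes.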

A simple but difficult arithmetic puzzle, and the rabbit hole it took me down

A while back, Mark Dominus proposed an arithmetic puzzle: combine the numbers 6, 6, 5, and 2 with arithmetic operations (addition, subtraction, multiplication, and division) to get 17. After fiddling with the problem for a bit on my own and not being able to solve it, I decided to write a solver, and I ended up falling down an unexpected rabbit hole and dragging a few friends down with me.

(If you want to try to solve the puzzle on your own, or if you want to write a solver without having seen someone else’s, you may want to go do that before continuing to read this post.)
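A brute-force solver for puzzles of this shape can be sketched as follows — a minimal sketch of the general approach, not the solver from the post. It tries every ordering of the numbers, every parenthesization, and every operator at each split, and uses exact rational arithmetic so that intermediate fractions (which this puzzle turns out to need) aren’t lost to floating-point rounding:

```python
from fractions import Fraction
from itertools import permutations

def solve(numbers, target):
    """Return an expression string combining `numbers` to reach `target`
    with +, -, *, /, or None if no such expression exists."""
    target = Fraction(target)

    def combos(nums):
        # Yield (value, expression) for every way to combine nums in this
        # order, over all parenthesizations (all binary-tree shapes).
        if len(nums) == 1:
            yield Fraction(nums[0]), str(nums[0])
            return
        for i in range(1, len(nums)):
            for lv, ls in combos(nums[:i]):
                for rv, rs in combos(nums[i:]):
                    yield lv + rv, f"({ls} + {rs})"
                    yield lv - rv, f"({ls} - {rs})"
                    yield lv * rv, f"({ls} * {rs})"
                    if rv != 0:
                        yield lv / rv, f"({ls} / {rs})"

    for perm in set(permutations(numbers)):
        for value, expr in combos(perm):
            if value == target:
                return expr
    return None
```

Calling `solve([6, 6, 5, 2], 17)` searches the whole expression space and returns a parenthesized solution string (or None if the target were unreachable).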