At work, I’ve been participating in a series of long-running, wide-ranging discussions about the role that domain-specific languages, or DSLs, can play in helping programmers exploit high-performance parallel hardware. One interesting thing about these discussions is that they reveal people’s differing assumptions about what “DSL” means.
As regular readers of this blog are probably tired of hearing by now, !!Con (“bang bang con”) is a weekend-long conference of ten-minute talks about experiencing computing viscerally, held annually in New York since 2014. I’ve been helping organize !!Con since the beginning, and the rest of the organizing team and I are now preparing to put on our 2017 event.
!!Con has been a success. In fact, year after year we find that the demand for what we’re doing is greater than what we’re able to meet: we get a lot more strong talk proposals than we have room to accept, and we have way more potential attendees than we have room for. It feels good to be wanted, but it also means that we’re constantly disappointing people, and that sucks.
So, how can we scale !!Con to meet demand?
On the Recurse Center’s Zulip community, someone posted this request for advice recently:
I’m currently in the early stages of preparing my application to Ph.D. programs in machine learning. I have unrelated/tangentially-related research experience in economics and a more recent stint in computational biology that used standard ML algorithms. Additionally, my undergrad was in econ and math, so I’m a little light on CS and feel that it would take me at least a semester to get up to speed for research in the field. Currently, I anticipate having my rec letters come from my two former PIs and an old math professor. Is it plausible to jump straight to a Ph.D. in CS, or should I be looking to do an MS in CS first?
With their permission, I’m publicly sharing a version of the advice I gave them, which seems to be common knowledge among academics but less well known outside the bubble.
!!Con (pronounced “bang bang con”) is a conference of ten-minute talks about the joy, excitement, and surprise of computing. I co-founded !!Con with a group of friends from the Recurse Center back in 2014, and it’s been held annually in New York each May since then. Right now, we’re preparing for our fourth conference, !!Con 2017, to be held in New York this May 6-7. We’ve just announced this year’s keynote speakers, Karen Sandler and Limor Fried, and opened our call for talk proposals, which will be open until March 20. Quoting from the call for talk proposals:
Over the last three years, !!Con talks have featured everything from poetry generation to Pokémon; from machine knitting to electroencephalography; from quantum computing to old DOS games. Do you have a favorite algorithm or data structure? A great story about that time you found a super-weird bug? A tool that you learned about and now you’re telling everyone and their cat?
We want to hear from tinkerers and practical types, scientists and artists, teachers and students, ordinary programmers and out-of-the-ordinary ones. We don’t care if what you talk about is “not smart enough” or “done before”; if you think it’s cool, we want to hear from you.
Last May, I wrote about an article by Radu Grigore claiming that “all Java type checkers have bugs”, and what that claim might have meant. Grigore later expanded his original short article into a full-length paper, which appeared at POPL 2017 a few weeks ago. The appearance of “Java Generics are Turing Complete” at POPL made me go back and reconsider what I wrote last May in light of the new version of the paper. It turns out that the new version revises the sentence I quoted at the beginning of my May 2016 post from
For Java, Theorem 1 implies that there cannot exist a formally verified type checker: All Java type checkers have bugs.
to
For Java, Theorem 1 implies that a formally verified type checker that guarantees partial correctness cannot also guarantee termination.