PurelyFunctional.tv: How did you get into Functional Programming?
Brandon Kase: I first learned about functional programming in my undergrad computer science program at CMU. However, at first I didn’t recognize its usefulness for building practical software. During my junior year, I took a compilers class and built a C compiler in Scala with a partner who had experience working with Scala in industry. Our use of FP ensured that our compiler handled all of its edge cases before the program ever ran (despite some extra boilerplate, since we were novices).
I like building mobile apps, so once out of school I joined a small firm -- Highlight -- where I was planning on doing mostly Android work. However, in the summer immediately after I graduated and before I started work, Apple released the Swift programming language. I immediately recognized the similarity with Scala. I remembered how Scala could catch my mistakes. At Highlight, I did start out doing Android work in Java, but I quickly became frustrated with the norms of Android programming (until we switched to Kotlin!), and even the mainstream software engineering discipline as a whole. I had seen the light when I had worked with Scala. Soon after, we needed to build an app from scratch on iOS and we decided to use Swift. I quickly became obsessed with copying what I had done in Scala: using algebraic data types, making impossible states impossible to represent, and using value types in as many places as possible. At some point, I hit the limits with what I already knew and started reading books, articles and blog posts, and watching meetup and conference talks about other functional programming languages. And still, today, I am continually trying to learn and explore functional programming.
PF.tv: What is your talk about?
BK: Consider what it means to be a cache. You need to be able to (1) associate a key with a value and (2) get some value given a key if such a value exists. That’s basically it. Caches tend to appear in layers. In a CPU, memory reads check L1, then L2, then L3, then RAM. When you want to load an image, you first check RAM, then disk, and finally network. We can help ensure a clean, correct implementation and encourage using caches in more places (leading to more responsive services and apps) by making caching composable. Basic abstract algebra can help! My talk is about rethinking the caching layer of applications by breaking caches into small pieces and putting them back together algebraically.
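To make the two requirements and the layering idea concrete, here is a minimal sketch in Swift. This is my own closure-based illustration with hypothetical names, not the API of any particular library: a cache is just a pair of functions, and layering tries the faster cache first, falling through to the slower one on a miss.

```swift
// A cache: (1) associate a key with a value, (2) maybe get a value for a key.
// Modeled as a pair of closures for simplicity (hypothetical sketch).
struct Cache<Key, Value> {
    let get: (Key) -> Value?
    let set: (Key, Value) -> Void
}

// Layering: check the faster cache first; on a miss, fall through to the
// slower one and backfill the faster layer with whatever it finds.
func layer<K, V>(_ fast: Cache<K, V>, _ slow: Cache<K, V>) -> Cache<K, V> {
    return Cache<K, V>(
        get: { key in
            if let hit = fast.get(key) { return hit }
            guard let value = slow.get(key) else { return nil }
            fast.set(key, value)   // promote into the faster layer
            return value
        },
        set: { key, value in
            fast.set(key, value)
            slow.set(key, value)
        }
    )
}
```

Because `layer` takes two caches and returns a cache of the same shape, an L1/L2/L3/RAM hierarchy, or a RAM/disk/network image loader, is just repeated application of the same combinator.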
Traditionally, caching has been one of the most challenging parts of our programs to reason about. This was certainly the case in the app I was working on. Our caching layer became so hard to reason about that even though we had a tight deadline, I spent some time searching for a better answer. That’s when I stumbled upon a library called Carlos, which took a very interesting approach to caching, one that I had not seen before. Rebuilding our caching layer on top of Carlos vastly simplified the caching logic in our app. I noticed that this approach to caching had an interesting algebraic structure, and I figured that such a structure must have been studied by someone and must have a name.
Eventually, I learned about monoids, and later about the members of the functor family needed for cache layering (profunctors and contravariant functors). In this talk, I’ll share what I found by re-deriving this cache construction from the ground up.
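To hint at where the contravariant and profunctor structure shows up, here is a small sketch, again using a hypothetical closure-based cache rather than any real library's API. Keys only flow *into* a cache, so adapting the key type needs a function in one direction (the contravariant part); values flow both out of `get` and into `set`, so adapting the value type needs an invertible pair of functions.

```swift
// Hypothetical closure-based cache sketch (not a real library's API).
struct Cache<Key, Value> {
    let get: (Key) -> Value?
    let set: (Key, Value) -> Void
}

extension Cache {
    // Keys are only consumed, so one direction suffices: contravariance.
    func mapKeys<K2>(_ f: @escaping (K2) -> Key) -> Cache<K2, Value> {
        return Cache<K2, Value>(
            get: { self.get(f($0)) },
            set: { self.set(f($0), $1) }
        )
    }

    // Values are both produced (get) and consumed (set), so we need
    // a pair of functions going both ways.
    func mapValues<V2>(out: @escaping (Value) -> V2,
                       into: @escaping (V2) -> Value) -> Cache<Key, V2> {
        return Cache<Key, V2>(
            get: { self.get($0).map(out) },
            set: { self.set($0, into($1)) }
        )
    }
}
```

This is handy in practice: for example, a disk cache that fundamentally stores `Data` under string keys can be adapted to look like a cache of decoded images keyed by URLs, without touching its internals.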
PF.tv: Who is your talk for?
BK: My talk is for two kinds of people: those who have been frustrated with the caching logic in their programs, on the front end or the back end, and those who are fascinated by interesting applications of algebraic structures. Hopefully, those in either camp will come to appreciate the other camp after this talk.
PF.tv: What do you hope people will take away from the talk?
BK: As I mentioned earlier, adopting this approach simplifies the caching logic in your application. So, those who have been annoyed with caching logic that is hard to reason about should consider taking this approach. On the other side (from the theory point of view), it’s fascinating to see structures originally abstracted from numbers (abstract algebra) appear in novel ways. It’s common sense that zero is the identity with respect to addition on integers, but it’s more interesting to realize that a cache that always misses is the identity with respect to layering caches. Not only that, but this approach of formalizing composition can be carried over and applied to other domains.
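The "always-miss cache is the identity" observation can be checked directly with a small sketch. As before, this uses a hypothetical closure-based cache and layering combinator of my own naming, not any specific library: layering the always-miss cache on either side of a real cache leaves its behavior unchanged, just as adding 0 on either side of an integer does.

```swift
// Hypothetical closure-based cache sketch.
struct Cache<Key, Value> {
    let get: (Key) -> Value?
    let set: (Key, Value) -> Void
}

// Layering: faster cache first, fall through and backfill on a miss.
func layer<K, V>(_ fast: Cache<K, V>, _ slow: Cache<K, V>) -> Cache<K, V> {
    return Cache<K, V>(
        get: { key in
            if let hit = fast.get(key) { return hit }
            guard let value = slow.get(key) else { return nil }
            fast.set(key, value)
            return value
        },
        set: { key, value in
            fast.set(key, value)
            slow.set(key, value)
        }
    )
}

// The "zero" of layering: it never hits and silently ignores writes,
// so composing with it changes nothing.
func alwaysMiss<K, V>() -> Cache<K, V> {
    return Cache<K, V>(get: { _ in nil }, set: { _, _ in })
}
```

Together with the associativity of `layer`, this identity element is exactly what makes caches under layering a monoid.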
PF.tv: What concepts do you recommend people be familiar with to maximize their experience with the talk?
BK: People should be familiar with a programming language that has typeclasses. This talk is presented in Swift and takes advantage of Swift's version of typeclasses: protocols. I will attempt to derive the intuition for the algebra of cache layering in a way that doesn’t require any outside knowledge, but those who are curious can read up on monoids. I’ll briefly mention profunctors and contravariant functors, so the especially curious can read up on those as well.
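For readers who haven't seen protocols play the typeclass role, here is a minimal illustration (the names are my own, not from the talk): a `Monoid` protocol with an identity element and a combining operation, plus a generic function constrained by it, just as a typeclass would constrain a polymorphic function.

```swift
// A "typeclass" as a Swift protocol: an identity element plus an
// associative combining operation.
protocol Monoid {
    static var empty: Self { get }               // identity element
    func combined(with other: Self) -> Self      // associative operation
}

extension Int: Monoid {
    static var empty: Int { return 0 }           // 0 is the identity for +
    func combined(with other: Int) -> Int { return self + other }
}

extension Array: Monoid {
    static var empty: Array { return [] }        // [] is the identity for ++
    func combined(with other: Array) -> Array { return self + other }
}

// Works for any conforming type, like a typeclass-constrained function.
func concatAll<M: Monoid>(_ xs: [M]) -> M {
    return xs.reduce(M.empty) { $0.combined(with: $1) }
}
```

The same `concatAll` then folds integers by addition and arrays by concatenation, with the instance chosen by the type.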
PF.tv: What resources are available for people who want to study up before the talk?
BK: To understand Swift's protocol system, I’d recommend looking at the official documentation for Swift on Apple’s website. Since my examples are in Swift, looking over Swift's syntax there could be wise as well. The abstract algebra that I will discuss can be understood by reading a couple of Wikipedia pages: specifically, the page on monoids and the page on associativity. Those who want to study profunctors and contravariant functors should look up blog posts or meetup talks on those topics.
PF.tv: Where can people follow you online?
BK: People can follow me at @bkase_ on Twitter and @bkase on GitHub. People can also follow my blog and my personal site. Recently I’ve been focusing more on creating presentations for meetups and conferences rather than blogging, so my blog is not as up-to-date. If you search for “Brandon Kase” and Swift on YouTube, you’ll find a few conference and meetup talks that I’ve given (mostly related to functional programming).
PF.tv: Are there any projects you'd like people to be aware of? How can people help out?
BK: As a dependency for a command-line-parsing library I was working on, I needed a pretty-printer implementation, so I ported wl-pprint-annotated, an implementation of Wadler's “A prettier printer” (2003). This library uses algebraic reasoning to efficiently pretty-print documents whose layout depends on page width.
Recently, I’ve been doing work for Typelift in Swift. Typelift is an organization dedicated to bridging the gap between the theory and practice of functional programming, and it does so with Swift in an attempt to reach iOS developers. I’m most excited about a new Typelift framework created by Elviro Rocca called Abstract that exposes common algebraic structures without any extra pieces, in an attempt to make applications of algebraic structures (just like composable caches) more accessible. I'll be focusing most of my efforts on that library in the near term. In general, I recommend that anyone who is interested in learning more about functional programming and who works with Swift contribute to a Typelift project in some capacity. We will help any beginners out.
PF.tv: Where do you see the state of functional programming in 10 years?
BK: Hopefully everywhere. Compilers will be our friends. I want to see software engineers embracing category theory and algebraic structures in their code, but not only that: I hope that we will start to see lenses and recursion schemes used in more practical software. As Erik Meijer says, general recursion is just like imperative code’s goto. Additionally, I hope that the term and the concept of something like a monoid will be accepted across our industry.
PF.tv: If functional programming were a superhero, what superpower would it have?
BK: The power to freeze its enemies, just as functional programming exists in a world of frozen state. (Get it? Pure FP doesn't mutate global state.)