in Code

Recent Entries (Page 3)

  • Introduction to Singletons (Part 3)

    Welcome back! This article is part 3 of our journey through the singleton design pattern, and the great singletons library!

    This post will be a continuation of Part 1 and Part 2, so if you haven’t read those yet, now would be a good time to pause and do so (and also try to complete the exercises). Today we will be expanding on the ideas in those posts by working with more complex ways to restrict functions based on types. Like the previous posts, we will start by writing things “by hand”, and then jump into the singletons library to see how the framework gives you tools to work with these ideas in a smoother way.

    The first half of today’s post introduces a new application and design pattern that the use of singletons greatly enhances. The second half deals directly with lifting functions to the type level, which singletons and the singletons library make practical.
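
    As a small taste of the “by hand” style (a minimal sketch with hypothetical names, not the Door example from the post itself), here is a hand-rolled singleton for Bool and a function restricted by a type-level index:

    {-# LANGUAGE DataKinds      #-}
    {-# LANGUAGE GADTs          #-}
    {-# LANGUAGE KindSignatures #-}

    import Data.Kind (Type)

    -- a hand-rolled singleton for Bool: matching on the constructor
    -- tells the compiler exactly what the type-level index b is
    data SBool :: Bool -> Type where
      STrue  :: SBool 'True
      SFalse :: SBool 'False

    -- a value tagged with a type-level Bool
    newtype Flag (b :: Bool) = Flag String

    -- a function restricted by type: it can only ever be called on
    -- a Flag whose index is 'True
    useEnabled :: Flag 'True -> String
    useEnabled (Flag s) = s

    -- the singleton recovers the index at runtime, so we can branch
    -- on it while keeping the type-level guarantees
    describe :: SBool b -> Flag b -> String
    describe STrue  f = useEnabled f
    describe SFalse _ = "disabled"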

    Code in this post is built on GHC 8.6.1 with the nightly-2018-09-29 snapshot (so, singletons-2.5). However, unless noted, all of the code should still work with GHC 8.4 and singletons-2.4. Again, you can download the source for this file here, and, if stack is installed, you can drop into a ghci session with all of the bindings in scope by executing it:

    $ ./Door3.hs


  • Lenses embody Products, Prisms embody Sums

    I’ve written about a variety of topics on this blog, but one thing I haven’t touched on in much detail is lenses and optics. A big part of this is because there are already so many great resources on lenses.

    This post won’t be a “lens tutorial”, but rather a dive into a perspective on lenses and prisms that I’ve heard repeated many times (usually credited to Edward Kmett, but shachaf has helped me trace the origins back to this paste by Reid Barton) but never quite seen expanded on in depth. In particular, I’m going to talk about the perspective of lenses and prisms as embodying the essences of products and sums (respectively), the insights that perspective brings, how well it flows into the “profunctor optics” formulation, and how you can apply these observations to practical usage of lenses and prisms.
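
    To preview the correspondence (a minimal sketch of the idea only, not the post’s actual final code), a lens can be seen as a witness that its subject decomposes as a product, and a prism as a witness that it decomposes as a sum:

    {-# LANGUAGE ExistentialQuantification #-}

    -- a lens on s witnesses that s is equivalent to a product
    -- (a, q), for some residual type q
    data Lens s a = forall q. Lens (s -> (a, q)) ((a, q) -> s)

    -- a prism on s witnesses that s is equivalent to a sum
    -- Either a q, for some residual type q
    data Prism s a = forall q. Prism (s -> Either a q) (Either a q -> s)

    -- a pair literally is a product, so its first field has a lens
    fstLens :: Lens (a, b) a
    fstLens = Lens id id

    -- Either literally is a sum, so its Left branch has a prism
    leftPrism :: Prism (Either a b) a
    leftPrism = Prism id id

    -- getting out of a lens always succeeds...
    view :: Lens s a -> s -> a
    view (Lens split _) = fst . split

    -- ...while getting out of a prism can fail, as sums demand
    preview :: Prism s a -> s -> Maybe a
    preview (Prism match _) = either Just (const Nothing) . match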

    The “final code” in this post is available online as a “stack executable” that, when run, will pop you into a ghci session with all of the final definitions in scope, so you can play around with them :)


  • Starting a Patreon

    Short version: I’ve started a Patreon page for those who want to donate or help me sustain my writing in my post-doctoral life! :D

    Long version: Hi all! Hope you can excuse the two non-technical and personal posts in a row :) To give some context on this announcement, here’s some news — I graduated*!

    Graduating* from a Doctoral Program at Chapman University

    *Well, not exactly :) I haven’t finished my dissertation and defended my thesis yet (but expect to in the upcoming months); Chapman University was generous enough to allow me to walk in this past Spring’s ceremony.

    I have a lot of excitement about my future after my defense and official commencement. However, with this excitement comes a lot of uncertainty.

    As a student I have been able to devote time on and off to this blog, which has become a passion of mine. I’ve even managed to find time to live-stream some coding projects occasionally! When I started this blog almost five years ago, I only wanted to write down what I was learning to help myself remember, and to help others going through the same things as me. I never expected the warm reception from the Haskell and programming community. Every once in a while I meet people who tell me personally that an article of mine inspired them to get into functional programming, or was the final push that helped them get over a conceptual hurdle. It’s moments like these that really make everything worth it. And, as a researcher in a field dominated by imperative and dynamically typed programming, watching and encouraging the growth of functional programming has brought me a lot of joy.


  • In Memory of Ertugrul Söylemez (1985 - 2018)

    I was heartbroken to find out recently that Ertugrul Söylemez had passed away suddenly. Many have come forward to express their sadness about his passing, how much it has impacted the Haskell community, and how much of a loss it is for functional programming at large.

    They aren’t wrong; Ertugrul was one of the faces of the friendly, warm, encouraging, patient teaching that the Haskell community has grown to be known for. He was also one of the original pioneers in the implementation and theory of Functional Reactive Programming, and he continued to innovate even through this year. His name is, and forever will be, synonymous with the “pull-based” variant of functional reactive programming. The freenode #haskell channel and the Haskell community at large now have one less friendly face who always enjoyed helping new people learn.

    However, I wanted to put some words down about his personal influence on my Haskell, academic, and FOSS career.

    When I was a new Haskeller, a lot of things confused me. But the passion of people like Ertugrul, who would stay up late into the night helping me understand the concepts I found interesting, was one of the things that really made it worth it.

    One of these conversations led to the creation of my first ever Haskell library, auto. auto is essentially a direct translation of one of our conversations (and somewhat of a derivative of his own library netwire), and throughout the entire implementation process he was open to the many questions I had. Some of the features of the library (including implicit serialization) were directly his innovations put into practice.

    In a slightly different context: as a new PhD student, I was told to follow wherever my curiosity led me. One of those lines led me to “comonadic” image processing, which was directly inspired by this unfinished article of his.


  • A Purely Functional Typed Approach to Trainable Models (Part 3)

    Hi again! Today we’re going to jump straight into tying together the functional framework described in this series and see how it can give us some interesting insight, and then wrap things up by talking about the scaffolding needed to turn this all into a working system you can apply today.

    The name of the game is a purely functional typed approach to writing trainable models using differentiable programming. Be sure to check out Part 1 and Part 2 if you haven’t, because this is a direct continuation.

    My favorite part about this system really is how much free rein we have over how we can combine and manipulate our models, since they are just functions. Combinators (a word I’m going to be using to mean higher-order functions that return functions) tie everything together so well, as the sketch below shows. Some models we might have thought were standalone entities might just be derivable from other models using basic functional combinators. And the best part is that they’re never necessary; just helpful.
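
    To make that concrete, here is a minimal sketch (hypothetical names only, not the actual types from the series) of a model combinator: because models are just functions, chaining two of them is ordinary function composition, with the parameters paired up:

    -- a parameterized model: parameters p turn an input a into a b
    type Model p a b = p -> a -> b

    -- compose two models end-to-end; the combined model's parameter
    -- is simply the pair of the original parameters
    (<~) :: Model p b c -> Model q a b -> Model (p, q) a c
    (f <~ g) (p, q) = f p . g q

    -- an affine model y = m*x + b, with parameters (m, b)
    affine :: Model (Double, Double) Double Double
    affine (m, b) x = m * x + b

    -- two affine layers derive a new model with no extra machinery
    twoLayer :: Model ((Double, Double), (Double, Double)) Double Double
    twoLayer = affine <~ affine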

    Again, if you want to follow along, the source code for this module is available on github.


  • A Purely Functional Typed Approach to Trainable Models (Part 2)

    Welcome back! We’re going to be jumping right back into describing a vision of a purely functional typed approach to writing trainable models using differentiable programming. If you’re just joining us, be sure to check out Part 1 first!

    In the last post, we looked at models as “question and answer” systems. We described them as essentially being functions of type

    \[ f : P \rightarrow (A \rightarrow B) \]

    where, for \(f_p(x) = y\), you have a “question” \(x : A\) and are looking for an “answer” \(y : B\). Picking a different \(p : P\) will give a different \(A \rightarrow B\) function. We claimed that training a model means finding just the right \(p\) to use with the model to yield the right \(A \rightarrow B\) function that models your situation.

    We then noted that if you have a set of \((a, b)\) observations, and your function is differentiable, you can compute the gradient of your model’s error with respect to \(p\) for each observation, which tells you how to nudge a given \(p\) in order to reduce how wrong your model is for that observation. By repeatedly making observations and taking those nudges, you can arrive at a suitable \(p\) to model any situation.
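
    As a minimal sketch of that nudging (hypothetical names; the series itself derives the gradients with the backprop library), a single training step and a fold over observations might look like:

    -- a parameterized model, as described above
    type Model p a b = p -> a -> b

    -- one training step: nudge p against the gradient of the error
    -- on a single observation, scaled by a learning rate
    step :: Num p
         => (p -> (a, b) -> p)  -- gradient of the error wrt p
         -> p                   -- learning rate
         -> p                   -- current parameters
         -> (a, b)              -- one (input, expected output) pair
         -> p
    step gradErr rate p obs = p - rate * gradErr p obs

    -- repeatedly observing and nudging settles on a suitable p
    train :: Num p => (p -> (a, b) -> p) -> p -> p -> [(a, b)] -> p
    train gradErr rate = foldl (step gradErr rate)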

    This is great if we consider a model as “question and answer”, but sometimes things don’t fit so cleanly. Today, we’re going to be looking at a whole different type of model (“time series” models) and see how they are different, but also how they are really the same.


  • A Purely Functional Typed Approach to Trainable Models (Part 1)

    With the release of backprop, I’ve been exploring the space of parameterized models of all sorts, from linear and logistic regression and other statistical models to artificial neural networks, feed-forward and recurrent (stateful). I wanted to see to what extent we can really apply automatic differentiation and iterative gradient descent-based training to all of these different models. Basically, I wanted to see how far we can take differentiable programming (à la Yann LeCun) as a paradigm for writing trainable models.

    Building on the work of other writers, I’m starting to see a picture that unifies all of these models, painted in the language of purely typed functional programming. I’m already applying these ideas to models I use in real life and in my research, and I thought I’d take some time to put my thoughts in writing in case anyone else finds them illuminating or useful.

    As a big picture, I really believe that a purely functional typed approach to differentiable programming is the way forward for models like artificial neural networks. In this light, the drawbacks of object-oriented and imperative approaches become very apparent.

    I’m not the first person to attempt to build a conceptual framework for these types of models in a purely functional typed sense – Christopher Olah wrote a famous piece in 2015 that this post heavily builds on, and it is definitely worth a read! We’ll be taking some of his ideas and seeing how they work in real code!

    This will be a three-part series, and the intended audience is people who have a passing familiarity with statistical modeling or machine learning/deep learning. The code in these posts is written in Haskell, using the backprop and hmatrix (with hmatrix-backprop) libraries, but the main themes and messages won’t be about Haskell; rather, they are about differentiable programming in a purely functional typed setting in general. This isn’t a Haskell post as much as it is an exploration, using Haskell syntax/libraries to implement the points. The backprop library is roughly equivalent to autograd in Python, so all of the ideas apply there as well.
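
    For a tiny taste of what the backprop library offers (a standalone sketch, not code from the series), its gradBP function differentiates an ordinary-looking Haskell function automatically:

    import Numeric.Backprop

    -- gradient of f(x) = x^2 + 3x at x = 5; backprop performs
    -- reverse-mode automatic differentiation, so this prints
    -- f'(5) = 2*5 + 3 = 13.0
    main :: IO ()
    main = print $ gradBP (\x -> x * x + 3 * x) (5 :: Double)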

    The source code for this module is available on github, if you want to follow along!
