### A Purely Functional Typed Approach to Trainable Models (Part 2)

Welcome back! We’re going to be jumping right back into describing a vision of a purely functional typed approach to writing trainable models using differentiable programming. If you’re just joining us, be sure to check out Part 1 first!

In the last post, we looked at models as “question and answer” systems. We described them as essentially being functions of type `f :: p -> (a -> b)`, where, for `f p`, you have a “question” `a` and are looking for an “answer” `b`. Picking a *different* `p` will give a *different* `a -> b` function. We claimed that training a model was finding just the right `p` to use with the model to yield the right `a -> b` function that models your situation.

We then noted that if you have a set of `(a, b)` observations, and your function is differentiable, you can find the *gradient* of `p` with respect to the error of your model on each observation, which tells you how to nudge a given `p` in order to reduce how wrong your model is for that observation. By repeatedly making observations and taking those nudges, you can arrive at a suitable `p` to model any situation.
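To make that train-by-nudging loop concrete, here is a minimal sketch in plain Python (not the typed functional style this series builds toward). The model, its gradient, the learning rate, and the observations are all made-up stand-ins for illustration:

```python
def model(p, a):
    # a toy parameterized model: the "answer" is just p times the "question"
    return p * a

def grad(p, a, b):
    # gradient of the squared error (model(p, a) - b)^2 with respect to p
    return 2 * (model(p, a) - b) * a

def train(p, observations, rate=0.01, epochs=100):
    # repeatedly nudge p against the gradient of the error on each observation
    for _ in range(epochs):
        for a, b in observations:
            p = p - rate * grad(p, a, b)
    return p

# observations drawn from the "true" relationship b = 3 * a
obs = [(a, 3 * a) for a in [1.0, 2.0, 3.0]]
p_trained = train(0.0, obs)  # converges toward p = 3
```

Each nudge moves `p` a little closer to a value that makes the model agree with the observations; with enough passes, `p` settles near the parameter that generated the data.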

This is great if we consider a model as “question and answer”, but sometimes things don’t fit so cleanly. Today, we’re going to be looking at a whole different type of model (“time series” models) and seeing how they are different, but also how they are really the same.