Hi all! If you don’t already know, Advent of Code is in full swing this year! If you’re participating and using Haskell, you’re welcome to join us at glguy’s semi-official Haskell Leaderboard (join code
43100-84040706)! There are also Haskellers on freenode ##adventofcode, and in #adventofcode on the Functional Programming slack. I also wrote a Haskell library for the API, if you’re looking to streamline your process!
My daily reflections are online, where I talk about how I approach each problem and what insight purely typed Functional Programming gives us for each problem.
Every once in a while I’ll find a challenge that I think can be pulled out as a short blog post, so I’ll bring them onto the blog for a more long-term sort of longevity!1
In this one, I’ll talk about using group theory to solve the Day 5 challenge. Spoilers for those who have not solved it yet!
You’ve managed to sneak in to the prototype suit manufacturing lab. The Elves are making decent progress, but are still struggling with the suit’s size reduction capabilities.
While the very latest in 1518 alchemical technology might have solved their problem eventually, you can do better. You scan the chemical composition of the suit’s material and discover that it is formed by extremely long polymers (one of which is available as your puzzle input).
The polymer is formed by smaller units which, when triggered, react with each other such that two adjacent units of the same type and opposite polarity are destroyed. Units’ types are represented by letters; units’ polarity is represented by capitalization. For instance,
For instance, r and R are units with the same type but opposite polarity, whereas r and s are entirely different types and do not react.

For example:

- In aA, a and A react, leaving nothing behind.
- In abBA, bB destroys itself, leaving aA. As above, this then destroys itself, leaving nothing.
- In abAB, no two adjacent units are of the same type, and so nothing happens.
- In aabAAB, even though aa and AA are of the same type, their polarities match, and so nothing happens.
Now, consider a larger example,
dabAcCaCBAcCcaDA  The first 'cC' is removed.
dabAaCBAcCcaDA    This creates 'Aa', which is removed.
dabCBAcCcaDA      Either 'cC' or 'Cc' are removed (the result is the same).
dabCBAcaDA        No further actions can be taken.
After all possible reactions, the resulting polymer contains 10 units.
How many units remain after fully reacting the polymer you scanned?
A fully reacted polymer is an element of the free group generated by the 26 letters of the alphabet. Basically, we can talk about interpreting the string dabAcCaCBAcCcaDA as
d <> a <> b <> A <> c <> C <> a <> C <> B <> etc., where
<> is the group operation (“mappend”, in Haskell-speak) and A stands for “the inverse of a”: each uppercase unit is the inverse of its lowercase counterpart.
inject :: Char -> FreeGroupL Char takes a Char and turns it into the corresponding element of our free group. We can do this by injecting lowercase characters directly as generators, and using invert (from the Group typeclass) on the injected lowercase counterpart to handle uppercase characters.
The question is essentially asking for the length of the Tietze list representation of the final result. We can get this using FG.toList, and so our entire part 1 is just the length of the list we get after folding together all of the injected characters.
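As a self-contained sketch of the same idea (not the free-algebras code from the post; reduce and part1 are illustrative names, using a plain String as the reduced word, with uppercase letters playing the role of inverses):

```haskell
import Data.Char (toLower)

-- A reduced word in the free group, represented as a String:
-- an uppercase letter is the inverse of its lowercase letter.
-- foldr keeps the accumulator fully reduced at every step, so
-- the result is the free-group normal form of the input.
reduce :: String -> String
reduce = foldr step ""
  where
    step x (y:ys)
      | x /= y, toLower x == toLower y = ys  -- x and y cancel (aA or Aa)
    step x ys = x : ys                       -- no reaction: push

-- Part 1: the length of the fully reacted polymer
part1 :: String -> Int
part1 = length . reduce
```

On the worked example, part1 "dabAcCaCBAcCcaDA" gives 10, matching the puzzle’s answer.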
Was this problem actually deliberately written to obscure the fact that we have a group? We can’t say for sure, but it definitely seems to paint an “imperative”, step-by-step picture. Most implementations might scan the list for things to replace, and iterate until there is no more change. The problem is also written in a way that suggests the order in which different parts of the string are reduced might matter (is it left to right, or right to left?).
However, we know that the order of reduction can’t matter, because we have a group operation <>, which we know from group theory is, by definition, associative. Knowing that we have a group, we can immediately throw away any thought of caring about sequential, imperative order, and the complications that might come from it.
I think that these useful properties about the reduction process are not obvious from an initial reading. Being able to recognize that we have a group is the key to unlocking all of these insights, for free!
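To see that order-independence concretely, here is a small sketch (illustrative names, not the post’s code) that reduces the polymer both right-to-left and left-to-right; associativity guarantees both scans land on the same normal form:

```haskell
import Data.Char (toLower)

-- right-to-left reduction: foldr keeps its accumulator fully reduced
reduce :: String -> String
reduce = foldr step ""
  where
    step x (y:ys) | x /= y, toLower x == toLower y = ys
    step x ys     = x : ys

-- left-to-right reduction, scanning with an explicit stack;
-- the stack is built in reverse, so we reverse it at the end
reduceL :: String -> String
reduceL = reverse . foldl step ""
  where
    step (y:ys) x | x /= y, toLower x == toLower y = ys
    step ys x     = x : ys
```

Both reduce "dabAcCaCBAcCcaDA" and reduceL "dabAcCaCBAcCcaDA" give "dabCBAcaDA", even though they consume the string from opposite ends.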
Time to improve the polymer.
One of the unit types is causing problems; it’s preventing the polymer from collapsing as much as it should. Your goal is to figure out which unit type is causing the most problems, remove all instances of it (regardless of polarity), fully react the remaining polymer, and measure its length.
For example, again using the polymer dabAcCaCBAcCcaDA:
- Removing all A/a units produces dbcCCBcCcD. Fully reacting this polymer produces dbCBcD, which has length 6.
- Removing all B/b units produces daAcCaCAcCcaDA. Fully reacting this polymer produces daCAcaDA, which has length 8.
- Removing all C/c units produces dabAaBAaDA. Fully reacting this polymer produces daDA, which has length 4.
- Removing all D/d units produces abAcCaCBAcCcaA. Fully reacting this polymer produces abCBAc, which has length 6.
In this example, removing all C/c units was best, producing the answer 4.
What is the length of the shortest polymer you can produce by removing all units of exactly one type and fully reacting the result?
Even though the problem again seems to be written to obscure this fact, we can see that Part 2 is basically describing a group homomorphism: a function that maps the free group on the 26 letters of the alphabet to the free group on the 25 letters excluding some “cleaned” letter.
Luckily, the free group comes equipped with a handy function, foldMapFree, to create group homomorphisms “for free”.
That is, given any function a -> b where b is a group, we get a guaranteed group homomorphism from FreeGroupL a to b. We can write a function on just the generators of our group (in this case, Char), and foldMapFree will extend it to a group homomorphism on the free group over those generators.
We can use this to write our group homomorphism clean, which sends the cleaned generator to the identity and leaves every other generator unchanged.
(Note that this type signature looks a lot like
=<<, for monads. Coincidence?)
We can now create a “cleaned” version of our reaction by applying clean to our aggregated result.
And so that’s part 2! We just need clean and one final step: finding the minimum of all of the possible “cleaned” lengths, over each of the 26 letters we might clean.
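Putting it together, a self-contained sketch of part 2 (illustrative names on a plain-String representation of reduced words, rather than the post’s FreeGroupL code):

```haskell
import Data.Char (toLower)

-- cancel adjacent opposite-polarity pairs, giving the normal form
reduce :: String -> String
reduce = foldr step ""
  where
    step x (y:ys) | x /= y, toLower x == toLower y = ys
    step x ys     = x : ys

-- clean c: the homomorphism sending the generator c to the identity;
-- on reduced words, that means dropping every c/C and re-reducing
clean :: Char -> String -> String
clean c = reduce . filter ((/= c) . toLower)

-- Part 2: aggregate once, then try cleaning each of the 26 letters
part2 :: String -> Int
part2 str = minimum [ length (clean c w) | c <- ['a' .. 'z'] ]
  where
    w = reduce str
```

Note that part2 reduces the full input only once, and then cleans the already-aggregated word 26 times; the homomorphism property discussed next is what makes this safe.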
The best part about this, I think, is that we actually introduced a large optimization, completely by accident, thanks to group theory.
Because we recognize that this is a group homomorphism, we know the properties of group homomorphisms apply. Namely, the most important one: “Aggregating, then cleaning” is the same as “cleaning, then aggregating”.
That’s because any group homomorphism f necessarily obeys the law f (x <> y) = f x <> f y.
This means that we are free to either “clean first, then aggregate”, or “aggregate first, then clean”.
What’s in a Group?
Now, I don’t know about you, but I feel that this choice (clean-then-aggregate vs. aggregate-then-clean) is definitely not obvious just from reading the problem. Indeed, the problem seems almost written to obscure this choice from us: it implies that “cleaning, then reacting” is the only correct way, and “reacting, then cleaning” is never even mentioned.

But, thanks to group theory, we know that these are equivalent, so we can substitute whichever version is more efficient!
This is, I believe, at the heart of what people say is the advantage of “using monoids”, “using monads”, “using functors”, etc. in Haskell. That’s because if we state our programs in terms of monoids, monads, groups, functors, etc., then we get the entire body of group theory (or monad theory, or functor theory, etc.) to help us make program reductions that aren’t immediately obviously legal but that have already been proven to be equivalent by mathematicians. We hijack their work!
We get program optimizations and reductions and substitutions for free, by “stealing” from the large body of such things that mathematicians have spent centuries collecting.
Epilogue: A Note on Cleaning
Since I posted this, I have gotten a few comments on different platforms debating whether or not “clean then aggregate” is truly the same as “aggregate then clean”.
Just to clarify,
clean c here does not mean “remove
c”. It means “re-interpret our group element with the new property that
c is the identity element”. Clean is like a “re-reaction”. To draw the chemical analogy, it can be thought of as a redox reaction that doesn’t “remove” atoms, but rather “rewrites the rules of the polymer”. The identity is that “aggregate the polymer, then rewrite its rules” is the same as “rewrite the rules, then aggregate the polymer”.
Clean is not a “removal”. It’s “re-interpretation”.
To be more clear, if we have a function cleanB which cleans out the letter b, we can take abAcdD as an example. Aggregating it gives a <> b <> A <> c (the d <> D at the end cancels). Cleaning that aggregation re-interprets b as the identity, leaving a <> A <> c, which reduces to just c. That’s our “aggregate, then clean”. Our “clean, then aggregate” would be to re-interpret b as the identity first, giving a <> A <> c <> d <> D, which also aggregates down to just c.
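As a small self-contained sketch of that check (illustrative names, with plain Strings standing in for reduced group words), both orders land on the same one-letter result:

```haskell
import Data.Char (toLower)

-- reduce a polymer: cancel adjacent opposite-polarity pairs (aA, Aa, bB, ...)
reduce :: String -> String
reduce = foldr step ""
  where
    step x (y:ys) | x /= y, toLower x == toLower y = ys
    step x ys     = x : ys

-- re-interpret b as the identity element: drop every b/B, then re-reduce
cleanB :: String -> String
cleanB = reduce . filter ((/= 'b') . toLower)

-- "aggregate, then clean": reduce gives "abAc"; cleaning gives "c"
aggThenClean :: String
aggThenClean = cleanB (reduce "abAcdD")

-- "clean, then aggregate": dropping b/B gives "aAcdD"; reducing gives "c"
cleanThenAgg :: String
cleanThenAgg = cleanB "abAcdD"
```

Both aggThenClean and cleanThenAgg evaluate to "c", exactly as the homomorphism law demands.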
The library free-algebras gives us foldMapFree, which creates group homomorphisms by construction. That means that whatever function f we get out of foldMapFree, it is mathematically proven that f (x <> y) = f x <> f y. Or, that “aggregate then clean” is the same as “clean then aggregate”.
This is because the implementation of
foldMapFree (and therefore
cleanB) is not to simply “remove”
b, but rather to move us into a new group where
b is the identity. This completely re-interprets the structure of the polymer, and the implementation is careful to do this properly so that it does commute.
These short posts won’t be counted as “paid” Patreon posts.↩