Dynamic programming sounds very simple conceptually, but can quickly get complex.
The basic idea is “memoization”: storing previously computed values in memory. Under certain circumstances, you need to keep track of values you have already calculated. Dynamic programming is frequently useful as a second layer on top of recursive programming: if our recursive calls are repeating the same work, we can instead maintain a cache and eliminate the need for duplicate work.
As is tradition, we will start with a recursive implementation of fibonacci.
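A minimal sketch in Python, assuming the usual convention that the 0th and 1st fibonacci numbers are 0 and 1:

```python
def fib(count):
    """Naive recursive fibonacci: fib(0) = 0, fib(1) = 1."""
    if count < 2:
        return count
    # Each call spawns two more recursive calls.
    return fib(count - 1) + fib(count - 2)
```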
This simple example will completely balloon out of control as `count` increases. Look at the tree of recursive calls for even a small input: each node has two children, and then each of those children will also have two children. The runtime for this method is a whopping O(2^n).
Now let’s implement fibonacci, but whenever we calculate a value, we save it to memory.
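In Python, that might look something like the sketch below (the `memo` dictionary and the names here are illustrative):

```python
def fib_memo(count, memo=None):
    """Fibonacci, saving every computed value to memory."""
    if memo is None:
        memo = {}
    if count in memo:
        # We have already done this work once; reuse it.
        return memo[count]
    if count < 2:
        result = count
    else:
        result = fib_memo(count - 1, memo) + fib_memo(count - 2, memo)
    # Save the value so it only ever has to be calculated once.
    memo[count] = result
    return result
```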
This function could be less than six lines, but hopefully you will find it more understandable this way. If we look at the call tree above, it is cut down quite a bit: we only have to calculate a value the first time it comes up.
This brings our time complexity down to a handy O(n), because we only have to calculate once for each value.
Top-down or Bottom-up
If you look at the above code for fibonacci, it is a pretty classic recursive problem: each function call breaks the problem down until we hit a base case. This is frequently called top-down. Conceptually, we start from the top and work our way down the tree. However, there is another way we could think about this. Unlike normal divide-and-conquer recursive problems, we are not actually solving the entire tree. Since we know what section of the tree we are solving, we could take a bottom-up approach and start from 0.
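A sketch of that bottom-up version (the variable names are just illustrative):

```python
def fib_bottom_up(count):
    """Iterative fibonacci: start from 0 and build up to count."""
    if count < 2:
        return count
    previous, current = 0, 1  # fib(0), fib(1)
    for _ in range(count - 1):
        # Each step only needs the previous two values.
        previous, current = current, previous + current
    return current
```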
This iterative approach is acceptable in this case. Although our original idea of fibonacci was as a tree, in the end we only truly traverse a straight line, which allows for an iterative approach. Note that sometimes an iterative approach is not possible.
Figuring out whether to go top-down or bottom-up, and how to implement each of those, is where the complexity of dynamic programming lies.
A Harder Problem
Let's try the longest increasing subsequence problem: given an array of integers, find the length of the longest subsequence whose values are strictly increasing. Take the classic example [10, 9, 2, 5, 3, 7, 101, 18]: the answer is 4, from the subsequence [2, 3, 7, 101]. Note that [2, 3, 7, 18] is also a valid best-subsequence.
Here I will start by taking a bottom-up approach. I will build out a list of the best subsequences that end at each index. It will look something like this:
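For the example array above, that list of best subsequences would be:

```
index 0: [10]
index 1: [9]
index 2: [2]
index 3: [2, 5]
index 4: [2, 3]
index 5: [2, 3, 7]
index 6: [2, 3, 7, 101]
index 7: [2, 3, 7, 18]
```

(Ties are possible; for instance, [2, 5, 7] would work just as well at index 5.)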
If I have a list of optimal subsequences for each previous index, I can easily find the optimal subsequence for the current index. I just have to find the longest subsequence that I can append the current value to.
This is the key to dynamic programming: you build the cache you refer to while simultaneously solving the problem.
Now I will write out a solution. There are a few things to keep in mind.
- For each index, I want to keep track of the longest subsequence at that index.
- I will build the subsequence for the next index based on the previous ones.
In my solution, I will simplify the arrays a bit. Instead of keeping track of an entire subsequence for each index, I will just keep track of its last value (and the length the subsequence would have). Remember that the problem only cares about the length of the longest increasing subsequence, not the sequence itself.
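Here is a sketch of that solution in Python (the names `nums`, `last_values`, and `lengths` are illustrative):

```python
def longest_increasing_subsequence(nums):
    """Length of the longest strictly increasing subsequence, O(n^2)."""
    if not nums:
        return 0
    # last_values[i]: the last value of the best subsequence ending at i.
    # (This is always nums[i]; it gets its own list to mirror the idea above.)
    last_values = list(nums)
    # lengths[i]: the length that subsequence would have.
    lengths = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            # We can append nums[i] to the subsequence ending at j
            # if that subsequence's last value is smaller.
            if last_values[j] < nums[i] and lengths[j] + 1 > lengths[i]:
                lengths[i] = lengths[j] + 1
    return max(lengths)
```

For the example array, `longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18])` returns 4.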
The solution time complexity is O(n^2), because of our two nested `for` loops. The space complexity is O(n): `last_values` and `lengths` are both the same length as the input array.
Note that this can also be solved as a recursive, top-down algorithm (and it can be solved slightly faster, in O(n log n), with a more complicated sort-and-search technique). If you decide to try it, it will help you conceptualize the difference between top-down and bottom-up dynamic algorithm design.
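As a starting point, a memoized top-down version might look something like this sketch (using `functools.lru_cache` as the cache):

```python
from functools import lru_cache

def lis_top_down(nums):
    """Recursive, memoized longest increasing subsequence."""
    @lru_cache(maxsize=None)
    def best_ending_at(i):
        # Longest increasing subsequence that ends exactly at index i.
        return 1 + max(
            (best_ending_at(j) for j in range(i) if nums[j] < nums[i]),
            default=0,
        )

    return max((best_ending_at(i) for i in range(len(nums))), default=0)
```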
Originally published at elliott-king.github.io.