\chapter{Dynamic programming}

\index{dynamic programming}

\key{Dynamic programming} is a technique that combines the correctness of complete search and the efficiency of greedy algorithms. Dynamic programming can be applied if the problem can be divided into overlapping subproblems that can be solved independently.

There are two uses for dynamic programming:
\begin{itemize}
\item \key{Finding an optimal solution}: We want to find a solution that is as large as possible or as small as possible.
\item \key{Counting the number of solutions}: We want to calculate the total number of possible solutions.
\end{itemize}

We will first see how dynamic programming can be used for finding an optimal solution, and then we will use the same idea for counting the solutions.

Understanding dynamic programming is a milestone in every competitive programmer's career. While the basic idea of the technique is simple, the challenge is how to apply it to different problems. This chapter introduces a set of classic problems that are a good starting point.

\section{Coin problem}

We first discuss a problem that we have already seen in Chapter 6: Given a set of coin values $\{c_1,c_2,\ldots,c_k\}$ and a sum of money $x$, our task is to form the sum $x$ using as few coins as possible.

In Chapter 6, we solved the problem using a greedy algorithm that always selects the largest possible coin. The greedy algorithm works, for example, when the coins are euro coins, but in the general case the greedy algorithm does not necessarily produce an optimal solution.

Now it is time to solve the problem efficiently using dynamic programming, so that the algorithm works for any coin set. The dynamic programming algorithm is based on a recursive function that goes through all possibilities of how to form the sum, like a brute force algorithm. However, the dynamic programming algorithm is efficient because it uses \emph{memoization} to calculate the answer to each subproblem only once.
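To see concretely why the greedy algorithm can fail, consider the coin values $\{1,3,4\}$ and the sum $x=6$: the greedy algorithm selects the coins $4+1+1$, while the optimal solution $3+3$ uses only two coins. The following sketch compares the two approaches (the function names and the hard-coded coin set are chosen here for illustration only; the optimal value is found by a simple complete search):

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

// Greedy: always select the largest coin that still fits.
int greedyCoins(int x) {
    int count = 0;
    for (int c : {4, 3, 1}) {
        while (x >= c) { x -= c; count++; }
    }
    return count;
}

// Complete search: try every coin as the first coin of the solution.
int bestCoins(int x) {
    if (x == 0) return 0;
    int u = 1e9;
    for (int c : {1, 3, 4}) {
        if (x >= c) u = min(u, bestCoins(x - c) + 1);
    }
    return u;
}

int main() {
    cout << greedyCoins(6) << "\n"; // 3 coins: 4+1+1
    cout << bestCoins(6) << "\n";   // 2 coins: 3+3
}
\end{lstlisting}

The complete search is correct but exponential; the rest of this section shows how memoization turns the same recursion into an efficient algorithm.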
\subsubsection{Recursive formulation}

The idea in dynamic programming is to formulate the problem recursively so that the answer to the problem can be calculated from the answers for smaller subproblems. In the coin problem, a natural recursive problem is as follows: what is the smallest number of coins required for constructing sum $x$?

Let $f(x)$ be a function that gives the answer to the problem, i.e., $f(x)$ is the smallest number of coins required for constructing sum $x$. The values of the function depend on the values of the coins. For example, if the coin values are $\{1,3,4\}$, the first values of the function are as follows:
\[
\begin{array}{lcl}
f(0) & = & 0 \\
f(1) & = & 1 \\
f(2) & = & 2 \\
f(3) & = & 1 \\
f(4) & = & 1 \\
f(5) & = & 2 \\
f(6) & = & 2 \\
f(7) & = & 2 \\
f(8) & = & 2 \\
f(9) & = & 3 \\
f(10) & = & 3 \\
\end{array}
\]

First, $f(0)=0$ because no coins are needed for the sum $0$. Moreover, $f(3)=1$ because the sum $3$ can be formed using coin 3, and $f(5)=2$ because the sum 5 can be formed using coins 1 and 4.

The essential property of the function is that the value $f(x)$ can be calculated recursively from the smaller values of the function. For example, if the coin set is $\{1,3,4\}$, there are three ways to select the first coin in a solution: we can choose coin 1, 3 or 4. If coin 1 is chosen, the remaining task is to form the sum $x-1$. Similarly, if coin 3 or 4 is chosen, we should form the sum $x-3$ or $x-4$. Thus, the recursive formula is
\[f(x) = \min(f(x-1),f(x-3),f(x-4))+1,\]
where the function $\min$ returns the smallest of its parameters.

In the general case, for the coin set $\{c_1,c_2,\ldots,c_k\}$, the recursive formula is
\[f(x) = \min(f(x-c_1),f(x-c_2),\ldots,f(x-c_k))+1.\]
The base case for the function is
\[f(0)=0,\]
because no coins are needed for constructing the sum 0.
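As a quick check of the recursive formula, the value $f(5)$ for the coin set $\{1,3,4\}$ can be calculated from the smaller values listed above:
\[ f(5) = \min(f(4),f(2),f(1))+1 = \min(1,2,1)+1 = 2, \]
which agrees with the table.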
In addition, it is convenient to define
\[f(x)=\infty\hspace{8px}\textrm{if $x<0$}.\]
This means that an infinite number of coins is needed for forming a negative sum of money. This prevents the recursive function from forming a solution where the remaining sum of money becomes negative.

Once a recursive function that solves the problem has been found, we can directly implement a solution in C++:
\begin{lstlisting}
int f(int x) {
    if (x == 0) return 0;
    if (x < 0) return 1e9;
    int u = 1e9;
    for (int i = 1; i <= k; i++) {
        u = min(u, f(x-c[i])+1);
    }
    return u;
}
\end{lstlisting}
The code assumes that the available coins are $\texttt{c}[1], \texttt{c}[2], \ldots, \texttt{c}[k]$, and the value $10^9$ denotes infinity. This function works, but it is not efficient yet, because it goes through a large number of ways to construct the sum. However, the function becomes efficient by using memoization.

\subsubsection{Memoization}

\index{memoization}

Dynamic programming allows us to calculate the value of a recursive function efficiently using \key{memoization}. This means that an auxiliary array is used for storing the values of the function for different parameters. For each parameter, the value of the function is calculated recursively only once, and after this, the value can be directly retrieved from the array.

In this problem, we can use the array
\begin{lstlisting}
int d[N];
\end{lstlisting}
where $\texttt{d}[x]$ will contain the value $f(x)$. The constant $N$ should be chosen so that all required values of the function fit in the array.

After this, the function can be efficiently implemented as follows:
\begin{lstlisting}
int f(int x) {
    if (x == 0) return 0;
    if (x < 0) return 1e9;
    if (d[x]) return d[x];
    int u = 1e9;
    for (int i = 1; i <= k; i++) {
        u = min(u, f(x-c[i])+1);
    }
    d[x] = u;
    return d[x];
}
\end{lstlisting}
The function handles the base cases $x=0$ and $x<0$ as previously.
Then the function checks if $f(x)$ has already been calculated and stored in $\texttt{d}[x]$. If $f(x)$ is found in the array, the function directly returns it. Otherwise the function calculates the value recursively and stores it in $\texttt{d}[x]$. Note that the check $\texttt{d}[x] \neq 0$ works here because $f(x)>0$ for all $x>0$, so a zero in the array (which is initialized with zeros) means that the value has not been calculated yet.

Using memoization the function works efficiently, because the answer for each $x$ is calculated recursively only once. After a value of $f(x)$ has been stored in the array, it can be efficiently retrieved whenever the function is called again with parameter $x$.

The time complexity of the resulting algorithm is $O(xk)$, where the sum is $x$ and the number of coins is $k$. In practice, the algorithm is usable if $x$ is so small that it is possible to allocate an array for all possible function parameters.

Note that the array can also be constructed using a loop that calculates all the values instead of a recursive function:
\begin{lstlisting}
d[0] = 0;
for (int i = 1; i <= x; i++) {
    int u = 1e9;
    for (int j = 1; j <= k; j++) {
        if (i-c[j] < 0) continue;
        u = min(u, d[i-c[j]]+1);
    }
    d[i] = u;
}
\end{lstlisting}
This implementation is shorter and somewhat more efficient than recursion, and experienced competitive programmers often prefer dynamic programming solutions that are implemented using loops. Still, the underlying idea is the same as in the recursive function.

\subsubsection{Constructing the solution}

Sometimes we are asked both to find the value of an optimal solution and to give an example of how such a solution can be constructed. In the coin problem, this means that the algorithm should show how to select the coins that produce the sum $x$ using as few coins as possible.

We can construct the solution by adding another array to the code. The array indicates for each sum of money the first coin that should be chosen in an optimal solution.
In the following code, the array \texttt{e} is used for this:
\begin{lstlisting}
d[0] = 0;
for (int i = 1; i <= x; i++) {
    d[i] = 1e9;
    for (int j = 1; j <= k; j++) {
        if (i-c[j] < 0) continue;
        int u = d[i-c[j]]+1;
        if (u < d[i]) {
            d[i] = u;
            e[i] = c[j];
        }
    }
}
\end{lstlisting}
After this, we can print the coins needed for the sum $x$ as follows:
\begin{lstlisting}
while (x > 0) {
    cout << e[x] << "\n";
    x -= e[x];
}
\end{lstlisting}

\subsubsection{Counting the number of solutions}

Let us now consider a variation of the problem that is otherwise like the original problem, but we should count the total number of solutions instead of finding the optimal solution. For example, if the coins are $\{1,3,4\}$ and the target sum is $5$, there are a total of 6 solutions:

\begin{multicols}{2}
\begin{itemize}
\item $1+1+1+1+1$
\item $1+1+3$
\item $1+3+1$
\item $3+1+1$
\item $1+4$
\item $4+1$
\end{itemize}
\end{multicols}

The number of solutions can be calculated using the same idea as when finding the optimal solution. The difference is that when finding the optimal solution, we maximize or minimize something in the recursion, but now we sum numbers of solutions. In the coin problem, we can define a function $f(x)$ that returns the number of ways to construct the sum $x$ using the coins. For example, $f(5)=6$ when the coins are $\{1,3,4\}$. The value of $f(x)$ can be calculated recursively using the formula
\[ f(x) = f(x-c_1)+f(x-c_2)+\cdots+f(x-c_k),\]
because to form the sum $x$, we have to first choose some coin $c_i$ and then form the sum $x-c_i$. The base cases are $f(0)=1$, because there is exactly one way to form the sum 0 using an empty set of coins, and $f(x)=0$ when $x<0$, because it is not possible to form a negative sum of money.
If the coin set is $\{1,3,4\}$, the function is
\[ f(x) = f(x-1)+f(x-3)+f(x-4) \]
and the first values of the function are:
\[
\begin{array}{lcl}
f(0) & = & 1 \\
f(1) & = & 1 \\
f(2) & = & 1 \\
f(3) & = & 2 \\
f(4) & = & 4 \\
f(5) & = & 6 \\
f(6) & = & 9 \\
f(7) & = & 15 \\
f(8) & = & 25 \\
f(9) & = & 40 \\
\end{array}
\]

The following code calculates the value of $f(x)$ using dynamic programming by filling the array \texttt{d} for parameters $0 \ldots x$:
\begin{lstlisting}
d[0] = 1;
for (int i = 1; i <= x; i++) {
    for (int j = 1; j <= k; j++) {
        if (i-c[j] < 0) continue;
        d[i] += d[i-c[j]];
    }
}
\end{lstlisting}

Often the number of solutions is so large that it is not required to calculate the exact number, but it is enough to give the answer modulo $m$ where, for example, $m=10^9+7$. This can be done by changing the code so that all calculations are done modulo $m$. In the above code, it is enough to add the line
\begin{lstlisting}
        d[i] %= m;
\end{lstlisting}
after the line
\begin{lstlisting}
        d[i] += d[i-c[j]];
\end{lstlisting}

Now we have covered all basic techniques related to dynamic programming. Since dynamic programming can be used in many different situations, we will now go through a set of problems that show further examples of the possibilities of dynamic programming.

\section{Longest increasing subsequence}

\index{longest increasing subsequence}

Given an array that contains $n$ numbers $x_1,x_2,\ldots,x_n$, our task is to find the \key{longest increasing subsequence} in the array. This is a sequence of array elements that goes from left to right, and each element in the sequence is larger than the previous element.
For example, in the array \begin{center} \begin{tikzpicture}[scale=0.7] \draw (0,0) grid (8,1); \node at (0.5,0.5) {$6$}; \node at (1.5,0.5) {$2$}; \node at (2.5,0.5) {$5$}; \node at (3.5,0.5) {$1$}; \node at (4.5,0.5) {$7$}; \node at (5.5,0.5) {$4$}; \node at (6.5,0.5) {$8$}; \node at (7.5,0.5) {$3$}; \footnotesize \node at (0.5,1.4) {$1$}; \node at (1.5,1.4) {$2$}; \node at (2.5,1.4) {$3$}; \node at (3.5,1.4) {$4$}; \node at (4.5,1.4) {$5$}; \node at (5.5,1.4) {$6$}; \node at (6.5,1.4) {$7$}; \node at (7.5,1.4) {$8$}; \end{tikzpicture} \end{center} the longest increasing subsequence contains 4 elements: \begin{center} \begin{tikzpicture}[scale=0.7] \fill[color=lightgray] (1,0) rectangle (2,1); \fill[color=lightgray] (2,0) rectangle (3,1); \fill[color=lightgray] (4,0) rectangle (5,1); \fill[color=lightgray] (6,0) rectangle (7,1); \draw (0,0) grid (8,1); \node at (0.5,0.5) {$6$}; \node at (1.5,0.5) {$2$}; \node at (2.5,0.5) {$5$}; \node at (3.5,0.5) {$1$}; \node at (4.5,0.5) {$7$}; \node at (5.5,0.5) {$4$}; \node at (6.5,0.5) {$8$}; \node at (7.5,0.5) {$3$}; \draw[thick,->] (1.5,-0.25) .. controls (1.75,-1.00) and (2.25,-1.00) .. (2.4,-0.25); \draw[thick,->] (2.6,-0.25) .. controls (3.0,-1.00) and (4.0,-1.00) .. (4.4,-0.25); \draw[thick,->] (4.6,-0.25) .. controls (5.0,-1.00) and (6.0,-1.00) .. (6.5,-0.25); \footnotesize \node at (0.5,1.4) {$1$}; \node at (1.5,1.4) {$2$}; \node at (2.5,1.4) {$3$}; \node at (3.5,1.4) {$4$}; \node at (4.5,1.4) {$5$}; \node at (5.5,1.4) {$6$}; \node at (6.5,1.4) {$7$}; \node at (7.5,1.4) {$8$}; \end{tikzpicture} \end{center} Let $f(k)$ be the length of the longest increasing subsequence that ends at position $k$. Using this function, the answer to the problem is the largest of the values $f(1),f(2),\ldots,f(n)$. 
For example, in the above array the values of the function are as follows:
\[
\begin{array}{lcl}
f(1) & = & 1 \\
f(2) & = & 1 \\
f(3) & = & 2 \\
f(4) & = & 1 \\
f(5) & = & 3 \\
f(6) & = & 2 \\
f(7) & = & 4 \\
f(8) & = & 2 \\
\end{array}
\]

When calculating the value of $f(k)$, there are two possibilities for how the subsequence that ends at position $k$ is constructed:
\begin{enumerate}
\item The subsequence only contains the element $x_k$. In this case $f(k)=1$.
\item The subsequence is constructed by adding the element $x_k$ to a subsequence that ends at position $i$, where $i<k$ and $x_i<x_k$. In this case $f(k)=f(i)+1$.
\end{enumerate}
Thus, to calculate $f(k)$, we can go through all positions $i<k$ with $x_i<x_k$ and choose the position that maximizes $f(i)$, which yields an $O(n^2)$ time algorithm.

\section{Edit distance}

\index{edit distance}

The \key{edit distance}, or \key{Levenshtein distance}, between two strings is the minimum number of editing operations needed to transform one string into the other. The allowed operations are inserting a character, removing a character, and modifying a character. Let $\textrm{distance}(a,b)$ denote the edit distance between the first $a$ characters of the first string and the first $b$ characters of the second string. The values of the function can be calculated recursively using the formula
\[
\textrm{distance}(a,b) = \min(\textrm{distance}(a,b-1)+1,
\textrm{distance}(a-1,b)+1,
\textrm{distance}(a-1,b-1)+\textrm{cost}(a,b)),
\]
where $\textrm{cost}(a,b)=0$ if the characters at positions $a$ and $b$ are equal, and otherwise $\textrm{cost}(a,b)=1$.

The last characters of \texttt{LOVE} and \texttt{MOVIE} are equal, so the edit distance between them equals the edit distance between \texttt{LOV} and \texttt{MOVI}. We can use one editing operation to remove the character \texttt{I} from \texttt{MOVI}. Thus, the edit distance is one larger than the edit distance between \texttt{LOV} and \texttt{MOV}, etc.

\section{Counting tilings}

Sometimes the states in a dynamic programming solution are more complex than fixed combinations of numbers. As an example, we consider the problem of calculating the number of distinct ways to fill an $n \times m$ grid using $1 \times 2$ and $2 \times 1$ size tiles.
For example, one valid solution for the $4 \times 7$ grid is
\begin{center}
\begin{tikzpicture}[scale=.65]
\draw (0,0) grid (7,4);
\draw[fill=gray] (0+0.2,0+0.2) rectangle (2-0.2,1-0.2);
\draw[fill=gray] (2+0.2,0+0.2) rectangle (4-0.2,1-0.2);
\draw[fill=gray] (4+0.2,0+0.2) rectangle (6-0.2,1-0.2);
\draw[fill=gray] (0+0.2,1+0.2) rectangle (2-0.2,2-0.2);
\draw[fill=gray] (2+0.2,1+0.2) rectangle (4-0.2,2-0.2);
\draw[fill=gray] (1+0.2,2+0.2) rectangle (3-0.2,3-0.2);
\draw[fill=gray] (1+0.2,3+0.2) rectangle (3-0.2,4-0.2);
\draw[fill=gray] (4+0.2,3+0.2) rectangle (6-0.2,4-0.2);
\draw[fill=gray] (0+0.2,2+0.2) rectangle (1-0.2,4-0.2);
\draw[fill=gray] (3+0.2,2+0.2) rectangle (4-0.2,4-0.2);
\draw[fill=gray] (6+0.2,2+0.2) rectangle (7-0.2,4-0.2);
\draw[fill=gray] (4+0.2,1+0.2) rectangle (5-0.2,3-0.2);
\draw[fill=gray] (5+0.2,1+0.2) rectangle (6-0.2,3-0.2);
\draw[fill=gray] (6+0.2,0+0.2) rectangle (7-0.2,2-0.2);
\end{tikzpicture}
\end{center}
and the total number of solutions is 781.

The problem can be solved using dynamic programming by going through the grid row by row. Each row in a solution can be represented as a string that contains $m$ characters from the set $\{\sqcap, \sqcup, \sqsubset, \sqsupset \}$. For example, the above solution consists of four rows that correspond to the following strings:
\begin{itemize}
\item $\sqcap \sqsubset \sqsupset \sqcap \sqsubset \sqsupset \sqcap$
\item $\sqcup \sqsubset \sqsupset \sqcup \sqcap \sqcap \sqcup$
\item $\sqsubset \sqsupset \sqsubset \sqsupset \sqcup \sqcup \sqcap$
\item $\sqsubset \sqsupset \sqsubset \sqsupset \sqsubset \sqsupset \sqcup$
\end{itemize}

Let $f(k,x)$ denote the number of ways to construct a solution for rows $1 \ldots k$ of the grid so that string $x$ corresponds to row $k$. It is possible to use dynamic programming here, because the state of a row is constrained only by the state of the previous row.
A solution is valid if row $1$ does not contain the character $\sqcup$, row $n$ does not contain the character $\sqcap$, and all consecutive rows are \emph{compatible}. For example, the rows $\sqcup \sqsubset \sqsupset \sqcup \sqcap \sqcap \sqcup$ and $\sqsubset \sqsupset \sqsubset \sqsupset \sqcup \sqcup \sqcap$ are compatible, while the rows $\sqcap \sqsubset \sqsupset \sqcap \sqsubset \sqsupset \sqcap$ and $\sqsubset \sqsupset \sqsubset \sqsupset \sqsubset \sqsupset \sqcup$ are not compatible.

Since a row consists of $m$ characters and there are four choices for each character, the number of distinct rows is at most $4^m$. Thus, the time complexity of the solution is $O(n 4^{2m})$, because we can go through the $O(4^m)$ possible states for each row, and for each state, there are $O(4^m)$ possible states for the previous row. In practice, it is a good idea to rotate the grid so that the shorter side has length $m$, because the factor $4^{2m}$ dominates the time complexity.

It is possible to make the solution more efficient by using a more compact representation for the rows. It turns out that it is sufficient to know which columns of the previous row contain the first square of a vertical tile. Thus, we can represent a row using only the characters $\sqcap$ and $\Box$, where $\Box$ stands for any of the characters $\sqcup$, $\sqsubset$ and $\sqsupset$. Using this representation, there are only $2^m$ distinct rows and the time complexity is $O(n 2^{2m})$.

As a final note, there is also a surprising direct formula for calculating the number of tilings:
\[ \prod_{a=1}^{\lceil n/2 \rceil} \prod_{b=1}^{\lceil m/2 \rceil} 4 \cdot \left( \cos^2 \frac{\pi a}{n + 1} + \cos^2 \frac{\pi b}{m+1} \right).\]
This formula is very efficient, because it calculates the number of tilings in $O(nm)$ time, but since the answer is a product of real numbers, a practical problem in using the formula is how to store the intermediate results accurately.
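The direct formula can also be evaluated in code. The following sketch (the function name and test dimensions are chosen here for illustration; the product is accumulated in double precision and rounded at the end) checks that the formula indeed yields 781 for the $4 \times 7$ grid:

\begin{lstlisting}
#include <bits/stdc++.h>
using namespace std;

// Evaluate the closed-form product for the number of ways to tile
// an n x m grid with 1 x 2 and 2 x 1 tiles.
long long countTilings(int n, int m) {
    const double PI = acos(-1.0);
    double prod = 1.0;
    for (int a = 1; a <= (n + 1) / 2; a++) {     // a = 1..ceil(n/2)
        for (int b = 1; b <= (m + 1) / 2; b++) { // b = 1..ceil(m/2)
            double ca = cos(PI * a / (n + 1));
            double cb = cos(PI * b / (m + 1));
            prod *= 4.0 * (ca * ca + cb * cb);
        }
    }
    return llround(prod);
}

int main() {
    cout << countTilings(4, 7) << "\n"; // 781, as stated above
}
\end{lstlisting}

For small grids, double precision is sufficient, but for large grids the accumulated rounding error becomes exactly the practical problem mentioned above.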