\chapter{Square root algorithms}

\index{square root algorithm}

A \key{square root algorithm} is an algorithm that has a square root in its time complexity. A square root can be seen as a ``poor man's logarithm'': the complexity $O(\sqrt n)$ is better than $O(n)$ but worse than $O(\log n)$. In any case, many square root algorithms are fast and usable in practice.

As an example, let us consider the problem of creating a data structure that supports two operations on an array: modifying an element at a given position and calculating the sum of elements in a given range. We have previously solved the problem using binary indexed trees and segment trees, which support both operations in $O(\log n)$ time. However, now we will solve the problem in another way using a square root structure that allows us to modify elements in $O(1)$ time and calculate sums in $O(\sqrt n)$ time.

The idea is to divide the array into \emph{blocks} of size $\sqrt n$, and store the sum of the elements inside each block. For example, an array of 16 elements is divided into blocks of 4 elements as follows:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw (0,0) grid (16,1);
\draw (0,1) rectangle (4,2);
\draw (4,1) rectangle (8,2);
\draw (8,1) rectangle (12,2);
\draw (12,1) rectangle (16,2);
\node at (0.5, 0.5) {5};
\node at (1.5, 0.5) {8};
\node at (2.5, 0.5) {6};
\node at (3.5, 0.5) {3};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {7};
\node at (6.5, 0.5) {2};
\node at (7.5, 0.5) {6};
\node at (8.5, 0.5) {7};
\node at (9.5, 0.5) {1};
\node at (10.5, 0.5) {7};
\node at (11.5, 0.5) {5};
\node at (12.5, 0.5) {6};
\node at (13.5, 0.5) {2};
\node at (14.5, 0.5) {3};
\node at (15.5, 0.5) {2};
\node at (2, 1.5) {21};
\node at (6, 1.5) {17};
\node at (10, 1.5) {20};
\node at (14, 1.5) {13};
\end{tikzpicture}
\end{center}

In this structure, it is easy to modify array elements, because we only need to update the sum of a single block after each modification, which can be done in $O(1)$ time. For example, the following picture shows how the value of an element and the sum of the corresponding block change:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (5,0) rectangle (6,1);
\draw (0,0) grid (16,1);
\fill[color=lightgray] (4,1) rectangle (8,2);
\draw (0,1) rectangle (4,2);
\draw (4,1) rectangle (8,2);
\draw (8,1) rectangle (12,2);
\draw (12,1) rectangle (16,2);
\node at (0.5, 0.5) {5};
\node at (1.5, 0.5) {8};
\node at (2.5, 0.5) {6};
\node at (3.5, 0.5) {3};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {5};
\node at (6.5, 0.5) {2};
\node at (7.5, 0.5) {6};
\node at (8.5, 0.5) {7};
\node at (9.5, 0.5) {1};
\node at (10.5, 0.5) {7};
\node at (11.5, 0.5) {5};
\node at (12.5, 0.5) {6};
\node at (13.5, 0.5) {2};
\node at (14.5, 0.5) {3};
\node at (15.5, 0.5) {2};
\node at (2, 1.5) {21};
\node at (6, 1.5) {15};
\node at (10, 1.5) {20};
\node at (14, 1.5) {13};
\end{tikzpicture}
\end{center}

Calculating the sum of elements in a range is a bit more difficult.
It turns out that we can always divide the range into three parts such that the sum consists of values of single elements and sums of blocks between them:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (3,0) rectangle (4,1);
\fill[color=lightgray] (12,0) rectangle (13,1);
\fill[color=lightgray] (13,0) rectangle (14,1);
\draw (0,0) grid (16,1);
\fill[color=lightgray] (4,1) rectangle (8,2);
\fill[color=lightgray] (8,1) rectangle (12,2);
\draw (0,1) rectangle (4,2);
\draw (4,1) rectangle (8,2);
\draw (8,1) rectangle (12,2);
\draw (12,1) rectangle (16,2);
\node at (0.5, 0.5) {5};
\node at (1.5, 0.5) {8};
\node at (2.5, 0.5) {6};
\node at (3.5, 0.5) {3};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {5};
\node at (6.5, 0.5) {2};
\node at (7.5, 0.5) {6};
\node at (8.5, 0.5) {7};
\node at (9.5, 0.5) {1};
\node at (10.5, 0.5) {7};
\node at (11.5, 0.5) {5};
\node at (12.5, 0.5) {6};
\node at (13.5, 0.5) {2};
\node at (14.5, 0.5) {3};
\node at (15.5, 0.5) {2};
\node at (2, 1.5) {21};
\node at (6, 1.5) {15};
\node at (10, 1.5) {20};
\node at (14, 1.5) {13};
\draw [decoration={brace}, decorate, line width=0.5mm] (14,-0.25) -- (3,-0.25);
\end{tikzpicture}
\end{center}

Since the number of single elements is $O(\sqrt n)$ and the number of blocks is also $O(\sqrt n)$, the time complexity of the sum query is $O(\sqrt n)$. In this case, the parameter $\sqrt n$ balances two things: the array is divided into $\sqrt n$ blocks, each of which contains $\sqrt n$ elements.
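The following code shows one possible implementation of the structure, as a minimal sketch: it uses a 0-indexed array, and the names \texttt{build}, \texttt{update} and \texttt{sum} are our own choices. This and the later sketches in this chapter assume \texttt{\#include <bits/stdc++.h>} and \texttt{using namespace std;}.
\begin{lstlisting}
int n, b;             // array size and block size
vector<long long> t;  // array values
vector<long long> s;  // sums of blocks

// builds the structure from an array a
void build(const vector<long long>& a) {
    n = a.size();
    b = max(1, (int)sqrt(n));
    t = a;
    s.assign((n+b-1)/b, 0);
    for (int i = 0; i < n; i++) s[i/b] += t[i];
}

// changes the value at position k (O(1) time)
void update(int k, long long x) {
    s[k/b] += x-t[k];
    t[k] = x;
}

// calculates the sum of values in the range [l,r] (O(sqrt(n)) time)
long long sum(int l, int r) {
    long long res = 0;
    while (l <= r && l%b != 0) res += t[l++];    // single elements
    while (l+b <= r+1) {res += s[l/b]; l += b;}  // whole blocks
    while (l <= r) res += t[l++];                // single elements
    return res;
}
\end{lstlisting}
For example, after building the structure for the array above, \texttt{sum(3,13)} adds the single element at position 3, the sums of the two middle blocks, and the single elements at positions 12 and 13, exactly as in the picture.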
In practice, we do not need to use the exact value of $\sqrt n$ as the parameter; instead, it may be better to use parameters $k$ and $n/k$ where $k$ is different from $\sqrt n$. The optimal parameter depends on the problem and input. For example, if an algorithm often goes through the blocks but rarely inspects single elements inside the blocks, it may be a good idea to divide the array into $k < \sqrt n$ blocks, each of which contains $n/k > \sqrt n$ elements.

\section{Combining algorithms}

In this section we discuss two square root algorithms that are based on combining two algorithms into one algorithm. In both cases, we could use either of the algorithms alone and solve the problem in $O(n^2)$ time. However, by combining the algorithms, the running time becomes $O(n \sqrt n)$.

\subsubsection{Case processing}

Suppose that we are given a two-dimensional grid that contains $n$ cells. Each cell is assigned a letter, and our task is to find two cells with the same letter whose distance is minimum, where the distance between cells $(x_1,y_1)$ and $(x_2,y_2)$ is $|x_1-x_2|+|y_1-y_2|$. For example, consider the following grid:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\node at (0.5,0.5) {A};
\node at (0.5,1.5) {B};
\node at (0.5,2.5) {C};
\node at (0.5,3.5) {A};
\node at (1.5,0.5) {C};
\node at (1.5,1.5) {D};
\node at (1.5,2.5) {E};
\node at (1.5,3.5) {F};
\node at (2.5,0.5) {B};
\node at (2.5,1.5) {A};
\node at (2.5,2.5) {G};
\node at (2.5,3.5) {B};
\node at (3.5,0.5) {D};
\node at (3.5,1.5) {F};
\node at (3.5,2.5) {E};
\node at (3.5,3.5) {A};
\draw (0,0) grid (4,4);
\end{tikzpicture}
\end{center}
In this case, the minimum distance is 2 between the two 'E' letters.

Let us consider the problem of calculating the minimum distance between two cells with a \emph{fixed} letter $c$. There are two algorithms for this:

\emph{Algorithm 1:} Go through all pairs of cells with letter $c$, and calculate the minimum distance between such cells. This takes $O(k^2)$ time where $k$ is the number of cells with letter $c$.

\emph{Algorithm 2:} Perform a breadth-first search that simultaneously starts at each cell with letter $c$. The minimum distance between two cells with letter $c$ is calculated in $O(n)$ time.

Now we can go through all letters that appear in the grid and use either of the above algorithms. If we always used Algorithm 1, the running time would be $O(n^2)$, because all cells may contain the same letter, in which case $k=n$. Similarly, if we always used Algorithm 2, the running time would be $O(n^2)$, because all cells may contain different letters, in which case $n$ searches would be needed.

However, we can \emph{combine} the two algorithms and use a different algorithm for each letter depending on how many times the letter appears in the grid. Assume that a letter $c$ appears $k$ times. If $k \le \sqrt n$, we use Algorithm 1, and if $k > \sqrt n$, we use Algorithm 2. It turns out that by doing this, the total running time of the algorithm is only $O(n \sqrt n)$.

First, suppose that we use Algorithm 1 for a letter $c$. Since $c$ appears at most $\sqrt n$ times in the grid, we compare each cell with letter $c$ with $O(\sqrt n)$ other cells. Thus, the time used for processing all such cells is $O(n \sqrt n)$. Then, suppose that we use Algorithm 2 for a letter $c$. Since each such letter appears more than $\sqrt n$ times, there are at most $\sqrt n$ such letters, so processing those letters also takes $O(n \sqrt n)$ time.

\subsubsection{Batch processing}

Consider again a two-dimensional grid that contains $n$ cells. Initially, each cell except one is white. We perform $n-1$ operations, each of which is given a white cell. Each operation first calculates the minimum distance between the white cell and any black cell, and then paints the white cell black. For example, consider the following operation:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=black] (1,1) rectangle (2,2);
\fill[color=black] (3,1) rectangle (4,2);
\fill[color=black] (0,3) rectangle (1,4);
\node at (2.5,3.5) {*};
\draw (0,0) grid (4,4);
\end{tikzpicture}
\end{center}
There are three black cells, and the cell marked with * will be painted black next. Before painting the cell, the minimum distance to a black cell is calculated. In this case the minimum distance is 2, to the black cell on its left.

There are two algorithms for solving the problem:

\emph{Algorithm 1:} After each operation, use breadth-first search to calculate for each white cell the distance to the nearest black cell. Each search takes $O(n)$ time, so the total running time is $O(n^2)$.

\emph{Algorithm 2:} Maintain a list of cells that have been painted black, go through this list at each operation, and then add a new cell to the list. The size of the list is $O(n)$, so the algorithm takes $O(n^2)$ time.

We can combine the above algorithms by dividing the operations into $O(\sqrt n)$ \emph{batches}, each of which consists of $O(\sqrt n)$ operations. At the beginning of each batch, we calculate for each white cell the minimum distance to a black cell using breadth-first search. Then, when processing a batch, we maintain a list of cells that have been painted black in the current batch. The list contains $O(\sqrt n)$ elements, because there are $O(\sqrt n)$ operations in each batch. Now, the distance between a white cell and the nearest black cell is either the precalculated distance or the distance to a cell that appears in the list.

The resulting algorithm works in $O(n \sqrt n)$ time. First, there are $O(\sqrt n)$ breadth-first searches and each search takes $O(n)$ time. Second, the total number of distances calculated during the algorithm is $O(n)$, and when calculating each distance, we go through a list of $O(\sqrt n)$ cells.
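The batch technique could be implemented for example as follows. This is only a sketch: the input format (grid size $h \times w$, the initial black cell, and the list of operations) is our own assumption, not from any particular problem statement.
\begin{lstlisting}
int main() {
    int h, w, m;
    cin >> h >> w >> m;  // grid size and number of operations
    vector<pair<int,int>> black(1);
    cin >> black[0].first >> black[0].second;  // initial black cell
    vector<pair<int,int>> ops(m);
    for (auto& [y,x] : ops) cin >> y >> x;

    int n = h*w;
    int b = max(1, (int)sqrt(n));  // batch size
    vector<vector<int>> dist(h, vector<int>(w));
    int dy[] = {1,-1,0,0}, dx[] = {0,0,1,-1};
    for (int i = 0; i < m; i += b) {
        // start of a batch: breadth-first search that begins
        // simultaneously at all black cells
        for (auto& row : dist) fill(row.begin(), row.end(), -1);
        queue<pair<int,int>> q;
        for (auto [y,x] : black) {dist[y][x] = 0; q.push({y,x});}
        while (!q.empty()) {
            auto [y,x] = q.front(); q.pop();
            for (int d = 0; d < 4; d++) {
                int ny = y+dy[d], nx = x+dx[d];
                if (0 <= ny && ny < h && 0 <= nx && nx < w &&
                    dist[ny][nx] == -1) {
                    dist[ny][nx] = dist[y][x]+1;
                    q.push({ny,nx});
                }
            }
        }
        // process the operations of the batch
        vector<pair<int,int>> recent;  // cells painted in this batch
        for (int j = i; j < min(i+b, m); j++) {
            auto [y,x] = ops[j];
            int ans = dist[y][x];  // precalculated distance
            for (auto [py,px] : recent)
                ans = min(ans, abs(y-py)+abs(x-px));
            cout << ans << "\n";
            recent.push_back({y,x});
            black.push_back({y,x});
        }
    }
}
\end{lstlisting}
Note that in a grid without obstacles, the breadth-first search distance equals the distance $|x_1-x_2|+|y_1-y_2|$ used in the problem, so both parts of the algorithm measure the same quantity.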
\section{Integer partitions}

Some square root algorithms are based on the following observation: if a positive integer $n$ is represented as a sum of positive integers, such a sum contains only $O(\sqrt n)$ \emph{distinct} numbers. The reason for this is that a sum with the maximum number of distinct numbers has to be of the form
\[1+2+3+\cdots+k = n.\]
The sum of the numbers $1,2,\ldots,k$ is
\[\frac{k(k+1)}{2},\]
so the maximum number of distinct numbers is $k = O(\sqrt n)$. Next we will discuss two problems that can be solved efficiently using this observation.

\subsubsection{Knapsack}

Suppose that we are given a list of integer weights whose sum is $n$. Our task is to find out all sums that can be formed using a subset of the weights. For example, if the weights are $\{1,3,3\}$, the possible sums are as follows:
\begin{itemize}[noitemsep]
\item $0$ (empty set)
\item $1$
\item $3$
\item $1+3=4$
\item $3+3=6$
\item $1+3+3=7$
\end{itemize}
Using the standard knapsack approach (see Chapter 7.4), the problem can be solved as follows: we define a function $f(k,s)$ whose value is 1 if the sum $s$ can be formed using the first $k$ weights, and 0 otherwise. All values of this function can be calculated in $O(n^2)$ time using dynamic programming.

However, we can make the algorithm more efficient by using the fact that the sum of the weights is $n$, which means that there are at most $O(\sqrt n)$ distinct weights. Thus, we can process the weights in groups such that all weights in each group are equal. It turns out that we can process each group in $O(n)$ time, which yields an $O(n \sqrt n)$ time algorithm.

The idea is to use an array that records the sums of weights that can be formed using the groups processed so far. The array contains $n$ elements: element $k$ is 1 if the sum $k$ can be formed and 0 otherwise. To process a group of $c$ copies of a weight $w$, we scan the array from left to right within each residue class modulo $w$, and mark the new sums that become reachable by adding at most $c$ copies of $w$ to a previously reachable sum.
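The following function is a minimal sketch of this idea; the function name and the counter that tracks how many copies of the current weight are still available are our own choices.
\begin{lstlisting}
// returns a vector can where can[s] tells whether the sum s
// can be formed; the sum of the weights in w is n
vector<bool> subsetSums(vector<int> w, int n) {
    map<int,int> cnt;  // group equal weights together
    for (int x : w) cnt[x]++;
    vector<bool> can(n+1, false);
    can[0] = true;
    for (auto [x, c] : cnt) {
        // process a group of c copies of weight x in O(n) time:
        // scan each residue class modulo x from left to right
        for (int r = 0; r < x; r++) {
            int left = 0;  // copies of x still available
            for (int s = r; s <= n; s += x) {
                if (can[s]) left = c;
                else if (left > 0) {can[s] = true; left--;}
            }
        }
    }
    return can;
}
\end{lstlisting}
For the weights $\{1,3,3\}$, the function marks exactly the sums $0,1,3,4,6,7$. Each array element is visited once per group, so the total running time is $O(n \sqrt n)$.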
\subsubsection{String construction}

Given a string and a dictionary of words, consider the problem of counting the number of ways the string can be constructed using the dictionary words. For example, if the string is \texttt{ABAB} and the dictionary is $\{\texttt{A},\texttt{B},\texttt{AB}\}$, there are 4 ways: $\texttt{A}+\texttt{B}+\texttt{A}+\texttt{B}$, $\texttt{AB}+\texttt{A}+\texttt{B}$, $\texttt{A}+\texttt{B}+\texttt{AB}$ and $\texttt{AB}+\texttt{AB}$.

Assume that the length of the string is $n$ and the total length of the dictionary words is $m$. A natural way to solve the problem is to use dynamic programming: we define a function $f$ such that $f(k)$ denotes the number of ways to construct a prefix of length $k$ of the string using the dictionary words. Then, $f(n)$ gives the answer to the problem.

There are several ways to calculate the values of $f$. One method is to store the dictionary words in a trie and go through all ways to select the last word in each prefix, which results in an $O(n^2)$ time algorithm. Instead of using a trie, we can also use string hashing: for each prefix, we go through the dictionary words and compare their hash values with hash values of substrings of the string. The most straightforward implementation of this idea yields an $O(nm)$ time algorithm, because the dictionary may contain $m$ words.

However, we can make the algorithm more efficient by grouping the dictionary words by their lengths. For a fixed prefix, each group can be processed in constant time, because we can store the hash values of the group's words in a set, and it suffices to check whether the hash value of a single substring belongs to the set. Since the total length of the words is $m$, there are at most $O(\sqrt m)$ distinct word lengths and thus at most $O(\sqrt m)$ groups. Thus, the running time of the algorithm is only $O(n \sqrt m)$.
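A possible implementation is sketched below. It uses a single polynomial hash modulo $2^{64}$ (via unsigned overflow); in practice one should guard against collisions, for example by combining two hash functions. The function name and the hash base are our own choices.
\begin{lstlisting}
typedef unsigned long long ull;
const ull A = 911382323;  // hash base (arbitrary choice)

long long countWays(string s, vector<string> dict) {
    int n = s.size();
    // hash values of the dictionary words, grouped by word length
    map<int, set<ull>> groups;
    for (auto& w : dict) {
        ull h = 0;
        for (char c : w) h = h*A + c;
        groups[w.size()].insert(h);
    }
    // prefix hashes and powers of A for O(1) substring hashes
    vector<ull> pre(n+1, 0), pw(n+1, 1);
    for (int i = 0; i < n; i++) {
        pre[i+1] = pre[i]*A + s[i];
        pw[i+1] = pw[i]*A;
    }
    vector<long long> f(n+1, 0);
    f[0] = 1;  // the empty prefix
    for (int k = 1; k <= n; k++) {
        // go through the O(sqrt(m)) groups (map is ordered by length)
        for (auto& [len, hashes] : groups) {
            if (len > k) break;
            // hash value of the substring s[k-len..k-1]
            ull h = pre[k] - pre[k-len]*pw[len];
            if (hashes.count(h)) f[k] += f[k-len];
        }
    }
    return f[n];
}
\end{lstlisting}
For the string \texttt{ABAB} and the dictionary $\{\texttt{A},\texttt{B},\texttt{AB}\}$, the function returns 4.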
\section{Mo's algorithm}

\index{Mo's algorithm}

\key{Mo's algorithm}\footnote{According to \cite{cod15}, this algorithm is named after Mo Tao, a Chinese competitive programmer, but the technique has appeared earlier in the literature \cite{ken06}.} can be used in many problems that require processing range queries in a \emph{static} array, i.e., an array whose values never change between the queries. Since the array is static, the queries can be processed in any order, and before processing them, the algorithm sorts the queries in a special order which guarantees that the algorithm works efficiently.

At each moment in the algorithm, there is an active range, and the algorithm maintains the answer to a query related to that range. The algorithm processes the queries one by one, and always moves the endpoints of the active range by inserting and removing elements. The time complexity of the algorithm is $O(n \sqrt n \, f(n))$ if the array contains $n$ elements, there are $n$ queries, and each insertion and removal of an element takes $O(f(n))$ time.

The trick in Mo's algorithm is the order in which the queries are processed: the array is divided into blocks of $O(\sqrt n)$ elements, and the queries are sorted primarily by the number of the block that contains the first element of the range, and secondarily by the position of the last element of the range. It turns out that using this order, the algorithm only performs $O(n \sqrt n)$ operations, because the left endpoint moves $O(\sqrt n)$ steps between consecutive queries ($n$ times in total), and the right endpoint moves at most $O(n)$ steps within each of the $\sqrt n$ blocks. Thus, both endpoints move a total of $O(n \sqrt n)$ steps during the algorithm.

\subsubsection*{Example}

As an example, consider a problem where we are given a set of queries, each of them corresponding to a range in an array, and our task is to calculate for each query the number of \emph{distinct} elements in the range.

In Mo's algorithm, the queries are always sorted in the same way, but the way the answer to the query is maintained depends on the problem. In this problem, we can maintain an array \texttt{count} where $\texttt{count}[x]$ indicates the number of times an element $x$ occurs in the active range.

When we move from one query to another query, the active range changes. For example, if the current range is
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (1,0) rectangle (5,1);
\draw (0,0) grid (9,1);
\node at (0.5, 0.5) {4};
\node at (1.5, 0.5) {2};
\node at (2.5, 0.5) {5};
\node at (3.5, 0.5) {4};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {4};
\node at (6.5, 0.5) {3};
\node at (7.5, 0.5) {3};
\node at (8.5, 0.5) {4};
\end{tikzpicture}
\end{center}
and the next range is
\begin{center}
\begin{tikzpicture}[scale=0.7]
\fill[color=lightgray] (2,0) rectangle (7,1);
\draw (0,0) grid (9,1);
\node at (0.5, 0.5) {4};
\node at (1.5, 0.5) {2};
\node at (2.5, 0.5) {5};
\node at (3.5, 0.5) {4};
\node at (4.5, 0.5) {2};
\node at (5.5, 0.5) {4};
\node at (6.5, 0.5) {3};
\node at (7.5, 0.5) {3};
\node at (8.5, 0.5) {4};
\end{tikzpicture}
\end{center}
there will be three steps: the left endpoint moves one step to the right, and the right endpoint moves two steps to the right.

After each step, the array \texttt{count} needs to be updated. After adding an element $x$, we increase the value of $\texttt{count}[x]$ by 1, and if $\texttt{count}[x]=1$ after this, we also increase the answer to the query by 1. Similarly, after removing an element $x$, we decrease the value of $\texttt{count}[x]$ by 1, and if $\texttt{count}[x]=0$ after this, we also decrease the answer to the query by 1. In this problem, the time needed to perform each step is $O(1)$, so the total time complexity of the algorithm is $O(n \sqrt n)$.
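The following function is a minimal sketch of the algorithm for this problem, assuming 0-indexed inclusive ranges $[l,r]$ and small non-negative array values; the function name and the query representation are our own choices.
\begin{lstlisting}
// returns the number of distinct values in each query range
vector<int> moSolve(vector<int> a, vector<pair<int,int>> q) {
    int n = a.size(), b = max(1, (int)sqrt(n));
    vector<int> order(q.size());
    iota(order.begin(), order.end(), 0);
    // sort primarily by the block of the left endpoint,
    // secondarily by the right endpoint
    sort(order.begin(), order.end(), [&](int i, int j) {
        if (q[i].first/b != q[j].first/b)
            return q[i].first/b < q[j].first/b;
        return q[i].second < q[j].second;
    });
    vector<int> count(*max_element(a.begin(), a.end())+1, 0);
    vector<int> res(q.size());
    int l = 0, r = -1, distinct = 0;  // the active range is empty
    for (int i : order) {
        // extend the range first, then shrink it
        while (r < q[i].second) {if (++count[a[++r]] == 1) distinct++;}
        while (l > q[i].first)  {if (++count[a[--l]] == 1) distinct++;}
        while (r > q[i].second) {if (--count[a[r--]] == 0) distinct--;}
        while (l < q[i].first)  {if (--count[a[l++]] == 0) distinct--;}
        res[i] = distinct;
    }
    return res;
}
\end{lstlisting}
Extending the range before shrinking it guarantees that no count ever becomes negative while moving between queries.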