Handle some conflicts in chapters 1 and 2

Roope Salmi 2017-02-26 16:50:31 +02:00 committed by Roope Salmi
commit 57e13ada8b
26 changed files with 520 additions and 311 deletions

book.pdf (binary file not shown)

View File

@@ -1,6 +1,6 @@
 \documentclass[twoside,12pt,a4paper,english]{book}
-%\includeonly{luku01,kirj}
+%\includeonly{chapter01,list}
 \usepackage[english]{babel}
 \usepackage[utf8]{inputenc}
@@ -27,6 +27,8 @@
 \usepackage{titlesec}
+\usepackage{skak}
 \usetikzlibrary{patterns,snakes}
 \pagestyle{plain}
@@ -48,6 +50,7 @@
 \author{\Large Antti Laaksonen}
 \makeindex
+\usepackage[totoc]{idxlayout}
 \titleformat{\subsubsection}
 {\normalfont\large\bfseries\sffamily}{\thesubsection}{1em}{}
@@ -64,7 +67,7 @@
 \setcounter{tocdepth}{1}
 \tableofcontents
-\include{johdanto}
+\include{preface}
 \mainmatter
 \pagenumbering{arabic}
@@ -73,41 +76,45 @@
 \newcommand{\key}[1] {\textbf{#1}}
 \part{Basic techniques}
-\include{luku01}
-\include{luku02}
-\include{luku03}
-\include{luku04}
-\include{luku05}
-\include{luku06}
-\include{luku07}
-\include{luku08}
-\include{luku09}
-\include{luku10}
+\include{chapter01}
+\include{chapter02}
+\include{chapter03}
+\include{chapter04}
+\include{chapter05}
+\include{chapter06}
+\include{chapter07}
+\include{chapter08}
+\include{chapter09}
+\include{chapter10}
 \part{Graph algorithms}
-\include{luku11}
-\include{luku12}
-\include{luku13}
-\include{luku14}
-\include{luku15}
-\include{luku16}
-\include{luku17}
-\include{luku18}
-\include{luku19}
-\include{luku20}
+\include{chapter11}
+\include{chapter12}
+\include{chapter13}
+\include{chapter14}
+\include{chapter15}
+\include{chapter16}
+\include{chapter17}
+\include{chapter18}
+\include{chapter19}
+\include{chapter20}
 \part{Advanced topics}
-\include{luku21}
-\include{luku22}
-\include{luku23}
-\include{luku24}
-\include{luku25}
-\include{luku26}
-\include{luku27}
-\include{luku28}
-\include{luku29}
-\include{luku30}
-\include{kirj}
+\include{chapter21}
+\include{chapter22}
+\include{chapter23}
+\include{chapter24}
+\include{chapter25}
+\include{chapter26}
+\include{chapter27}
+\include{chapter28}
+\include{chapter29}
+\include{chapter30}
+\cleardoublepage
+\phantomsection
+\addcontentsline{toc}{chapter}{Bibliography}
+\include{list}
 \cleardoublepage
 \printindex
-\end{document}la
+\end{document}

View File

@@ -117,10 +117,10 @@ but now it suffices to write \texttt{cout}.
 The code can be compiled using the following command:
 \begin{lstlisting}
-g++ -std=c++11 -O2 -Wall code.cpp -o code
+g++ -std=c++11 -O2 -Wall code.cpp -o bin
 \end{lstlisting}
-This command produces a binary file \texttt{code}
+This command produces a binary file \texttt{bin}
 from the source code \texttt{code.cpp}.
 The compiler follows the C++11 standard
 (\texttt{-std=c++11}),
@@ -286,7 +286,7 @@ Still, it is good to know that
 the \texttt{g++} compiler also provides
 a 128-bit type \texttt{\_\_int128\_t}
 with a value range of
-$-2^{127} \ldots 2^{127}-1$ or $-10^{38} \ldots 10^{38}$.
+$-2^{127} \ldots 2^{127}-1$ or about $-10^{38} \ldots 10^{38}$.
 However, this type is not available in all contest systems.
 \subsubsection{Modular arithmetic}
@@ -624,7 +624,7 @@ For example, in the above set
 New sets can be constructed using set operations:
 \begin{itemize}
 \item The \key{intersection} $A \cap B$ consists of elements
-that are both in $A$ and $B$.
+that are in both $A$ and $B$.
 For example, if $A=\{1,2,5\}$ and $B=\{2,4\}$,
 then $A \cap B = \{2\}$.
 \item The \key{union} $A \cup B$ consists of elements
@@ -778,7 +778,9 @@ n! & = & n \cdot (n-1)! \\
 \index{Fibonacci number}
-The \key{Fibonacci numbers} arise in many situations.
+The \key{Fibonacci numbers}
+%\footnote{Fibonacci (c. 1175--1250) was an Italian mathematician.}
+arise in many situations.
 They can be defined recursively as follows:
 \[
 \begin{array}{lcl}
@@ -790,7 +792,8 @@ f(n) & = & f(n-1)+f(n-2) \\
 The first Fibonacci numbers are
 \[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, \ldots\]
 There is also a closed-form formula
-for calculating Fibonacci numbers:
-\index{Binet's formula}
+for calculating Fibonacci numbers\footnote{This formula is sometimes called
+\key{Binet's formula}.}:
 \[f(n)=\frac{(1 + \sqrt{5})^n - (1-\sqrt{5})^n}{2^n \sqrt{5}}.\]
 \subsubsection{Logarithms}
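As a quick sanity check on the closed-form formula above, the recursive definition and Binet's formula can be compared numerically. A minimal C++ sketch (the function names are ours, not from the book):

```cpp
#include <cmath>

// Fibonacci numbers from the recursive definition, computed iteratively.
long long fib(int n) {
    long long a = 0, b = 1;      // f(0) = 0, f(1) = 1
    for (int i = 0; i < n; i++) {
        long long c = a + b;     // f(n) = f(n-1) + f(n-2)
        a = b;
        b = c;
    }
    return a;
}

// Binet's closed-form formula, rounded to the nearest integer.
// Accurate in double precision for small n only.
long long fibBinet(int n) {
    double s = std::sqrt(5.0);
    double v = (std::pow(1 + s, n) - std::pow(1 - s, n)) / (std::pow(2.0, n) * s);
    return (long long)std::round(v);
}
```

Both functions agree on small inputs; in contest code the iterative version is preferred because the floating-point formula loses precision for large $n$.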
@@ -887,12 +890,13 @@ The International Collegiate Programming Contest (ICPC)
 is an annual programming contest for university students.
 Each team in the contest consists of three students,
 and unlike in the IOI, the students work together;
-there is even only one computer available for each team.
+there is only one computer available for each team.
 The ICPC consists of several stages, and finally the
 best teams are invited to the World Finals.
 While there are tens of thousands of participants
-in the contest, there are only 128 final slots available,
+in the contest, there are only a small number\footnote{The exact number of final
+slots varies from year to year; in 2016, there were 128 final slots.} of final slots available,
 so even advancing to the finals
 is a great achievement in some regions.
@@ -924,7 +928,7 @@ Google Code Jam and Yandex.Algorithm.
 Of course, companies also use those contests for recruiting:
 performing well in a contest is a good way to prove one's skills.
-\section{Books}
+\section{Resources}
 \subsubsection{Competitive programming books}
@@ -933,12 +937,11 @@ concentrate on competitive programming and algorithmic problem solving:
 \begin{itemize}
 \item S. Halim and F. Halim:
-\emph{Competitive Programming 3: The New Lower Bound of Programming Contests}, 2013
+\emph{Competitive Programming 3: The New Lower Bound of Programming Contests} \cite{hal13}
 \item S. S. Skiena and M. A. Revilla:
-\emph{Programming Challenges: The Programming Contest Training Manual},
-Springer, 2003
-\item \emph{Looking for a Challenge? The Ultimate Problem Set from
-the University of Warsaw Programming Competitions}, 2012
+\emph{Programming Challenges: The Programming Contest Training Manual} \cite{ski03}
+\item K. Diks et al.: \emph{Looking for a Challenge? The Ultimate Problem Set from
+the University of Warsaw Programming Competitions} \cite{dik12}
 \end{itemize}
 The first two books are intended for beginners,
@@ -952,9 +955,9 @@ Some good books are:
 \begin{itemize}
 \item T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein:
-\emph{Introduction to Algorithms}, MIT Press, 2009 (3rd edition)
+\emph{Introduction to Algorithms} \cite{cor09}
 \item J. Kleinberg and É. Tardos:
-\emph{Algorithm Design}, Pearson, 2005
+\emph{Algorithm Design} \cite{kle05}
 \item S. S. Skiena:
-\emph{The Algorithm Design Manual}, Springer, 2008 (2nd edition)
+\emph{The Algorithm Design Manual} \cite{ski08}
 \end{itemize}

View File

@@ -281,7 +281,11 @@ Still, there are many important problems for which
 no polynomial algorithm is known, i.e.,
 nobody knows how to solve them efficiently.
 \key{NP-hard} problems are an important set
-of problems, for which no polynomial algorithm is known \cite{gar79}.
+of problems, for which no polynomial algorithm
+is known\footnote{A classic book on the topic is
+M. R. Garey's and D. S. Johnson's
+\emph{Computers and Intractability: A Guide to the Theory
+of NP-Completeness} \cite{gar79}.}.
 \section{Estimating efficiency}
@@ -309,15 +313,14 @@ assuming a time limit of one second.
 \begin{center}
 \begin{tabular}{ll}
-input size ($n$) & required time complexity \\
+typical input size & required time complexity \\
 \hline
-$n \le 10^{18}$ & $O(1)$ or $O(\log n)$ \\
-$n \le 10^{12}$ & $O(\sqrt n)$ \\
-$n \le 10^6$ & $O(n)$ or $O(n \log n)$ \\
-$n \le 5000$ & $O(n^2)$ \\
-$n \le 500$ & $O(n^3)$ \\
-$n \le 25$ & $O(2^n)$ \\
 $n \le 10$ & $O(n!)$ \\
+$n \le 20$ & $O(2^n)$ \\
+$n \le 500$ & $O(n^3)$ \\
+$n \le 5000$ & $O(n^2)$ \\
+$n \le 10^6$ & $O(n \log n)$ or $O(n)$ \\
+$n$ is large & $O(1)$ or $O(\log n)$ \\
 \end{tabular}
 \end{center}
@@ -353,8 +356,8 @@ time and even in $O(n)$ time.
 Given an array of $n$ integers $x_1,x_2,\ldots,x_n$,
 our task is to find the
-\key{maximum subarray sum}\footnote{Bentley's
-book \emph{Programming Pearls} \cite{ben86} made this problem popular.}, i.e.,
+\key{maximum subarray sum}\footnote{J. Bentley's
+book \emph{Programming Pearls} \cite{ben86} made the problem popular.}, i.e.,
 the largest possible sum of numbers
 in a contiguous region in the array.
 The problem is interesting when there may be
@@ -444,8 +447,8 @@ and the sum of the numbers is calculated to the variable $s$.
 The variable $p$ contains the maximum sum found during the search.
 The time complexity of the algorithm is $O(n^3)$,
-because it consists of three nested loops and
-each loop contains $O(n)$ steps.
+because it consists of three nested loops
+that go through the input.
 \subsubsection{Algorithm 2}
@@ -471,7 +474,9 @@ After this change, the time complexity is $O(n^2)$.
 \subsubsection{Algorithm 3}
 Surprisingly, it is possible to solve the problem
-in $O(n)$ time, which means that we can remove
+in $O(n)$ time\footnote{In \cite{ben86}, this linear-time algorithm
+is attributed to J. B. Kadane, and the algorithm is sometimes
+called \index{Kadane's algorithm} \key{Kadane's algorithm}.}, which means that we can remove
 one more loop.
 The idea is to calculate, for each array position,
 the maximum sum of a subarray that ends at that position.
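The linear-time idea above, computing for each position the maximum sum of a subarray ending there, can be sketched as follows (a hedged C++ version with names of our own choosing; like the book's convention, the empty subarray with sum 0 is allowed):

```cpp
#include <algorithm>
#include <vector>

// Kadane-style scan: sum is the best subarray sum ending at the
// current position (extend the previous subarray or restart here);
// best is the maximum over all end positions.
long long maxSubarraySum(const std::vector<int>& x) {
    long long best = 0, sum = 0;
    for (int v : x) {
        sum = std::max((long long)v, sum + v); // extend or restart
        best = std::max(best, sum);
    }
    return best;
}
```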

View File

@@ -326,7 +326,8 @@ of the algorithm is at least $O(n^2)$.
 It is possible to sort an array efficiently
 in $O(n \log n)$ time using algorithms
 that are not limited to swapping consecutive elements.
-One such algorithm is \key{mergesort}
+One such algorithm is \key{mergesort}\footnote{According to \cite{knu983},
+mergesort was invented by J. von Neumann in 1945.}
 that is based on recursion.
 Mergesort sorts a subarray \texttt{t}$[a,b]$ as follows:
@@ -538,8 +539,7 @@ but use some other information.
 An example of such an algorithm is
 \key{counting sort} that sorts an array in
 $O(n)$ time assuming that every element in the array
-is an integer between $0 \ldots c$ where $c$
-is a small constant.
+is an integer between $0 \ldots c$ and $c=O(n)$.
 The algorithm creates a \emph{bookkeeping} array
 whose indices are elements in the original array.
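The bookkeeping idea can be sketched directly (our own function names; a minimal version that assumes all elements lie in $0 \ldots c$):

```cpp
#include <vector>

// Counting sort: O(n + c) time when every element is in 0..c.
std::vector<int> countingSort(const std::vector<int>& t, int c) {
    std::vector<int> cnt(c + 1, 0);     // bookkeeping array of counts
    for (int v : t) cnt[v]++;           // count occurrences of each value
    std::vector<int> res;
    res.reserve(t.size());
    for (int v = 0; v <= c; v++)        // emit each value cnt[v] times
        for (int i = 0; i < cnt[v]; i++) res.push_back(v);
    return res;
}
```

The whole algorithm is linear as long as $c=O(n)$, matching the assumption in the text.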

View File

@@ -196,7 +196,7 @@ for (auto x : s) {
 }
 \end{lstlisting}
-An important property of sets
+An important property of sets is
 that all the elements are \emph{distinct}.
 Thus, the function \texttt{count} always returns
 either 0 (the element is not in the set)
@@ -723,7 +723,7 @@ $5 \cdot 10^6$ & $10{,}0$ s & $2{,}3$ s & $0{,}9$ s \\
 \end{tabular}
 \end{center}
-Algorithm 1 and 2 are equal except that
+Algorithms 1 and 2 are equal except that
 they use different set structures.
 In this problem, this choice has an important effect on
 the running time, because algorithm 2

View File

@@ -436,18 +436,18 @@ the $4 \times 4$ board are numbered as follows:
 \end{tikzpicture}
 \end{center}
+Let $q(n)$ denote the number of ways
+to place $n$ queens to the $n \times n$ chessboard.
 The above backtracking
-algorithm tells us that
-there are 92 ways to place 8
-queens to the $8 \times 8$ chessboard.
+algorithm tells us that $q(8)=92$.
 When $n$ increases, the search quickly becomes slow,
 because the number of the solutions increases
 exponentially.
-For example, calculating the ways to
-place 16 queens to the $16 \times 16$
-chessboard already takes about a minute
-on a modern computer
-(there are 14772512 solutions).
+For example, calculating $q(16)=14772512$
+using the above algorithm already takes about a minute
+on a modern computer\footnote{There is no known way to efficiently
+calculate larger values of $q(n)$. The current record is
+$q(27)=234907967154122528$, calculated in 2016 \cite{q27}.}.
 \section{Pruning the search}
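The backtracking search for $q(n)$ can be sketched as below, placing one queen per row and marking occupied columns and diagonals (the array names are ours):

```cpp
#include <vector>

// Count placements row by row; col marks used columns, d1 and d2 mark
// the two diagonal directions (indices x+y and x-y+n-1).
int search(int y, int n, std::vector<int>& col,
           std::vector<int>& d1, std::vector<int>& d2) {
    if (y == n) return 1;                 // all rows filled: one solution
    int cnt = 0;
    for (int x = 0; x < n; x++) {
        if (col[x] || d1[x + y] || d2[x - y + n - 1]) continue;
        col[x] = d1[x + y] = d2[x - y + n - 1] = 1;
        cnt += search(y + 1, n, col, d1, d2);
        col[x] = d1[x + y] = d2[x - y + n - 1] = 0; // undo the choice
    }
    return cnt;
}

int countQueens(int n) {
    std::vector<int> col(n, 0), d1(2 * n - 1, 0), d2(2 * n - 1, 0);
    return search(0, n, col, d1, d2);
}
```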
@@ -716,7 +716,8 @@ check if the sum of any of the subsets is $x$.
 The running time of such a solution is $O(2^n)$,
 because there are $2^n$ subsets.
 However, using the meet in the middle technique,
-we can achieve a more efficient $O(2^{n/2})$ time solution.
+we can achieve a more efficient $O(2^{n/2})$ time solution\footnote{This
+technique was introduced in 1974 by E. Horowitz and S. Sahni \cite{hor74}.}.
 Note that $O(2^n)$ and $O(2^{n/2})$ are different
 complexities because $2^{n/2}$ equals $\sqrt{2^n}$.
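A meet-in-the-middle subset sum check can be sketched as follows (a hedged version with our own names; it generates the sums of each half and searches for a matching pair):

```cpp
#include <algorithm>
#include <vector>

// All 2^k subset sums of a list of k elements.
std::vector<long long> subsetSums(const std::vector<int>& a) {
    std::vector<long long> sums = {0};
    for (int v : a) {
        int m = sums.size();
        for (int i = 0; i < m; i++) sums.push_back(sums[i] + v);
    }
    return sums;
}

// Split the list into halves, compute the 2^{n/2} sums of each half,
// and look for a pair of half-sums with total x.
bool subsetSumExists(const std::vector<int>& t, long long x) {
    std::vector<int> left(t.begin(), t.begin() + t.size() / 2);
    std::vector<int> right(t.begin() + t.size() / 2, t.end());
    std::vector<long long> sa = subsetSums(left), sb = subsetSums(right);
    std::sort(sb.begin(), sb.end());
    for (long long s : sa)
        if (std::binary_search(sb.begin(), sb.end(), x - s)) return true;
    return false;
}
```

Sorting and binary searching one half keeps the total work near $O(2^{n/2} \cdot n)$ instead of $O(2^n)$.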

View File

@@ -103,8 +103,12 @@ is 6, the greedy algorithm produces the solution
 $4+1+1$ while the optimal solution is $3+3$.
 It is not known if the general coin problem
-can be solved using any greedy algorithm.
+can be solved using any greedy algorithm\footnote{However, it is possible
+to \emph{check} in polynomial time
+if the greedy algorithm presented in this chapter works for
+a given set of coins \cite{pea05}.}.
 However, as we will see in Chapter 7,
+in some cases,
 the general problem can be efficiently
 solved using a dynamic
 programming algorithm that always gives the
@@ -530,7 +534,9 @@ the string \texttt{AB} or the string \texttt{C}.
 \subsubsection{Huffman coding}
-\key{Huffman coding} \cite{huf52} is a greedy algorithm
+\key{Huffman coding}\footnote{D. A. Huffman discovered this method
+when solving a university course assignment
+and published the algorithm in 1952 \cite{huf52}.} is a greedy algorithm
 that constructs an optimal code for
 compressing a given string.
 The algorithm builds a binary tree
@@ -671,114 +677,4 @@ character & codeword \\
 \texttt{C} & 10 \\
 \texttt{D} & 111 \\
 \end{tabular}
 \end{center}
-% \subsubsection{Why does the algorithm work?}
-%
-% Huffman coding is a greedy algorithm, because it
-% always combines the two nodes whose weights
-% are smallest.
-% Why is it certain that this method always
-% produces an optimal code?
-%
-% Let $c(x)$ denote the number of occurrences of
-% character $x$ in the string, and $s(x)$
-% the length of the codeword that corresponds to $x$.
-% Using this notation, the length of the
-% bit representation of the string is
-% \[\sum_x c(x) \cdot s(x),\]
-% where the sum goes through all characters of the string.
-% For example, in the previous example
-% the length of the bit representation is
-% \[5 \cdot 1 + 1 \cdot 3 + 2 \cdot 2 + 1 \cdot 3 = 15.\]
-% A useful observation is that $s(x)$ equals the
-% \emph{depth} of the node that corresponds to $x$,
-% i.e., the distance from the top of the tree to the node.
-%
-% Let us first justify why an optimal code always
-% corresponds to a binary tree in which every node has
-% either two branches or no branches going down.
-% Assume for contradiction that some node has
-% only one branch going down.
-% For example, in the following tree this is the situation at node $a$:
-% \begin{center}
-% \begin{tikzpicture}[scale=0.9]
-% \node[draw, circle, minimum size=20pt] (3) at (3,1) {\phantom{$a$}};
-% \node[draw, circle, minimum size=20pt] (2) at (4,0) {$b$};
-% \node[draw, circle, minimum size=20pt] (5) at (5,1) {$a$};
-% \node[draw, circle, minimum size=20pt] (6) at (4,2) {\phantom{$a$}};
-%
-% \path[draw,thick,-] (2) -- (5);
-% \path[draw,thick,-] (3) -- (6);
-% \path[draw,thick,-] (5) -- (6);
-% \end{tikzpicture}
-% \end{center}
-% However, such a node $a$ is always useless, because it
-% only adds one bit to the paths that go through the node,
-% and it cannot be used to distinguish two
-% codewords from each other. Hence, the node can be removed
-% from the tree, which yields a better code,
-% so the tree of an optimal code cannot contain
-% a node with only one branch going down.
-%
-% Let us then justify why it is optimal at every step
-% to combine the two nodes whose weights are smallest.
-% Assume for contradiction that the weight of node $a$ is smallest,
-% but it should not be combined with another node first;
-% instead, node $b$ and some other node
-% should be combined:
-% \begin{center}
-% \begin{tikzpicture}[scale=0.9]
-% \node[draw, circle, minimum size=20pt] (1) at (0,0) {\phantom{$a$}};
-% \node[draw, circle, minimum size=20pt] (2) at (-2,-1) {\phantom{$a$}};
-% \node[draw, circle, minimum size=20pt] (3) at (2,-1) {$a$};
-% \node[draw, circle, minimum size=20pt] (4) at (-3,-2) {\phantom{$a$}};
-% \node[draw, circle, minimum size=20pt] (5) at (-1,-2) {\phantom{$a$}};
-% \node[draw, circle, minimum size=20pt] (8) at (-2,-3) {$b$};
-% \node[draw, circle, minimum size=20pt] (9) at (0,-3) {\phantom{$a$}};
-%
-% \path[draw,thick,-] (1) -- (2);
-% \path[draw,thick,-] (1) -- (3);
-% \path[draw,thick,-] (2) -- (4);
-% \path[draw,thick,-] (2) -- (5);
-% \path[draw,thick,-] (5) -- (8);
-% \path[draw,thick,-] (5) -- (9);
-% \end{tikzpicture}
-% \end{center}
-% For nodes $a$ and $b$,
-% $c(a) \le c(b)$ and $s(a) \le s(b)$ hold.
-% The nodes add
-% \[c(a) \cdot s(a) + c(b) \cdot s(b)\]
-% to the length of the bit representation.
-% Consider then another situation,
-% which is otherwise the same as before,
-% but nodes $a$ and $b$ have been swapped with each other:
-% \begin{center}
-% \begin{tikzpicture}[scale=0.9]
-% \node[draw, circle, minimum size=20pt] (1) at (0,0) {\phantom{$a$}};
-% \node[draw, circle, minimum size=20pt] (2) at (-2,-1) {\phantom{$a$}};
-% \node[draw, circle, minimum size=20pt] (3) at (2,-1) {$b$};
-% \node[draw, circle, minimum size=20pt] (4) at (-3,-2) {\phantom{$a$}};
-% \node[draw, circle, minimum size=20pt] (5) at (-1,-2) {\phantom{$a$}};
-% \node[draw, circle, minimum size=20pt] (8) at (-2,-3) {$a$};
-% \node[draw, circle, minimum size=20pt] (9) at (0,-3) {\phantom{$a$}};
-%
-% \path[draw,thick,-] (1) -- (2);
-% \path[draw,thick,-] (1) -- (3);
-% \path[draw,thick,-] (2) -- (4);
-% \path[draw,thick,-] (2) -- (5);
-% \path[draw,thick,-] (5) -- (8);
-% \path[draw,thick,-] (5) -- (9);
-% \end{tikzpicture}
-% \end{center}
-% It turns out that the code of this tree is
-% \emph{equally good or better} than the original code, so the assumption
-% is wrong and Huffman coding does work correctly
-% if it first combines node $a$
-% with some node.
-% This is justified by the following chain of inequalities:
-% \[\begin{array}{rcl}
-% c(b) & \ge & c(a) \\
-% c(b)\cdot(s(b)-s(a)) & \ge & c(a)\cdot (s(b)-s(a)) \\
-% c(b)\cdot s(b)-c(b)\cdot s(a) & \ge & c(a)\cdot s(b)-c(a)\cdot s(a) \\
-% c(a)\cdot s(a)+c(b)\cdot s(b) & \ge & c(a)\cdot s(b)+c(b)\cdot s(a) \\
-% \end{array}\]

View File

@@ -708,7 +708,8 @@ depends on the values of the objects.
 \index{edit distance}
 \index{Levenshtein distance}
-The \key{edit distance} or \key{Levenshtein distance}
+The \key{edit distance} or \key{Levenshtein distance}\footnote{The distance
+is named after V. I. Levenshtein who discussed it in connection with binary codes \cite{lev66}.}
 is the minimum number of editing operations
 needed to transform a string
 into another string.
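The standard dynamic programming recurrence for the edit distance can be sketched as follows (our own names; $d[i][j]$ is the distance between the first $i$ characters of $a$ and the first $j$ characters of $b$):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Minimum number of insert/remove/replace operations that
// transform string a into string b, in O(|a| * |b|) time.
int editDistance(const std::string& a, const std::string& b) {
    int n = a.size(), m = b.size();
    std::vector<std::vector<int>> d(n + 1, std::vector<int>(m + 1));
    for (int i = 0; i <= n; i++) d[i][0] = i;   // remove all of a
    for (int j = 0; j <= m; j++) d[0][j] = j;   // insert all of b
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++) {
            int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            d[i][j] = std::min({d[i - 1][j] + 1,          // remove
                                d[i][j - 1] + 1,          // insert
                                d[i - 1][j - 1] + cost}); // replace/match
        }
    return d[n][m];
}
```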
@@ -983,9 +984,10 @@ $2^m$ distinct rows and the time complexity is
 $O(n 2^{2m})$.
 As a final note, there is also a surprising direct formula
-for calculating the number of tilings\footnote{Surprisingly,
-this formula was discovered independently
-by \cite{kas61} and \cite{tem61} in 1961.}:
+for calculating the number of tilings:
+% \footnote{Surprisingly,
+% this formula was discovered independently
+% by \cite{kas61} and \cite{tem61} in 1961.}:
 \[ \prod_{a=1}^{\lceil n/2 \rceil} \prod_{b=1}^{\lceil m/2 \rceil} 4 \cdot (\cos^2 \frac{\pi a}{n + 1} + \cos^2 \frac{\pi b}{m+1})\]
 This formula is very efficient, because it calculates
 the number of tilings in $O(nm)$ time,
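The product formula can be evaluated directly in floating point; a sketch (our own function name, with rounding that assumes the double-precision result is accurate enough, which holds only for small grids):

```cpp
#include <cmath>

// Number of domino tilings of an n x m grid via the product formula.
// (n + 1) / 2 in integer arithmetic equals ceil(n / 2).
long long countTilings(int n, int m) {
    const double pi = std::acos(-1.0);
    double prod = 1.0;
    for (int a = 1; a <= (n + 1) / 2; a++)
        for (int b = 1; b <= (m + 1) / 2; b++) {
            double ca = std::cos(pi * a / (n + 1));
            double cb = std::cos(pi * b / (m + 1));
            prod *= 4.0 * (ca * ca + cb * cb);
        }
    return (long long)std::round(prod); // the product is a real integer
}
```

For a $2 \times 2$ grid the formula gives 2 tilings, and for a $4 \times 4$ grid 36 tilings, matching the direct counts.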

View File

@@ -440,7 +440,8 @@ we can conclude that $\textrm{rmq}(2,7)=1$.
 \index{binary indexed tree}
 \index{Fenwick tree}
-A \key{binary indexed tree} or \key{Fenwick tree} \cite{fen94}
+A \key{binary indexed tree} or \key{Fenwick tree}\footnote{The
+binary indexed tree structure was presented by P. M. Fenwick in 1994 \cite{fen94}.}
 can be seen as a dynamic version of a prefix sum array.
 This data structure supports two $O(\log n)$ time operations:
 calculating the sum of elements in a range
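A minimal binary indexed tree can be sketched as below (our own struct name; positions are 1-indexed, and `k & -k` extracts the lowest set bit of `k`):

```cpp
#include <vector>

// tree[k] stores the sum of a range of length k & -k ending at k.
struct FenwickTree {
    std::vector<long long> tree;
    FenwickTree(int n) : tree(n + 1, 0) {}
    void add(int k, long long x) {           // value at position k += x
        for (; k < (int)tree.size(); k += k & -k) tree[k] += x;
    }
    long long sum(int k) {                   // prefix sum of positions 1..k
        long long s = 0;
        for (; k >= 1; k -= k & -k) s += tree[k];
        return s;
    }
    long long sum(int a, int b) {            // range sum of [a, b]
        return sum(b) - sum(a - 1);
    }
};
```

Both operations follow $O(\log n)$ chains of lowest-set-bit jumps, which is exactly the "dynamic prefix sum array" behaviour described above.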
@@ -738,7 +739,9 @@ takes $O(1)$ time using bit operations.
 \index{segment tree}
-A \key{segment tree} is a data structure
+A \key{segment tree}\footnote{The origin of this structure is unknown.
+The bottom-up implementation in this chapter corresponds to
+the implementation in \cite{sta06}.} is a data structure
 that supports two operations:
 processing a range query and
 modifying an element in the array.

View File

@@ -391,7 +391,8 @@ to change an iteration over permutations into
 an iteration over subsets, so that
 the dynamic programming state
 contains a subset of a set and possibly
-some additional information.
+some additional information\footnote{This technique was introduced in 1962
+by M. Held and R. M. Karp \cite{hel62}.}.
 The benefit in this is that
 $n!$, the number of permutations of an $n$ element set,

View File

@@ -24,7 +24,9 @@ for finding shortest paths.
 \index{Bellman--Ford algorithm}
-The \key{Bellman--Ford algorithm} \cite{bel58} finds the
+The \key{Bellman--Ford algorithm}\footnote{The algorithm is named after
+R. E. Bellman and L. R. Ford who published it independently
+in 1958 and 1956, respectively \cite{bel58,for56a}.} finds the
 shortest paths from a starting node to all
 other nodes in the graph.
 The algorithm can process all kinds of graphs,
@@ -331,7 +333,9 @@ original Bellman--Ford algorithm.
 \index{Dijkstra's algorithm}
-\key{Dijkstra's algorithm} \cite{dij59} finds the shortest
+\key{Dijkstra's algorithm}\footnote{E. W. Dijkstra published the algorithm in 1959 \cite{dij59};
+however, his original paper does not mention how to implement the algorithm efficiently.}
+finds the shortest
 paths from the starting node to all other nodes,
 like the Bellman--Ford algorithm.
 The benefit of Dijkstra's algorithm is that
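A common priority-queue implementation of Dijkstra's algorithm can be sketched as below (our own names; negated distances turn the default max-queue into a min-queue, a frequent contest idiom):

```cpp
#include <queue>
#include <vector>

// adj[a] holds pairs {b, w}: an edge a -> b with weight w.
// Returns the shortest distance from s to every node (INF if unreachable).
std::vector<long long> dijkstra(
        int n,
        const std::vector<std::vector<std::pair<int,int>>>& adj,
        int s) {
    const long long INF = (long long)1e18;
    std::vector<long long> dist(n, INF);
    std::vector<bool> done(n, false);
    std::priority_queue<std::pair<long long,int>> q; // {-dist, node}
    dist[s] = 0;
    q.push({0, s});
    while (!q.empty()) {
        int a = q.top().second; q.pop();
        if (done[a]) continue;   // already processed with final distance
        done[a] = true;
        for (const auto& e : adj[a]) {
            int b = e.first, w = e.second;
            if (dist[a] + w < dist[b]) {
                dist[b] = dist[a] + w;
                q.push({-dist[b], b});
            }
        }
    }
    return dist;
}
```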
@@ -594,7 +598,9 @@ at most one distance to the priority queue.
 \index{Floyd--Warshall algorithm}
-The \key{Floyd--Warshall algorithm} \cite{flo62}
+The \key{Floyd--Warshall algorithm}\footnote{The algorithm
+is named after R. W. Floyd and S. Warshall
+who published it independently in 1962 \cite{flo62,war62}.}
 is an alternative way to approach the problem
 of finding shortest paths.
 Unlike the other algorithms in this chapter,
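The core of the Floyd–Warshall algorithm is a triple loop over a distance matrix; a sketch (our own names; the matrix starts with edge weights, INF for missing edges, and 0 on the diagonal):

```cpp
#include <vector>

// After round k, dist[a][b] is the shortest path from a to b using
// only nodes 0..k as intermediate nodes; after all rounds it is the
// true shortest distance. O(n^3) time for n nodes.
void floydWarshall(std::vector<std::vector<long long>>& dist) {
    int n = dist.size();
    for (int k = 0; k < n; k++)
        for (int a = 0; a < n; a++)
            for (int b = 0; b < n; b++)
                if (dist[a][k] + dist[k][b] < dist[a][b])
                    dist[a][b] = dist[a][k] + dist[k][b];
}
```

With `INF` around $10^{18}$, the sum of two INF values still fits in a 64-bit integer, so no extra overflow guard is needed in this sketch.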

View File

@@ -123,7 +123,8 @@ maximum spanning trees by processing the edges in reverse order.
 \index{Kruskal's algorithm}
-In \key{Kruskal's algorithm} \cite{kru56}, the initial spanning tree
+In \key{Kruskal's algorithm}\footnote{The algorithm was published in 1956
+by J. B. Kruskal \cite{kru56}.}, the initial spanning tree
 only contains the nodes of the graph
 and does not contain any edges.
 Then the algorithm goes through the edges
@@ -409,7 +410,11 @@ belongs to more than one set.
 Two $O(\log n)$ time operations are supported:
 the \texttt{union} operation joins two sets,
 and the \texttt{find} operation finds the representative
-of the set that contains a given element.
+of the set that contains a given element\footnote{The structure presented here
+was introduced in 1971 by J. D. Hopcroft and J. D. Ullman \cite{hop71}.
+Later, in 1975, R. E. Tarjan studied a more sophisticated variant
+of the structure \cite{tar75} that is discussed in many algorithm
+textbooks nowadays.}.
 \subsubsection{Structure}
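A union-find structure with union by size can be sketched as below (our own struct name; `find` follows links to the representative, and `unite` always joins the smaller set to the larger one, which keeps chains $O(\log n)$ long):

```cpp
#include <utility>
#include <vector>

struct UnionFind {
    std::vector<int> link, size;
    UnionFind(int n) : link(n), size(n, 1) {
        for (int i = 0; i < n; i++) link[i] = i; // each element is its own set
    }
    int find(int x) {                 // follow links to the representative
        while (x != link[x]) x = link[x];
        return x;
    }
    bool same(int a, int b) { return find(a) == find(b); }
    void unite(int a, int b) {        // join the smaller set to the larger
        a = find(a); b = find(b);
        if (a == b) return;
        if (size[a] < size[b]) std::swap(a, b);
        size[a] += size[b];
        link[b] = a;
    }
};
```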
@@ -567,7 +572,10 @@ the smaller set to the larger set.
 \index{Prim's algorithm}
-\key{Prim's algorithm} \cite{pri57} is an alternative method
+\key{Prim's algorithm}\footnote{The algorithm is
+named after R. C. Prim who published it in 1957 \cite{pri57}.
+However, the same algorithm was discovered already in 1930
+by V. Jarník.} is an alternative method
 for finding a minimum spanning tree.
 The algorithm first adds an arbitrary node
 to the tree.

View File

@@ -657,7 +657,9 @@ achieves these properties.
 \index{Floyd's algorithm}
-\key{Floyd's algorithm} walks forward
+\key{Floyd's algorithm}\footnote{The idea of the algorithm is mentioned in \cite{knu982}
+and attributed to R. W. Floyd; however, it is not known if Floyd was the first
+who discovered the algorithm.} walks forward
 in the graph using two pointers $a$ and $b$.
 Both pointers begin at a node $x$ that
 is the starting node of the graph.
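The two-pointer walk can be sketched on a successor graph as follows (our own names; `succ[x]` is the unique outgoing edge of node `x`, pointer `a` moves one step per round and `b` two steps):

```cpp
#include <utility>
#include <vector>

// Returns {first node of the cycle, cycle length} for the walk that
// starts at node x in the successor graph succ.
std::pair<int,int> findCycle(const std::vector<int>& succ, int x) {
    // Phase 1: advance a by 1 and b by 2 until they meet in the cycle.
    int a = succ[x], b = succ[succ[x]];
    while (a != b) { a = succ[a]; b = succ[succ[b]]; }
    // Phase 2: restart a from x; the pointers meet at the cycle's first node.
    a = x;
    while (a != b) { a = succ[a]; b = succ[b]; }
    int first = a;
    // Phase 3: walk around the cycle once to measure its length.
    int len = 1;
    b = succ[a];
    while (a != b) { b = succ[b]; len++; }
    return {first, len};
}
```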

View File

@@ -368,7 +368,10 @@ performs two depth-first searches.
 \index{2SAT problem}
-Strong connectivity is also linked with the
-\key{2SAT problem} \cite{asp79}.
+Strong connectivity is also linked with the
+\key{2SAT problem}\footnote{The algorithm presented here was
+introduced in \cite{asp79}.
+There is also another well-known linear-time algorithm \cite{eve75}
+that is based on backtracking.}.
 In this problem, we are given a logical formula
 \[
 (a_1 \lor b_1) \land (a_2 \lor b_2) \land \cdots \land (a_m \lor b_m),

View File

@@ -266,14 +266,18 @@ is $3+4+3+1=11$.
 \end{center}
 The idea is to construct a tree traversal array that contains
-three values for each node: (1) the identifier of the node,
-(2) the size of the subtree, and (3) the value of the node.
+three values for each node: the identifier of the node,
+the size of the subtree, and the value of the node.
 For example, the array for the above tree is as follows:
 \begin{center}
 \begin{tikzpicture}[scale=0.7]
 \draw (0,1) grid (9,-2);
+\node[left] at (-1,0.5) {node id};
+\node[left] at (-1,-0.5) {subtree size};
+\node[left] at (-1,-1.5) {node value};
 \node at (0.5,0.5) {$1$};
 \node at (1.5,0.5) {$2$};
 \node at (2.5,0.5) {$6$};
@ -330,6 +334,10 @@ can be found as follows:
\fill[color=lightgray] (4,-1) rectangle (8,-2); \fill[color=lightgray] (4,-1) rectangle (8,-2);
\draw (0,1) grid (9,-2); \draw (0,1) grid (9,-2);
\node[left] at (-1,0.5) {node id};
\node[left] at (-1,-0.5) {subtree size};
\node[left] at (-1,-1.5) {node value};
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$6$};
@ -438,6 +446,10 @@ For example, the following array corresponds to the above tree:
\begin{tikzpicture}[scale=0.7]
\draw (0,1) grid (9,-2);
\node[left] at (-1,0.5) {node id};
\node[left] at (-1,-0.5) {subtree size};
\node[left] at (-1,-1.5) {path sum};
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$6$};
@ -491,6 +503,10 @@ the array changes as follows:
\fill[color=lightgray] (4,-1) rectangle (8,-2);
\draw (0,1) grid (9,-2);
\node[left] at (-1,0.5) {node id};
\node[left] at (-1,-0.5) {subtree size};
\node[left] at (-1,-1.5) {path sum};
\node at (0.5,0.5) {$1$};
\node at (1.5,0.5) {$2$};
\node at (2.5,0.5) {$6$};
@ -562,9 +578,9 @@ is node 2:
\node[draw, circle] (3) at (-2,1) {$2$};
\node[draw, circle] (4) at (0,1) {$3$};
\node[draw, circle] (5) at (2,-1) {$7$};
\node[draw, circle, fill=lightgray] (6) at (-3,-1) {$5$};
\node[draw, circle] (7) at (-1,-1) {$6$};
\node[draw, circle, fill=lightgray] (8) at (-1,-3) {$8$};
\path[draw,thick,-] (1) -- (2);
\path[draw,thick,-] (1) -- (3);
\path[draw,thick,-] (1) -- (4);
@ -572,6 +588,9 @@ is node 2:
\path[draw,thick,-] (3) -- (6);
\path[draw,thick,-] (3) -- (7);
\path[draw,thick,-] (7) -- (8);
\path[draw=red,thick,->,line width=2pt] (6) edge [bend left] (3);
\path[draw=red,thick,->,line width=2pt] (8) edge [bend right=40] (3);
\end{tikzpicture}
\end{center}
@ -583,13 +602,17 @@ finding the lowest common ancestor of two nodes.
One way to solve the problem is to use the fact
that we can efficiently find the $k$th
ancestor of any node in the tree.
Using this, we can divide the problem of
finding the lowest common ancestor into two parts.
We use two pointers that initially point to the
two nodes for which we should find the
lowest common ancestor.
First, we move one of the pointers upwards
so that both nodes are at the same level in the tree.
In the example case, we move from node 8 to node 6,
after which both nodes are at the same level:
\begin{center}
\begin{tikzpicture}[scale=0.9]
@ -599,8 +622,8 @@ ancestor of nodes $5$ and $8$:
\node[draw, circle] (4) at (0,1) {$3$};
\node[draw, circle] (5) at (2,-1) {$7$};
\node[draw, circle,fill=lightgray] (6) at (-3,-1) {$5$};
\node[draw, circle,fill=lightgray] (7) at (-1,-1) {$6$};
\node[draw, circle] (8) at (-1,-3) {$8$};
\path[draw,thick,-] (1) -- (2);
\path[draw,thick,-] (1) -- (3);
\path[draw,thick,-] (1) -- (4);
@ -608,26 +631,30 @@ ancestor of nodes $5$ and $8$:
\path[draw,thick,-] (3) -- (6);
\path[draw,thick,-] (3) -- (7);
\path[draw,thick,-] (7) -- (8);
\path[draw=red,thick,->,line width=2pt] (8) edge [bend right] (7);
\end{tikzpicture}
\end{center}
After this, we determine the minimum number of steps
needed to move both pointers upwards so that
they will point to the same node.
This node is the lowest common ancestor of the nodes.
In the example case, it suffices to move both pointers
one step upwards to node 2,
which is the lowest common ancestor:
\begin{center}
\begin{tikzpicture}[scale=0.9]
\node[draw, circle] (1) at (0,3) {$1$};
\node[draw, circle] (2) at (2,1) {$4$};
\node[draw, circle,fill=lightgray] (3) at (-2,1) {$2$};
\node[draw, circle] (4) at (0,1) {$3$};
\node[draw, circle] (5) at (2,-1) {$7$};
\node[draw, circle] (6) at (-3,-1) {$5$};
\node[draw, circle] (7) at (-1,-1) {$6$};
\node[draw, circle] (8) at (-1,-3) {$8$};
\path[draw,thick,-] (1) -- (2);
\path[draw,thick,-] (1) -- (3);
\path[draw,thick,-] (1) -- (4);
@ -637,20 +664,21 @@ The following picture shows how we move in the tree:
\path[draw,thick,-] (7) -- (8);
\path[draw=red,thick,->,line width=2pt] (6) edge [bend left] (3);
\path[draw=red,thick,->,line width=2pt] (8) edge [bend right] (7);
\path[draw=red,thick,->,line width=2pt] (7) edge [bend right] (3);
\end{tikzpicture}
\end{center}
Since both parts of the algorithm can be performed in
$O(\log n)$ time using precomputed information,
we can find the lowest common ancestor of any two
nodes in $O(\log n)$ time using this technique.
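The two parts can be sketched in code as follows. This is an illustrative sketch, not the book's implementation: the names \texttt{up}, \texttt{depth\_}, \texttt{build}, \texttt{jump} and \texttt{lca} are assumed here, with \texttt{up[j][x]} holding the $2^j$th ancestor of node $x$ (0 when it does not exist).

```cpp
#include <vector>
#include <algorithm>
using namespace std;

const int LOG = 20;
vector<vector<int>> up;   // up[j][x] = 2^j-th ancestor of x (0 = none)
vector<int> depth_;

// Build the ancestor table from parent pointers (par[root] = 0).
// Nodes are numbered 1..n and every parent appears before its children.
void build(const vector<int>& par) {
    int n = par.size() - 1;
    up.assign(LOG, vector<int>(n + 1, 0));
    depth_.assign(n + 1, 0);
    for (int x = 1; x <= n; x++) {
        up[0][x] = par[x];
        depth_[x] = par[x] ? depth_[par[x]] + 1 : 0;
    }
    for (int j = 1; j < LOG; j++)
        for (int x = 1; x <= n; x++)
            up[j][x] = up[j - 1][up[j - 1][x]];
}

int jump(int x, int k) {  // the k-th ancestor of x
    for (int j = 0; j < LOG && x; j++)
        if (k & (1 << j)) x = up[j][x];
    return x;
}

int lca(int a, int b) {
    // part 1: move the deeper pointer to the same level
    if (depth_[a] < depth_[b]) swap(a, b);
    a = jump(a, depth_[a] - depth_[b]);
    if (a == b) return a;
    // part 2: lift both pointers as long as they point to different nodes
    for (int j = LOG - 1; j >= 0; j--)
        if (up[j][a] != up[j][b]) { a = up[j][a]; b = up[j][b]; }
    return up[0][a];
}
```

Both the level adjustment and the final lift take $O(\log n)$ time after the $O(n \log n)$ preprocessing in \texttt{build}.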
\subsubsection{Method 2}
Another way to solve the problem is based on
a tree traversal array\footnote{This lowest common ancestor algorithm is based on \cite{ben00}.
This technique is sometimes called the \index{Euler tour technique}
\key{Euler tour technique} \cite{tar84}.}.
Once again, the idea is to traverse the nodes
using a depth-first search:
@ -689,23 +717,26 @@ using a depth-first search:
\end{tikzpicture}
\end{center}
However, we use a slightly different tree
traversal array than before:
we add each node to the array \emph{always}
when the depth-first search walks through the node,
and not only at the first visit.
Hence, a node that has $k$ children appears $k+1$ times
in the array and there are a total of $2n-1$
nodes in the array.
We store two values in the array:
the identifier of the node and the level of the
node in the tree.
The following array corresponds to the above tree:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\node[left] at (-1,1.5) {node id};
\node[left] at (-1,0.5) {level};
\draw (0,1) grid (15,2);
%\node at (-1.1,1.5) {\texttt{node}};
\node at (0.5,1.5) {$1$};
@ -770,6 +801,10 @@ can be found as follows:
\begin{center}
\begin{tikzpicture}[scale=0.7]
\node[left] at (-1,1.5) {node id};
\node[left] at (-1,0.5) {level};
\fill[color=lightgray] (2,1) rectangle (3,2);
\fill[color=lightgray] (5,1) rectangle (6,2);
\fill[color=lightgray] (2,0) rectangle (6,1);


@ -22,7 +22,9 @@ problem and no efficient algorithm is known for solving the problem.
\index{Eulerian path}
An \key{Eulerian path}\footnote{L. Euler (1707--1783) studied such paths in 1736
when he solved the famous Königsberg bridge problem.
This was the birth of graph theory.} is a path
that goes exactly once through each edge in the graph.
For example, the graph
\begin{center}
@ -222,7 +224,8 @@ from node 2 to node 5:
\index{Hierholzer's algorithm}
\key{Hierholzer's algorithm}\footnote{The algorithm was published
in 1873 after Hierholzer's death \cite{hie73}.} is an efficient
method for constructing
an Eulerian circuit.
The algorithm consists of several rounds,
@ -395,7 +398,9 @@ so we have successfully constructed an Eulerian circuit.
\index{Hamiltonian path}
A \key{Hamiltonian path}
%\footnote{W. R. Hamilton (1805--1865) was an Irish mathematician.}
is a path
that visits each node in the graph exactly once.
For example, the graph
\begin{center}
@ -481,12 +486,12 @@ Also stronger results have been achieved:
\begin{itemize}
\item
\index{Dirac's theorem}
\key{Dirac's theorem}: %\cite{dir52}
If the degree of each node is at least $n/2$,
the graph contains a Hamiltonian path.
\item
\index{Ore's theorem}
\key{Ore's theorem}: %\cite{ore60}
If the sum of degrees of each non-adjacent pair of nodes
is at least $n$,
the graph contains a Hamiltonian path.
@ -525,7 +530,9 @@ It is possible to implement this solution in $O(2^n n^2)$ time.
\index{De Bruijn sequence}
A \key{De Bruijn sequence}
%\footnote{N. G. de Bruijn (1918--2012) was a Dutch mathematician.}
is a string that contains
every string of length $n$
exactly once as a substring, for a fixed
alphabet of $k$ characters.
@ -546,7 +553,7 @@ and each edge adds one character to the string.
The following graph corresponds to the above example:
\begin{center}
\begin{tikzpicture}[scale=0.8]
\node[draw, circle] (00) at (-3,0) {00};
\node[draw, circle] (11) at (3,0) {11};
\node[draw, circle] (01) at (0,2) {01};
@ -628,12 +635,13 @@ The search can be made more efficient by using
\key{heuristics} that attempt to guide the knight so that
a complete tour will be found quickly.
\subsubsection{Warnsdorf's rule}
\index{heuristic}
\index{Warnsdorf's rule}
\key{Warnsdorf's rule}\footnote{This heuristic was proposed
in Warnsdorf's book \cite{war23} in 1823.} is a simple and effective heuristic
for finding a knight's tour.
Using the rule, it is possible to efficiently construct a tour
even on a large board.
@ -655,7 +663,7 @@ possible squares to which the knight can move:
\node at (3.5,1.5) {$d$};
\end{tikzpicture}
\end{center}
In this situation, Warnsdorf's rule moves the knight to square $a$,
because after this choice, there is only a single possible move.
The other choices would move the knight to squares where
there would be three moves available.
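A minimal greedy implementation of Warnsdorf's rule might look as follows. The function names are illustrative (not from the book), and since this is a heuristic, it can get stuck on some boards and starting squares, in which case the function returns \texttt{false}.

```cpp
#include <vector>
using namespace std;

int dx[] = {1, 1, -1, -1, 2, 2, -2, -2};
int dy[] = {2, -2, 2, -2, 1, -1, 1, -1};

// number of unvisited squares reachable from (x,y)
int countMoves(vector<vector<int>>& board, int x, int y) {
    int n = board.size(), c = 0;
    for (int k = 0; k < 8; k++) {
        int nx = x + dx[k], ny = y + dy[k];
        if (nx >= 0 && nx < n && ny >= 0 && ny < n && board[nx][ny] == 0) c++;
    }
    return c;
}

// Tries to build a full tour on an n x n board starting at (x,y):
// always move to the square with the fewest onward moves.
bool knightTour(int n, int x, int y) {
    vector<vector<int>> board(n, vector<int>(n, 0));
    board[x][y] = 1;
    for (int step = 2; step <= n * n; step++) {
        int best = -1, bestCount = 9;
        for (int k = 0; k < 8; k++) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= n || ny < 0 || ny >= n || board[nx][ny]) continue;
            int c = countMoves(board, nx, ny);
            if (c < bestCount) { bestCount = c; best = k; }
        }
        if (best == -1) return false;   // stuck: the heuristic failed
        x += dx[best]; y += dy[best];
        board[x][y] = step;
    }
    return true;
}
```

For example, on a $5 \times 5$ board this greedy rule finds a tour from a corner square, while on a $4 \times 4$ board no knight's tour exists at all, so the function necessarily returns \texttt{false}.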


@ -24,7 +24,7 @@ z = \sqrt[3]{3}.\\
However, nobody knows if there are any three
\emph{integers} $x$, $y$ and $z$
that would satisfy the equation, but this
is an open problem in number theory \cite{bec07}.
In this chapter, we will focus on basic concepts
and algorithms in number theory.
@ -205,7 +205,9 @@ so the result of the function is $[2,2,2,3]$.
\index{sieve of Eratosthenes}
The \key{sieve of Eratosthenes}
%\footnote{Eratosthenes (c. 276 BC -- c. 194 BC) was a Greek mathematician.}
is a preprocessing
algorithm that builds an array using which we
can efficiently check if a given number between $2 \ldots n$
is prime and, if it is not, find one prime factor of the number.
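A sketch of one possible variant of the sieve (not necessarily the book's exact code, and the name \texttt{buildSieve} is assumed): after building, \texttt{sieve[x]} is 0 when $x$ is prime, and otherwise some prime factor of $x$.

```cpp
#include <vector>
using namespace std;

vector<int> buildSieve(int n) {
    vector<int> sieve(n + 1, 0);
    for (int x = 2; x <= n; x++) {
        if (sieve[x]) continue;              // x is composite, already marked
        for (long long u = 2LL * x; u <= n; u += x)
            sieve[u] = x;                    // x is a prime factor of u
    }
    return sieve;
}
```

With this array, a number can be factorized by repeatedly dividing by \texttt{sieve[x]} until the value becomes prime.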
@ -327,7 +329,8 @@ The greatest common divisor and the least common multiple
are connected as follows:
\[\textrm{lcm}(a,b)=\frac{ab}{\textrm{gcd}(a,b)}\]
\key{Euclid's algorithm}\footnote{Euclid was a Greek mathematician who
lived in about 300 BC. This is perhaps the first known algorithm in history.} provides an efficient way
to find the greatest common divisor of two numbers.
The algorithm is based on the following formula:
\begin{equation*}
@ -355,6 +358,7 @@ For example,
Numbers $a$ and $b$ are \key{coprime}
if $\textrm{gcd}(a,b)=1$.
\key{Euler's totient function} $\varphi(n)$
%\footnote{Euler presented this function in 1763.}
gives the number of coprime numbers to $n$
between $1$ and $n$.
For example, $\varphi(12)=4$,
@ -432,12 +436,16 @@ int modpow(int x, int n, int m) {
\index{Fermat's theorem}
\index{Euler's theorem}
\key{Fermat's theorem}
%\footnote{Fermat discovered this theorem in 1640.}
states that
\[x^{m-1} \bmod m = 1\]
when $m$ is prime and $x$ and $m$ are coprime.
This also yields
\[x^k \bmod m = x^{k \bmod (m-1)} \bmod m.\]
More generally, \key{Euler's theorem}
%\footnote{Euler published this theorem in 1763.}
states that
\[x^{\varphi(m)} \bmod m = 1\]
when $x$ and $m$ are coprime.
Fermat's theorem follows from Euler's theorem,
@ -517,7 +525,9 @@ cout << x*x << "\n"; // 2537071545
\index{Diophantine equation}
A \key{Diophantine equation}
%\footnote{Diophantus of Alexandria was a Greek mathematician who lived in the 3rd century.}
is an equation of the form
\[ ax + by = c, \]
where $a$, $b$ and $c$ are constants
and we should find the values of $x$ and $y$.
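One common tool for such equations is the extended Euclidean algorithm. The following is a hedged sketch (the function name is assumed; it requires C++17 for structured bindings): it returns a triple $(g, x, y)$ with $ax + by = g = \gcd(a,b)$, and a solution of $ax + by = c$ exists exactly when $c$ is divisible by $g$, in which case $(xc/g, yc/g)$ is one solution.

```cpp
#include <tuple>
using namespace std;

// returns (g, x, y) such that a*x + b*y = g = gcd(a, b)
tuple<long long, long long, long long> extEuclid(long long a, long long b) {
    if (b == 0) return {a, 1, 0};
    auto [g, x, y] = extEuclid(b, a % b);
    // gcd(a, b) = gcd(b, a mod b); back-substitute the coefficients
    return {g, y, x - (a / b) * y};
}
```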
@ -637,7 +647,9 @@ are solutions.
\index{Lagrange's theorem}
\key{Lagrange's theorem}
%\footnote{J.-L. Lagrange (1736--1813) was an Italian mathematician.}
states that every positive integer
can be represented as a sum of four squares, i.e.,
$a^2+b^2+c^2+d^2$.
For example, the number 123 can be represented
@ -648,7 +660,9 @@ as the sum $8^2+5^2+5^2+3^2$.
\index{Zeckendorf's theorem}
\index{Fibonacci number}
\key{Zeckendorf's theorem}
%\footnote{E. Zeckendorf published the theorem in 1972 \cite{zec72}; however, this was not a new result.}
states that every
positive integer has a unique representation
as a sum of Fibonacci numbers such that
no two numbers are equal or consecutive
@ -689,7 +703,9 @@ produces the smallest Pythagorean triple
\index{Wilson's theorem}
\key{Wilson's theorem}
%\footnote{J. Wilson (1741--1793) was an English mathematician.}
states that a number $n$
is prime exactly when
\[(n-1)! \bmod n = n-1.\]
For example, the number 11 is prime, because


@ -342,7 +342,9 @@ corresponds to the binomial coefficient formula.
\index{Catalan number}
The \key{Catalan number}
%\footnote{E. C. Catalan (1814--1894) was a Belgian mathematician.}
$C_n$ equals the
number of valid
parenthesis expressions that consist of
$n$ left parentheses and $n$ right parentheses.
@ -678,7 +680,9 @@ elements should be changed.
\index{Burnside's lemma}
\key{Burnside's lemma}
%\footnote{Actually, Burnside did not discover this lemma; he only mentioned it in his book \cite{bur97}.}
can be used to count
the number of combinations so that
only one representative is counted
for each group of symmetric combinations.
@ -764,7 +768,10 @@ with 3 colors is
\index{Cayley's formula}
\key{Cayley's formula}
% \footnote{While the formula is named after A. Cayley,
% who studied it in 1889, it was discovered earlier by C. W. Borchardt in 1860.}
states that
there are $n^{n-2}$ labeled trees
that contain $n$ nodes.
The nodes are labeled $1,2,\ldots,n$,
@ -827,7 +834,9 @@ be derived using Prüfer codes.
\index{Prüfer code}
A \key{Prüfer code}
%\footnote{In 1918, H. Prüfer proved Cayley's theorem using Prüfer codes \cite{pru18}.}
is a sequence of
$n-2$ numbers that describes a labeled tree.
The code is constructed by following a process
that removes $n-2$ leaves from the tree.


@ -245,8 +245,9 @@ two $n \times n$ matrices
in $O(n^3)$ time.
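The standard $O(n^3)$ multiplication can be sketched as follows (an illustrative implementation, not taken from the book; the type alias and function name are assumed):

```cpp
#include <vector>
using namespace std;
typedef vector<vector<long long>> Matrix;

// multiplies two n x n matrices in O(n^3) time
Matrix mul(const Matrix& a, const Matrix& b) {
    int n = a.size();
    Matrix c(n, vector<long long>(n, 0));
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++)       // i-k-j loop order helps cache locality
            for (int j = 0; j < n; j++)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}
```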
There are also more efficient algorithms
for matrix multiplication\footnote{The first such
algorithm was Strassen's algorithm,
published in 1969 \cite{str69},
whose time complexity is $O(n^{2.80735})$;
the best current algorithm
works in $O(n^{2.37286})$ time \cite{gal14}.},
but they are mostly of theoretical interest
@ -749,7 +750,9 @@ $2 \rightarrow 1 \rightarrow 4 \rightarrow 2 \rightarrow 5$.
\index{Kirchhoff's theorem}
\index{spanning tree}
\key{Kirchhoff's theorem}
%\footnote{G. R. Kirchhoff (1824--1887) was a German physicist.}
provides a way
to calculate the number of spanning trees
of a graph as a determinant of a special matrix.
For example, the graph


@ -359,7 +359,10 @@ The expected value for $X$ in a geometric distribution is
\index{Markov chain}
A \key{Markov chain}
% \footnote{A. A. Markov (1856--1922)
% was a Russian mathematician.}
is a random process
that consists of states and transitions between them.
For each state, we know the probabilities
for moving to other states.
@ -514,7 +517,11 @@ just to find one element?
It turns out that we can find order statistics
using a randomized algorithm without sorting the array.
The algorithm, called \key{quickselect}\footnote{In 1961,
C. A. R. Hoare published two algorithms that
are efficient on average: \index{quicksort} \index{quickselect}
\key{quicksort} \cite{hoa61a} for sorting arrays and
\key{quickselect} \cite{hoa61b} for finding order statistics.}, is a Las Vegas algorithm:
its running time is usually $O(n)$
but $O(n^2)$ in the worst case.
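A possible sketch of quickselect with a random pivot (illustrative code, not the book's; here $k$ is 0-indexed, so $k=0$ gives the smallest element):

```cpp
#include <vector>
#include <algorithm>
#include <cstdlib>
using namespace std;

// returns the k-th smallest element of the array (k is 0-indexed)
int quickselect(vector<int> a, int k) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo < hi) {
        swap(a[lo + rand() % (hi - lo + 1)], a[hi]);  // random pivot to the end
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)                 // Lomuto partition
            if (a[j] < pivot) swap(a[i++], a[j]);
        swap(a[i], a[hi]);                            // pivot lands at index i
        if (k == i) return a[i];
        if (k < i) hi = i - 1; else lo = i + 1;       // recurse into one side only
    }
    return a[lo];
}
```

Since only one side of the partition is processed further, the expected amount of work is $n + n/2 + n/4 + \cdots = O(n)$.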
@ -560,7 +567,9 @@ but one could hope that verifying the
answer would be easier than to calculate it from scratch.
It turns out that we can solve the problem
using a Monte Carlo algorithm\footnote{R. M. Freivalds published
this algorithm in 1977 \cite{fre77}, and it is sometimes
called \index{Freivalds' algorithm} \key{Freivalds' algorithm}.} whose
time complexity is only $O(n^2)$.
The idea is simple: we choose a random vector
$X$ of $n$ elements, and calculate the matrices


@ -248,7 +248,8 @@ and this is always the final state.
It turns out that we can easily classify
any nim state by calculating
the \key{nim sum} $x_1 \oplus x_2 \oplus \cdots \oplus x_n$,
where $\oplus$ is the xor operation\footnote{The optimal strategy
for nim was published in 1901 by C. L. Bouton \cite{bou01}.}.
The states whose nim sum is 0 are losing states,
and all other states are winning states.
For example, the nim sum for
@ -367,7 +368,8 @@ so the nim sum is not 0.
\index{Sprague--Grundy theorem}
The \key{Sprague--Grundy theorem}\footnote{The theorem was discovered
independently by R. Sprague \cite{spr35} and P. M. Grundy \cite{gru39}.} generalizes the
strategy used in nim to all games that fulfil
the following requirements:


@ -42,6 +42,7 @@ After this, it suffices to sum the areas
of the triangles.
The area of a triangle can be calculated,
for example, using \key{Heron's formula}
%\footnote{Heron of Alexandria (c. 10--70) was a Greek mathematician.}
\[ \sqrt{s (s-a) (s-b) (s-c)},\]
where $a$, $b$ and $c$ are the lengths
of the triangle's sides and
@ -500,7 +501,8 @@ so $b$ is outside the polygon.
\section{Polygon area}
A general formula for calculating the area
of a polygon\footnote{This formula is sometimes called the
\index{shoelace formula} \key{shoelace formula}.} is
\[\frac{1}{2} |\sum_{i=1}^{n-1} (p_i \times p_{i+1})| =
\frac{1}{2} |\sum_{i=1}^{n-1} (x_i y_{i+1} - x_{i+1} y_i)|, \]
where the vertices are


@ -27,10 +27,10 @@ For example, the table
\begin{tabular}{ccc}
person & arrival time & leaving time \\
\hline
John & 10 & 15 \\
Maria & 6 & 12 \\
Peter & 14 & 16 \\
Lisa & 5 & 13 \\
\end{tabular}
\end{center}
corresponds to the following events:
@ -51,10 +51,10 @@ corresponds to the following events:
\draw[fill] (5,-5.5) circle [radius=0.05];
\draw[fill] (13,-5.5) circle [radius=0.05];
\node at (2,-1) {John};
\node at (2,-2.5) {Maria};
\node at (2,-4) {Peter};
\node at (2,-5.5) {Lisa};
\end{tikzpicture}
\end{center}
We go through the events from left to right
@ -85,10 +85,10 @@ In the example, the events are processed as follows:
\draw[fill] (5,-5.5) circle [radius=0.05];
\draw[fill] (13,-5.5) circle [radius=0.05];
\node at (2,-1) {John};
\node at (2,-2.5) {Maria};
\node at (2,-4) {Peter};
\node at (2,-5.5) {Lisa};
\path[draw,dashed] (10,0)--(10,-6.5);
\path[draw,dashed] (15,0)--(15,-6.5);
@ -122,7 +122,7 @@ The symbols $+$ and $-$ indicate whether the
value of the counter increases or decreases,
and the value of the counter is shown below.
The maximum value of the counter is 3
between John's arrival time and Maria's leaving time.
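The whole sweep can be sketched as follows (an illustrative implementation; the function name \texttt{maxOverlap} is assumed). Each interval produces an arrival event $(+1)$ and a leaving event $(-1)$, and after sorting, a counter tracks how many people are present at once.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// returns the maximum number of simultaneously present people
int maxOverlap(vector<pair<int,int>> times) {
    vector<pair<int,int>> events;
    for (auto& p : times) {
        events.push_back({p.first, +1});     // arrival
        events.push_back({p.second, -1});    // leaving
    }
    // at equal times, -1 sorts before +1, so a leaving person
    // does not overlap with someone arriving at the same moment
    sort(events.begin(), events.end());
    int count = 0, best = 0;
    for (auto& e : events) {
        count += e.second;
        best = max(best, count);
    }
    return best;
}
```

On the example data of this section the function returns 3, matching the picture.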
The running time of the algorithm is $O(n \log n)$,
because sorting the events takes $O(n \log n)$ time
@ -270,7 +270,11 @@ we should find the following points:
This is another example of a problem
that can be solved in $O(n \log n)$ time
using a sweep line algorithm\footnote{Besides this approach,
there is also an
$O(n \log n)$ time divide-and-conquer algorithm \cite{sha75}
that divides the points into two sets and recursively
solves the problem for both sets.}.
We go through the points from left to right
and maintain a value $d$: the minimum distance
between two points seen so far.
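The sweep can be sketched as follows (illustrative code under my own naming, not the book's implementation): points within x distance $d$ of the sweep line are kept in a set ordered by y coordinate, and each new point is compared only against points whose y distance is less than $d$. Squared distances are used to avoid floating point issues.

```cpp
#include <vector>
#include <set>
#include <algorithm>
#include <cmath>
#include <climits>
using namespace std;
typedef pair<long long,long long> P;   // (x, y)

long long dist2(P a, P b) {            // squared Euclidean distance
    long long dx = a.first - b.first, dy = a.second - b.second;
    return dx * dx + dy * dy;
}

// returns the squared distance of the closest pair of points
long long closestPair2(vector<P> pts) {
    sort(pts.begin(), pts.end());      // process points from left to right
    set<P> box;                        // stored as (y, x): points near the sweep line
    long long best = LLONG_MAX;
    size_t left = 0;
    for (size_t i = 0; i < pts.size(); i++) {
        long long d = (long long)sqrt((double)best) + 1;
        // drop points whose x distance to the sweep line is at least d
        while (left < i && pts[i].first - pts[left].first >= d) {
            box.erase({pts[left].second, pts[left].first});
            left++;
        }
        // compare only against points whose y distance is less than d
        auto lo = box.lower_bound({pts[i].second - d, LLONG_MIN});
        auto hi = box.upper_bound({pts[i].second + d, LLONG_MAX});
        for (auto it = lo; it != hi; ++it)
            best = min(best, dist2({it->second, it->first}, pts[i]));
        box.insert({pts[i].second, pts[i].first});
    }
    return best;
}
```

Since each point enters and leaves the set once, and only a constant number of candidates can lie in the $d \times 2d$ comparison box, the total running time is $O(n \log n)$.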
@ -396,21 +400,20 @@ an easy way to
construct the convex hull for a set of points
in $O(n \log n)$ time.
The algorithm constructs the convex hull
in two parts:
first the upper hull and then the lower hull.
Both parts are similar, so we can focus on
constructing the upper hull.
First, we sort the points primarily according to
x coordinates and secondarily according to y coordinates.
After this, we go through the points and
add each point to the hull.
Always after adding a point to the hull,
we make sure that the last line segment
in the hull does not turn left.
As long as it turns left, we repeatedly remove the
second last point from the hull.
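The construction of the upper hull can be sketched as follows (illustrative code, not the book's; the lower hull is built the same way over the points in reverse order). The cross product decides the turn direction: a positive value means a left turn.

```cpp
#include <vector>
#include <algorithm>
using namespace std;
typedef pair<long long,long long> P;

// cross product of (b-a) and (c-a): positive when a->b->c turns left
long long cross(P a, P b, P c) {
    return (b.first - a.first) * (c.second - a.second)
         - (b.second - a.second) * (c.first - a.first);
}

vector<P> upperHull(vector<P> pts) {
    sort(pts.begin(), pts.end());   // by x, then by y
    vector<P> hull;
    for (auto& p : pts) {
        // remove the second last point while the last segment turns left
        while (hull.size() >= 2 &&
               cross(hull[hull.size() - 2], hull.back(), p) > 0)
            hull.pop_back();
        hull.push_back(p);
    }
    return hull;
}
```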
The following pictures show how
Andrew's algorithm works:

list.tex

@ -25,6 +25,11 @@
On a routing problem.
\emph{Quarterly of Applied Mathematics}, 16(1):87--90, 1958.
\bibitem{bec07}
M. Beck, E. Pine, W. Tarrat and K. Y. Jensen.
New integer representations as the sum of three cubes.
\emph{Mathematics of Computation}, 76(259):1683--1690, 2007.
\bibitem{ben00}
M. A. Bender and M. Farach-Colton.
The LCA problem revisited. In
@ -33,17 +38,46 @@
\bibitem{ben86}
J. Bentley.
\emph{Programming Pearls}.
Addison-Wesley, 1999 (2nd edition).
\bibitem{bou01}
C. L. Bouton.
Nim, a game with a complete mathematical theory.
\emph{Annals of Mathematics}, 3(1/4):35--39, 1901.
% \bibitem{bur97}
% W. Burnside.
% \emph{Theory of Groups of Finite Order},
% Cambridge University Press, 1897.
\bibitem{cod15}
Codeforces: On ``Mo's algorithm'',
\url{http://codeforces.com/blog/entry/20032}
\bibitem{cor09}
T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein.
\emph{Introduction to Algorithms}, MIT Press, 2009 (3rd edition).
\bibitem{dij59}
E. W. Dijkstra.
A note on two problems in connexion with graphs.
\emph{Numerische Mathematik}, 1(1):269--271, 1959.
\bibitem{dik12}
K. Diks et al.
\emph{Looking for a Challenge? The Ultimate Problem Set from
the University of Warsaw Programming Competitions}, University of Warsaw, 2012.
% \bibitem{dil50}
% R. P. Dilworth.
% A decomposition theorem for partially ordered sets.
% \emph{Annals of Mathematics}, 51(1):161--166, 1950.
% \bibitem{dir52}
% G. A. Dirac.
% Some theorems on abstract graphs.
% \emph{Proceedings of the London Mathematical Society}, 3(1):69--81, 1952.
\bibitem{edm65}
J. Edmonds.
Paths, trees, and flowers.
@ -54,6 +88,11 @@
Theoretical improvements in algorithmic efficiency for network flow problems.
\emph{Journal of the ACM}, 19(2):248--264, 1972.
\bibitem{eve75}
S. Even, A. Itai and A. Shamir.
On the complexity of time table and multi-commodity flow problems.
\emph{16th Annual Symposium on Foundations of Computer Science}, 184--193, 1975.
\bibitem{fan94}
D. Fanding.
A faster algorithm for shortest-path -- SPFA.
@ -69,21 +108,26 @@
Theoretical and practical improvements on the RMQ-problem, with applications to LCA and LCE.
In \emph{Annual Symposium on Combinatorial Pattern Matching}, 36--48, 2006.
\bibitem{fis11}
J. Fischer and V. Heun.
Space-efficient preprocessing schemes for range minimum queries on static arrays.
\emph{SIAM Journal on Computing}, 40(2):465--492, 2011.
\bibitem{flo62}
R. W. Floyd.
Algorithm 97: shortest path.
\emph{Communications of the ACM}, 5(6):345, 1962.
\bibitem{for56a}
L. R. Ford.
Network flow theory.
RAND Corporation, Santa Monica, California, 1956.
\bibitem{for56}
L. R. Ford and D. R. Fulkerson.
Maximal flow through a network.
\emph{Canadian Journal of Mathematics}, 8(3):399--404, 1956.
\bibitem{fre77}
R. Freivalds.
Probabilistic machines can use less running time.
In \emph{IFIP congress}, 839--842, 1977.
\bibitem{gal14}
F. Le Gall.
Powers of tensors and fast matrix multiplication.
@ -106,13 +150,58 @@
\emph{2014 IEEE 55th Annual Symposium on Foundations of Computer Science},
621--630, 2014.
\bibitem{gru39}
P. M. Grundy.
Mathematics and games.
\emph{Eureka}, 2(5):6--8, 1939.
\bibitem{gus97}
D. Gusfield.
\emph{Algorithms on Strings, Trees and Sequences:
Computer Science and Computational Biology},
Cambridge University Press, 1997.
% \bibitem{hal35}
% P. Hall.
% On representatives of subsets.
% \emph{Journal London Mathematical Society} 10(1):26--30, 1935.
\bibitem{hal13}
S. Halim and F. Halim.
\emph{Competitive Programming 3: The New Lower Bound of Programming Contests}, 2013.
\bibitem{hel62}
M. Held and R. M. Karp.
A dynamic programming approach to sequencing problems.
\emph{Journal of the Society for Industrial and Applied Mathematics}, 10(1):196--210, 1962.
\bibitem{hie73}
C. Hierholzer and C. Wiener.
Über die Möglichkeit, einen Linienzug ohne Wiederholung und ohne Unterbrechung zu umfahren.
\emph{Mathematische Annalen}, 6(1), 30--32, 1873.
\bibitem{hoa61a}
C. A. R. Hoare.
Algorithm 64: Quicksort.
\emph{Communications of the ACM}, 4(7):321, 1961.
\bibitem{hoa61b}
C. A. R. Hoare.
Algorithm 65: Find.
\emph{Communications of the ACM}, 4(7):321--322, 1961.
\bibitem{hop71}
J. E. Hopcroft and J. D. Ullman.
A linear list merging algorithm.
Technical report, Cornell University, 1971.
\bibitem{hor74}
E. Horowitz and S. Sahni.
Computing partitions with applications to the knapsack problem.
\emph{Journal of the ACM}, 21(2):277--292, 1974.
\bibitem{huf52}
D. A. Huffman.
A method for the construction of minimum-redundancy codes.
\emph{Proceedings of the IRE}, 40(9):1098--1101, 1952.
@ -125,44 +214,140 @@
Efficient randomized pattern-matching algorithms.
\emph{IBM Journal of Research and Development}, 31(2):249--260, 1987.
\bibitem{kle05}
J. Kleinberg and É. Tardos.
\emph{Algorithm Design}, Pearson, 2005.
% \bibitem{kas61}
% P. W. Kasteleyn.
% The statistics of dimers on a lattice: I. The number of dimer arrangements on a quadratic lattice.
% \emph{Physica}, 27(12):1209--1225, 1961.
\bibitem{knu982}
D. E. Knuth.
\emph{The Art of Computer Programming. Volume 2: Seminumerical Algorithms}, Addison-Wesley, 1998 (3rd edition).
\bibitem{knu983}
D. E. Knuth.
\emph{The Art of Computer Programming. Volume 3: Sorting and Searching}, Addison-Wesley, 1998 (2nd edition).
% \bibitem{kon31}
% D. Kőnig.
% Gráfok és mátrixok.
% \emph{Matematikai és Fizikai Lapok}, 38(1):116--119, 1931.
\bibitem{kru56}
J. B. Kruskal.
On the shortest spanning subtree of a graph and the traveling salesman problem.
\emph{Proceedings of the American Mathematical Society}, 7(1):48--50, 1956.
\bibitem{lev66}
V. I. Levenshtein.
Binary codes capable of correcting deletions, insertions, and reversals.
\emph{Soviet physics doklady}, 10(8):707--710, 1966.
\bibitem{mai84}
M. G. Main and R. J. Lorentz.
An $O(n \log n)$ algorithm for finding all repetitions in a string.
\emph{Journal of Algorithms}, 5(3):422--432, 1984.
% \bibitem{ore60}
% Ø. Ore.
% Note on Hamilton circuits.
% \emph{The American Mathematical Monthly}, 67(1):55, 1960.
\bibitem{pac13}
J. Pachocki and J. Radoszewski.
Where to use and how not to use polynomial string hashing.
\emph{Olympiads in Informatics}, 7(1):90--100, 2013.
% \bibitem{pic99}
% G. Pick.
% Geometrisches zur Zahlenlehre.
% \emph{Sitzungsberichte des deutschen naturwissenschaftlich-medicinischen Vereines
% für Böhmen "Lotos" in Prag. (Neue Folge)}, 19:311--319, 1899.
\bibitem{pea05}
D. Pearson.
A polynomial-time algorithm for the change-making problem.
\emph{Operations Research Letters}, 33(3):231--234, 2005.
\bibitem{pri57}
R. C. Prim.
Shortest connection networks and some generalizations.
\emph{Bell System Technical Journal}, 36(6):1389--1401, 1957.
% \bibitem{pru18}
% H. Prüfer.
% Neuer Beweis eines Satzes über Permutationen.
% \emph{Arch. Math. Phys}, 27:742--744, 1918.
\bibitem{q27}
27-Queens Puzzle: Massively Parallel Enumeration and Solution Counting.
\url{https://github.com/preusser/q27}
\bibitem{sha75}
M. I. Shamos and D. Hoey.
Closest-point problems.
\emph{16th Annual Symposium on Foundations of Computer Science}, 151--162, 1975.
\bibitem{sha81}
M. Sharir.
A strong-connectivity algorithm and its applications in data flow analysis.
\emph{Computers \& Mathematics with Applications}, 7(1):67--72, 1981.
\bibitem{ski08}
S. S. Skiena.
\emph{The Algorithm Design Manual}, Springer, 2008 (2nd edition).
\bibitem{ski03}
S. S. Skiena and M. A. Revilla.
\emph{Programming Challenges: The Programming Contest Training Manual},
Springer, 2003.
\bibitem{spr35}
R. Sprague.
Über mathematische Kampfspiele.
\emph{Tohoku Mathematical Journal}, 41:438--444, 1935.
\bibitem{sta06}
P. Stańczyk.
\emph{Algorytmika praktyczna w konkursach Informatycznych},
MSc thesis, University of Warsaw, 2006.
\bibitem{str69}
V. Strassen.
Gaussian elimination is not optimal.
\emph{Numerische Mathematik}, 13(4):354--356, 1969.
\bibitem{tar75}
R. E. Tarjan.
Efficiency of a good but not linear set union algorithm.
\emph{Journal of the ACM}, 22(2):215--225, 1975.
\bibitem{tar84}
R. E. Tarjan and U. Vishkin.
Finding biconnected components and computing tree functions in logarithmic parallel time.
\emph{25th Annual Symposium on Foundations of Computer Science}, 12--20, 1984.
% \bibitem{tem61}
% H. N. V. Temperley and M. E. Fisher.
% Dimer problem in statistical mechanics -- an exact result.
% \emph{Philosophical Magazine}, 6(68):1061--1063, 1961.
\bibitem{war23}
H. C. von Warnsdorf.
\emph{Des Rösselsprunges einfachste und allgemeinste Lösung}.
Schmalkalden, 1823.
\bibitem{war62}
S. Warshall.
A theorem on boolean matrices.
\emph{Journal of the ACM}, 9(1):11--12, 1962.
% \bibitem{zec72}
% E. Zeckendorf.
% Représentation des nombres naturels par une somme de nombres de Fibonacci ou de nombres de Lucas.
% \emph{Bull. Soc. Roy. Sci. Liege}, 41:179--182, 1972.
\end{thebibliography}
@ -12,7 +12,7 @@ The book is especially intended for
students who want to learn algorithms and
possibly participate in
the International Olympiad in Informatics (IOI) or
in the International Collegiate Programming Contest (ICPC).
Of course, the book is also suitable for
anybody else interested in competitive programming.