Improve grammar and language style in chapter 2

Roope Salmi 2017-02-23 03:07:40 +02:00
parent 1bf55961f6
commit dc4e33512a
1 changed file with 20 additions and 20 deletions


@@ -16,7 +16,7 @@ for some input.
 The idea is to represent the efficiency
 as an function whose parameter is the size of the input.
 By calculating the time complexity,
-we can find out whether the algorithm is good enough
+we can find out whether the algorithm is fast enough
 without implementing it.
 \section{Calculation rules}
@@ -97,7 +97,7 @@ for (int i = 1; i <= n; i++) {
 \subsubsection*{Phases}
-If the code consists of consecutive phases,
+If the algorithm consists of consecutive phases,
 the total time complexity is the largest
 time complexity of a single phase.
 The reason for this is that the slowest
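The phases rule changed in this hunk can be illustrated with a small sketch (not part of the commit; the function name and operation counter are invented for illustration):

```cpp
#include <cassert>

// Illustrative only: two consecutive phases, O(n) followed by O(n^2).
// The total work is n + n^2 operations, so the O(n^2) phase dominates
// and the total time complexity is O(n^2).
long long phaseOperations(int n) {
    long long ops = 0;
    for (int i = 1; i <= n; i++) ops++;        // phase 1: O(n)
    for (int i = 1; i <= n; i++)               // phase 2: O(n^2)
        for (int j = 1; j <= n; j++) ops++;
    return ops;
}
```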
@@ -211,15 +211,15 @@ $n$ must be divided by 2 to get 1.
 A \key{square root algorithm} is slower than
 $O(\log n)$ but faster than $O(n)$.
 A special property of square roots is that
-$\sqrt n = n/\sqrt n$, so the square root $\sqrt n$ lies
-in some sense in the middle of the input.
+$\sqrt n = n/\sqrt n$, so the square root $\sqrt n$ lies,
+in some sense, in the middle of the input.
 \item[$O(n)$]
 \index{linear algorithm}
 A \key{linear} algorithm goes through the input
 a constant number of times.
 This is often the best possible time complexity,
-because it is usually needed to access each
+because it is usually necessary to access each
 input element at least once before
 reporting the answer.
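A classic square root algorithm, sketched here for illustration (not from the commit), is primality testing by trial division: if $n = ab$ with $a \le b$, then $a \le \sqrt n$, so it suffices to test divisors up to $\sqrt n$.

```cpp
#include <cassert>

// Illustrative O(sqrt(n)) algorithm: a composite n always has a
// divisor at most sqrt(n), so the loop runs about sqrt(n) times.
bool isPrime(long long n) {
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}
```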
@@ -281,13 +281,13 @@ Still, there are many important problems for which
 no polynomial algorithm is known, i.e.,
 nobody knows how to solve them efficiently.
 \key{NP-hard} problems are an important set
-of problems for which no polynomial algorithm is known \cite{gar79}.
+of problems, for which no polynomial algorithm is known \cite{gar79}.
 \section{Estimating efficiency}
 By calculating the time complexity of an algorithm,
-it is possible to check before
-implementing the algorithm that it is
+it is possible to check, before
+implementing the algorithm, that it is
 efficient enough for the problem.
 The starting point for estimations is the fact that
 a modern computer can perform some hundreds of
@@ -305,7 +305,7 @@ we can try to guess
 the required time complexity of the algorithm
 that solves the problem.
 The following table contains some useful estimates
-assuming that the time limit is one second.
+assuming a time limit of one second.
 \begin{center}
 \begin{tabular}{ll}
@@ -322,7 +322,7 @@ $n \le 10$ & $O(n!)$ \\
 \end{center}
 For example, if the input size is $n=10^5$,
-it is probably expected that the time
+it should probably be expected that the time
 complexity of the algorithm is $O(n)$ or $O(n \log n)$.
 This information makes it easier to design the algorithm,
 because it rules out approaches that would yield
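The estimates in this hunk can be sanity-checked with simple arithmetic. The sketch below assumes a budget of roughly $10^8$ simple operations per second (an assumed round figure; the chapter only says a modern computer performs some hundreds of millions of operations per second):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: does a total operation count fit in one
// second, assuming a budget of ~10^8 simple operations per second?
bool fitsInOneSecond(double operations) {
    return operations <= 1e8;
}
```

For $n = 10^5$, an $O(n \log n)$ algorithm performs on the order of $10^5 \cdot 17 \approx 1.7 \cdot 10^6$ operations and fits easily, while an $O(n^2)$ algorithm needs $10^{10}$ operations and does not, which matches the table.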
@@ -347,7 +347,7 @@ for solving a problem such that their
 time complexities are different.
 This section discusses a classic problem that
 has a straightforward $O(n^3)$ solution.
-However, by designing a better algorithm it
+However, by designing a better algorithm, it
 is possible to solve the problem in $O(n^2)$
 time and even in $O(n)$ time.
@@ -415,10 +415,10 @@ the following subarray produces the maximum sum $10$:
 \subsubsection{Algorithm 1}
-A straightforward algorithm to the problem
-is to go through all possible ways to
-select a subarray, calculate the sum of
-numbers in each subarray and maintain the
+A straightforward algorithm to solve the problem
+is to go through all possible ways of
+selecting a subarray, calculate the sum of
+numbers in each subarray and maintain
 the maximum sum.
 The following code implements this algorithm:
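The code the hunk refers to lies outside the diff; a minimal sketch of the $O(n^3)$ brute force it describes (function name and test array are illustrative, and the empty subarray with sum 0 is assumed to be allowed) could look like:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative O(n^3) brute force: try every subarray [a,b], sum it
// from scratch, and keep the maximum sum seen so far.
int maxSubarraySum(const std::vector<int>& arr) {
    int n = arr.size();
    int best = 0;  // assumption: the empty subarray (sum 0) is allowed
    for (int a = 0; a < n; a++)
        for (int b = a; b < n; b++) {
            int sum = 0;
            for (int k = a; k <= b; k++) sum += arr[k];  // O(n) per subarray
            best = std::max(best, sum);
        }
    return best;
}
```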
@@ -473,12 +473,12 @@ After this change, the time complexity is $O(n^2)$.
 Surprisingly, it is possible to solve the problem
 in $O(n)$ time, which means that we can remove
 one more loop.
-The idea is to calculate for each array position
+The idea is to calculate, for each array position,
 the maximum sum of a subarray that ends at that position.
 After this, the answer for the problem is the
 maximum of those sums.
-Condider the subproblem of finding the maximum-sum subarray
+Consider the subproblem of finding the maximum-sum subarray
 that ends at position $k$.
 There are two possibilities:
 \begin{enumerate}
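The two possibilities named in this hunk (the subarray contains only the element at position $k$, or it extends the best subarray ending at position $k-1$) translate into a one-pass algorithm; a hedged sketch, with invented names and the same assumption that the empty subarray counts as sum 0:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative O(n) version: `sum` is the maximum sum of a subarray
// ending at the current position — either the element alone, or the
// element appended to the best subarray ending at the previous position.
int maxSubarraySumLinear(const std::vector<int>& arr) {
    int best = 0, sum = 0;
    for (int x : arr) {
        sum = std::max(x, sum + x);   // best subarray ending here
        best = std::max(best, sum);   // best answer overall
    }
    return best;
}
```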
@@ -517,7 +517,7 @@ It is interesting to study how efficient
 algorithms are in practice.
 The following table shows the running times
 of the above algorithms for different
-values of $n$ in a modern computer.
+values of $n$ on a modern computer.
 In each test, the input was generated randomly.
 The time needed for reading the input was not
@@ -539,9 +539,9 @@ $10^7$ & > $10,0$ s & > $10,0$ s & $0{,}0$ s \\
 The comparison shows that all algorithms
 are efficient when the input size is small,
 but larger inputs bring out remarkable
-differences in running times of the algorithms.
+differences in the running times of the algorithms.
 The $O(n^3)$ time algorithm 1 becomes slow
 when $n=10^4$, and the $O(n^2)$ time algorithm 2
 becomes slow when $n=10^5$.
-Only the $O(n)$ time algorithm 3 processes
+Only the $O(n)$ time algorithm 3 is able to process
 even the largest inputs instantly.