Tuesday, November 1, 2011

Algorithm Complexity

I've been going through Project Euler problems lately, and they tend to be good examples of how algorithm complexity can matter. For many of them, reducing complexity takes a little insight. I thought it might be interesting to walk through a solution.

As a high-level refresher, the complexity of some $f(n)$ that describes an algorithm can be characterized using...

$O$ - asymptotic upper bound
$\Omega$ - asymptotic lower bound
$\Theta$ - asymptotic upper and lower bound

There are also...

$o$ - exclusive asymptotic upper bound
$\omega$ - exclusive asymptotic lower bound
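
For reference, one common textbook formulation of the inclusive bounds is:

$f(n) \in O(g(n))$ iff there exist constants $c > 0$ and $n_0$ such that $f(n) \le c \, g(n)$ for all $n \ge n_0$
$f(n) \in \Omega(g(n))$ iff there exist constants $c > 0$ and $n_0$ such that $f(n) \ge c \, g(n)$ for all $n \ge n_0$
$f(n) \in \Theta(g(n))$ iff both of the above hold

The exclusive variants $o$ and $\omega$ require the corresponding inequality to hold for every constant $c > 0$.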

The specific problem has to do with triangle numbers. All code is Haskell.


"What is the value of the first triangle number to have over five hundred divisors?"


The obvious algorithm is to compute each triangle number by summing, then iterate over the integers from one to that number, counting the divisors. The code for this is straightforward.

-- find the first triangle number with more than 500 divisors
main = putStrLn $ show $ (fst . head)
                         (filter ((>500) . snd) (zip t (map divs t)))
       where t = map tri [1..]

-- the nth triangle number, computed by summing
tri n = sum [1..n]

-- count divisors by trial division over every integer up to n
divs 0 = 0
divs n = length $ filter (\n' -> n `mod` n' == 0) [1..n]
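
As a quick sanity check after loading this into GHCi: the seventh triangle number is 28, and 28 has the six divisors 1, 2, 4, 7, 14, and 28.

*Main> tri 7
28
*Main> divs 28
6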

It's simple, and it's correct. However, it's impractical.

Consider the complexity. The run-time to sum $1..n$ is $n$ steps, and the run-time to check every candidate divisor of triangle number $n$ is $\frac{n^2 + n}{2}$ steps, since the triangle number itself is $\frac{n^2 + n}{2}$.

This gives a complexity of $\Theta(n^2)$ to test triangle number $n$. Since by definition the value being searched for has more than 500 divisors, it is necessarily large, and testing each triangle number along the way at quadratic cost is impractical.
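
To make that concrete, testing every triangle number up through index $N$ this way costs

$\sum\limits_{n=1}^{N} \frac{n^2 + n}{2} = \Theta(N^3)$

steps in total, which gets out of hand quickly as the search range grows.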

However, there is a practical solution, and it only requires some optimization. Before eliminating the asymptotic bottleneck, though, there is a simple strength reduction.

The $\Theta(n)$ algorithm to compute $\sum\limits_{i=1}^n i$ can easily be replaced with an $O(1)$ algorithm.
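
The closed form comes from the classic pairing argument: adding the sum to a reversed copy of itself pairs each $i$ with $n + 1 - i$, giving

$2\sum\limits_{i=1}^{n} i = \sum\limits_{i=1}^{n} \big(i + (n + 1 - i)\big) = n(n + 1)$, so $\sum\limits_{i=1}^{n} i = \frac{n^2 + n}{2}$.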

tri n = (n^2 + n) `quot` 2

This equation sums $1..n$; as an anecdote, it's often attributed to a young Carl Friedrich Gauss. It reduces the run-time to $\frac{n^2 + n}{2} + 1$, which is still $\Theta(n^2)$. So, again, it doesn't eliminate the asymptotic bottleneck. Optimizing the algorithm to count divisors takes a bit more insight.

import Data.List (find, group)
import Data.Maybe (fromMaybe)

-- find the first triangle number with more than 500 divisors
main = putStrLn $ show $ (fst . head)
                         (filter ((>500) . snd) (zip t (map divs t)))
    where t = map tri [1..]

-- the prime factorization of x, in non-decreasing order; if no prime
-- at or below sqrt x divides x, then x itself is prime
facts :: Int -> [Int]
facts x
    | (x < head pms) = []
    | otherwise = h : facts (x `div` h)
    where h = fromMaybe x (find (\y -> ((x `mod` y) == 0))
                              (takeWhile (<= isqrt x) pms))

-- the infinite list of primes
pms :: [Int]
pms = sieve (2 : [3,5..])

-- trial-division sieve over a candidate list
sieve :: [Int] -> [Int]
sieve (p:xs) = p : sieve [ x | x <- xs, x `mod` p > 0 ]

-- the nth triangle number, in closed form
tri :: Int -> Int
tri n = (n^2 + n) `quot` 2

-- integer square root
isqrt :: Int -> Int
isqrt = truncate . sqrt . fromIntegral

-- count divisors from the multiplicities of the prime factors
divs :: Int -> Int
divs 1 = 1
divs x = product $ map ((+1) . length) ps
    where ps = group $ facts x

This algorithm is based on a simple consequence of the fundamental theorem of arithmetic: if $m = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$, then $m$ has exactly $(a_1 + 1)(a_2 + 1) \cdots (a_k + 1)$ divisors, since every divisor corresponds to a choice of exponent for each prime factor. Instead of iterating over all integers below $\frac{n^2 + n}{2}$, it counts the combinations of prime factors. This replaces the $\frac{n^2 + n}{2}$ algorithm to find divisors with a roughly $2\sqrt{\frac{n^2 + n}{2}}$, or $O(n)$, algorithm, since it only iterates over the primes below $\sqrt{\frac{n^2 + n}{2}}$.
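
For example, $28 = 2^2 \cdot 7$, so it has $(2 + 1)(1 + 1) = 6$ divisors, matching the count from the naive version. The same check in GHCi:

*Main> facts 28
[2,2,7]
*Main> divs 28
6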

The realized run-time actually has a tighter bound, since prime numbers grow sparser further along the number line. However, knowing a bound of $O(n)$ is sufficient for this problem.

The execution time for the final version is...

real     0m0.037s
user    0m0.034s
sys      0m0.002s

There is still room for micro-optimization (starting at 14410396, better CPU utilization, etc.). However, these would likely only give constant-factor speedups. More importantly, optimizing the complexity turned an impractical algorithm into one that finds a solution in less than a second.