Thursday, August 25, 2016

Sh-Utils

I wrote a set of simple shell commands a little while back.  I ended up using them pretty regularly (quick and dirty source control, manual merging of configuration files, editing large compressed and delimited files in place, etc...).  Since they save some non-trivial keystrokes, I cleaned up the code and pushed it to github.  I also built a Python package and published it via PyPI.

The package is called sh-utils.  The commands are...

pm - move p to p'
cpm - copy p to p'
upm - undo p to p'
sw - swap two paths
pt - pivot file over a command
pts - pivot file over a command (stdin)

There are descriptions on the github and PyPI pages, but I'll try to go into a bit more detail.

The three pm commands are mostly meant for working with path (file, directory, pipe, socket, etc...) backups.  For example, it's fairly reasonable to move or copy a file to a .bak (.old, .orig, etc...).  Conversely, it's fairly reasonable to want to undo that.  So, pm takes a list of paths and moves each safely to a backup by prefixing (-p) or suffixing (-s) a value to the path name.  Likewise, cpm does the same, but copies the path instead of moving it.  upm safely undoes both of them.

For example, making a backup before editing a file...

~ cpm foo

To recover it...

~ upm foo\'
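
The affix value is configurable as well.  Assuming -s takes the value literally (inferred from the description above, so treat the exact syntax as a sketch), a .bak style backup would be something like...

~ cpm -s .bak foo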

They're additive and subtractive as well, so it's easy to keep a stack of path backups.  Unfortunately, that can make things a bit messy and possibly painful.  So, instead of having to run upm a bunch of times to unwind a stack of backups, upm has a flag (-a) that collapses the latest path all the way back to the earliest in a single command.
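
For example, assuming the default single quote suffix stacks as described (foo', then foo''), two backups can be made and unwound in one step...

~ cpm foo
~ cpm foo
~ upm -a foo\'\'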

The next command is sw, which swaps two paths.  Normally, swapping two paths takes a few commands and a pivot file (move the first path to a pivot, move the second to the first, move the pivot to the second).  Instead, sw does everything in one command and as close to atomically as possible.

For example, swapping a file with its backup...

~ sw foo foo\'

or...

~ sw foo* 

Last are two pivot commands.  Similar to sw, doing something in place on a file normally needs more than one step and a pivot file.  Instead, pt and pts do everything in one command and as close to atomically as possible.

Normally I would sort a file...

~ sort foo > foo\'
~ mv foo\' foo

With pt...

~ pt sort foo
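
The command can take its own arguments as well, with the target file last (inferred from the pts example below, so again a sketch)...

~ pt sort -u foo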

Finally, pts is similar to pt except it passes the file on stdin, which can be useful for pipelines.

For example, if you wanted to uncompress a delimited file, sort it, parse it, square a column, and re-compress it, normally it would be...

~ xz -dc foo.xz | sort | awk '{ print $1, $2 * $2 }' | xz > foo.xz\'
~ mv foo.xz\' foo.xz

Instead, with pts (the \$ escapes are needed here, since the pipeline is in double quotes and the outer shell would otherwise expand them)...

~ pts sh -c "xz -d | sort | awk '{ print \$1, \$2 * \$2 }' | xz" foo.xz

If anyone ends up using these and you have any questions, feel free to ask.

Tuesday, June 3, 2014

Hypertext Browsing

I tend to browse the web using Firefox with some basic plugins, which is probably true of most people (modulo Chrome, Safari, etc...).  It's overkill for a lot of things though.  A lot of the time, all I really need is a way to enter a URL, render some text, search for text, follow links, and possibly enter text into forms.  All of this can be done with a slightly simpler browser, w3m.

If you are resource constrained, prefer working strictly with text, or don't have access to a graphics environment, w3m can come in handy.  The footprint is a few orders of magnitude smaller than that of the graphical variants, and it's meant for keyboard navigation.

The motions are probably familiar to anyone who's used Vi/Vim before.

<h>- left
<j>- down
<k>- up
<l>- right
<w>- next word
<^>- line start
<$>- line end

Directional keys can also be used.  Navigating links and input fields is generally the same as in the graphical browsers.

<tab>- next field
<s-tab>- previous field

Instead of the mouse, w3m gives a menu for links, which can be slightly faster than mousing (see Fitts' Law).

<esc-m>- link list (move)
<esc-l>- link list (follow)

Another easy way to navigate is searching for text.  Again, Vi/Vim users will probably recognize the default key bindings.

</>- search forward
<?>- search backward

Last, to open new URLs and tabs...

<U>- open URL
<T>- new tab

There are more complex combinations, but the above should be a good start.  The full set of navigation and configuration options can be found in the man page as well as the help screen (<H>).

Putting it all together...

One of the things I've found w3m good for is as a basic interface to anything that exposes some form of HTTP/HTML API.

It's admittedly limited, depending on the protocols and features the sites you browse rely on.  Support for more recent technologies (SPDY, HTML5, AJAX, etc...) is decidedly lacking.

On the development side, it's also useful for a number of things, like navigating doxygen/javadoc output, parsing jhat output, and even browsing wikipedia.

It's also useful to work with from the command line.  For example, -dump prints the rendered page to stdout, so the following re-renders a weather report every ten seconds...

~ watch -n 10 w3m http://www.wrh.noaa.gov/sew -dump
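
It's similarly handy for local files, like the doxygen/javadoc output mentioned above (the path here is just an illustration)...

~ w3m doc/html/index.html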

I combine it with other things like tmux, mutt, vim, and emacs, which makes it a little more useful, but the above are some of the basics.

Downloads, documentation, etc... can be found on w3m's homepage for anyone curious.

Saturday, February 15, 2014

Dynamic Programming

I was going through another project euler problem, and the solution ended up being an example of dynamic programming.  I've gone through project euler solutions in the past, so I thought it might be interesting to walk through another.

In short, dynamic programming is a way of using cached information to optimize choice in a recursive problem.  By eliminating the non-optimal choices, it reduces the complexity (possibly exponentially).  It's a bit of an underrated concept, since a lot of problems can naturally be expressed with recursion, and many of those aren't feasible without it.  Obviously, the larger the problem, the larger the run-time improvement.

The specific problem has to do with optimally traversing a graph.  Since the problem was simple enough, I ended up writing the solution in yasm (amd64) assembly.

"Find the maximum total from top to bottom in triangle.txt, a 15K text file containing a triangle with one-hundred rows."

So, a valid move on the graph follows a directed edge to the left or right child.  An exhaustive search is $\Theta(n2^{n-1})$ with respect to height ($n$ nodes per path, $2^{n-1}$ paths).  It's estimated there are $10^{80}$ atoms in the known universe.  So, given only a few hundred rows ($2^{266} \approx 10^{80}$), the number of distinct traversals possible is roughly equivalent to the number of atoms in all of known existence.

A pointer based tree structure invites that complexity, since following child pointers amounts to an exhaustive search.  However, there are more ways than one to represent a tree.  One representation that avoids it is the same flat array layout that's used for heaps (Eytzinger).
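
Concretely, flattening the triangle row by row (with a dummy element at index 0), row $r$ starts at index $T(r-1) + 1$ where $T(r) = \frac{r(r+1)}{2}$, and the children of index $i$ in row $r$ sit at indices $i + r$ and $i + r + 1$.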

Finally, the dynamic programming step compares the possible ways of reaching each node, and caches the optimal one as if it had been taken.
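
Writing $a_{r,j}$ for the $j$-th value in row $r$ and $b_{r,j}$ for the best total ending there, the recurrence is $b_{r,j} = a_{r,j} + \max(b_{r-1,j-1}, b_{r-1,j})$ (with out-of-range terms dropped), and the answer is $\max_j b_{n,j}$.  The code below computes $b$ in place over $a$, pushing each node's best total down to its children...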

    segment .data

; triangle values, flattened row by row, with a dummy element at index 0
a   dq \
0, \
59, \
73, 41, \
.
.
.
l   equ     100                     ; number of rows

    segment .bss

    segment .text

    global _start

_start:

    xor     rsi, rsi                ; rsi - index into the parent row
    mov     rdi, 1                  ; rdi - index into the child row
    mov     rdx, 1                  ; rdx - current row number

level:

    inc     rdx                     ; advance to the next row
    cmp     rdx, l
    cmovg   rax, [a + rsi*8]        ; all rows done, seed the max scan
    jg      max

    mov     rcx, rdx                ; rcx - elements left in this row
    mov     rbx, rcx
    shl     rbx, 3                  ; rbx - row length in bytes

    inc     rsi                     ; first parent
    inc     rdi                     ; leftmost child

    mov     rax, [a + rsi*8]        ; the leftmost child has one parent
    add     [a + rdi*8], rax
    mov     rax, [a + rsi*8 + rbx - 16]
    add     [a + rdi*8 + rbx - 8], rax  ; likewise for the rightmost child

    dec     rcx

    inc     rdi

next:

    dec     rcx                     ; edges done, interior children remain
    jz      level

    mov     rax, [a + rsi*8]        ; left parent
    inc     rsi
    cmp     rax, [a + rsi*8]        ; right parent
    cmovl   rax, [a + rsi*8]        ; take the larger path total
    add     [a + rdi*8], rax        ; and cache it in the child

    inc     rdi

    jmp     next

max:

    dec     rdx                     ; scan the last row for the maximum
    jz      _end

    inc     rsi
    cmp     rax, [a + rsi*8]
    cmovl   rax, [a + rsi*8]

    jmp     max

_end:

    xor     rdi, rdi                ; answer is left in rax; exit(0)
    mov     rax, 60
    syscall

With dynamic programming, run-time becomes $2n^2 - 4n - 4$ or $O(n^2)$ (the math is left as an exercise).

And final execution time is...

real    0m0.001s
user    0m0.000s
sys     0m0.000s

So, taking the intuitive approach, the problem is essentially infeasible short of an extremely large scale effort.  Using dynamic programming, it can be solved in under a millisecond.

Wednesday, January 25, 2012

Sony PRS

I bought a Sony PRS a while ago, which I've been using a decent amount.  Managing books was a bit cumbersome though.  Copying anything onto it normally needs a computer, a USB connection, and management software.  Thankfully, I was able to simplify it a bit.

The PRS has a built-in 802.11g radio, so data could obviously already be moved over the network.  It also has a web browser, so again it obviously supported HTTP.  Normally the browser only supports text/html though, which precluded a lot of stuff I actually wanted it for.

PRS+ helped fix this.  It implements a lot of the things I thought were missing from the stock firmware, specifically non-html MIME type support.  It actually integrates this with the stock browser, which even made a user-friendly interface possible.  Authentication and encrypted HTTP unfortunately aren't implemented, which could make using it outside of home a bit impractical, but downloading from sites like project gutenberg normally doesn't need either.

Finally, to make everything I wanted to be able to read available via HTTP, I set up an Apache web server with mod_autoindex.  The mod_autoindex module isn't strictly necessary, but it gives nice HTML listings for directories, which makes things a bit more user friendly.
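
As a rough sketch of the relevant configuration (the paths and filename are hypothetical, and the access control line assumes Apache 2.4)...

~ cat /etc/apache2/conf.d/books.conf
Alias /books "/srv/books"
<Directory "/srv/books">
    Options +Indexes
    Require all granted
</Directory>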

Et voilà...

A big thank you to the PRS+ developers, otherwise I would have been stuck.  For anyone curious, it adds a lot over the stock firmware (file browser, screenshots, etc...).

Tuesday, November 1, 2011

Algorithm Complexity

I've been going through project euler problems lately, and they tend to be good examples of how algorithm complexity can be important.  With a lot of them, reducing complexity takes a little insight.  I thought it might be interesting to walk through a solution.

As a high-level refresher, the complexity of some $f(n)$ that describes an algorithm can be characterized using...

$O$ - asymptotic upper bound
$\Omega$ - asymptotic lower bound
$\Theta$ - asymptotic upper and lower bound

There are also...

$o$ - exclusive asymptotic upper bound
$\omega$ - exclusive asymptotic lower bound

The specific problem has to do with triangle numbers.  All code below is Haskell.


"What is the value of the first triangle number to have over five hundred divisors?"


The obvious algorithm is to compute each triangle number by summing, and to count divisors by iterating over the integers from one to the number.  The code for this is straightforward.

main = putStrLn $ show $ (fst . head)
                         (filter ((>500) . snd) (zip t (map divs t)))
       where t = map tri [1..]
 
-- n-th triangle number, by summation
tri n = sum [1..n]
 
-- count divisors by trial division
divs 0 = 0
divs n = length $ filter (\n' -> n `mod` n' == 0) [1..n]

It's simple, and it is correct.  However, it's impractical.

Consider the complexity.  The run-time to sum $1..n$ is $n$.  The run-time to iterate over all possible divisors of triangle number $n$ is $\frac{n^2 + n}{2}$.

This gives a complexity of $\Theta(n^2)$ per triangle number.  Since by definition the value being searched for has more than 500 divisors, it's bound to be large, and testing every triangle number along the way at that cost is impractical.

However, there is a practical solution, and it only requires some optimization.  Before eliminating the asymptotic bottleneck though, there is a simple strength reduction.

The $\Theta(n)$ algorithm to compute $\sum\limits_{i=1}^n i$ can easily be replaced with an $O(1)$ algorithm.

tri n = (n^2 + n) `quot` 2

This equation sums $1..n$.  It's often attributed to Carl Friedrich Gauss as an anecdote.  It reduces run-time to $\frac{n^2 + n}{2} + 1$, which is still $\Theta(n^2)$.  So, again, it doesn't eliminate the asymptotic bottleneck.  Optimizing the algorithm that counts divisors takes a bit more insight.

import Data.List (find, group)
import Data.Maybe (fromMaybe)
 
main = putStrLn $ show $ (fst . head)
                         (filter ((>500) . snd) (zip t (map divs t)))
    where t = map tri [1..]
 
-- prime factorization, smallest factor first; h is the smallest prime
-- factor of x, or x itself when x is prime
facts :: Int -> [Int]
facts x
    | (x < head pms) = []
    | otherwise = h : facts (x `div` h)
    where h = fromMaybe x (find (\y -> ((x `mod` y) == 0))
                              (takeWhile (<= isqrt x) pms))
 
-- lazy infinite list of primes, by trial division
pms :: [Int]
pms = sieve (2 : [3,5..])
 
sieve :: [Int] -> [Int]
sieve (p:xs) = p : sieve [ x | x <- xs, x `mod` p > 0 ]
 
-- n-th triangle number, in closed form
tri :: Int -> Int
tri n = (n^2 + n) `quot` 2
 
isqrt :: Int -> Int
isqrt = truncate . sqrt . fromIntegral
 
-- count divisors from the prime factor multiplicities
divs :: Int -> Int
divs 1 = 1
divs x = product $ map ((+1) . length) ps
    where ps = group $ facts x

This algorithm is based on a simple consequence of the fundamental theorem of arithmetic.  Instead of iterating over all integers below $\frac{n^2 + n}{2}$, it counts the combinations of prime factors.  This replaces the $\frac{n^2 + n}{2}$ algorithm to find divisors with a $2\sqrt{\frac{n^2 + n}{2}}$ or $O(n)$ algorithm, since it only iterates over the primes below $\sqrt{\frac{n^2 + n}{2}}$.
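
For example, $28 = 2^2 \cdot 7$, and every divisor of $28$ picks an exponent from $0..2$ for $2$ and from $0..1$ for $7$, giving $(2+1)(1+1) = 6$ divisors ($1, 2, 4, 7, 14, 28$).  This is exactly what divs computes from the grouped output of facts.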

The realized run-time actually has a tighter bound, since prime numbers grow more sparse further along the number line.  However, a bound of $O(n)$ is sufficient for the problem.
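
More precisely, the prime number theorem gives roughly $x / \ln x$ primes below $x$, so the scan over primes below $\sqrt{\frac{n^2 + n}{2}}$ is closer to $O(n / \log n)$.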

The execution time for the final version is...

real    0m0.037s
user    0m0.034s
sys     0m0.002s

There is still room for micro-optimization (starting at 14410396, better cpu utilization, etc...).  However, these would likely only give constant factor speedups.  More importantly, optimizing the complexity turned an impractical algorithm into one that finds the solution in less than a second.