Sunday, February 24, 2013

This Week's Coding Practice: Stacks and the Tower of Hanoi

A stack is a common data type that allows its elements to be accessed in LIFO (last in, first out) order. Many parts of the physical world around us function like a stack. A common analogy is a stack of plates on a table: it's simple to put a new plate on top or remove the top-most plate. It's impossible to insert a new plate into the middle of the stack. It's also impossible to remove any plates other than the top-most one. I ran into yet another real-world example of a stack the other day: a narrow rectangular parking lot with only one side facing a driveway. The lot has just enough space to fit around 4-5 cars end-to-end. When people want to park, they just drive straight in. Eventually, they get parked in by someone, who also gets parked in by somebody else, and so on. Lucky last doesn't get parked in by anyone else, but gets nagged by all the people he parked in whenever they want to leave.
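
In Python, for instance, a plain list already behaves like a stack. A minimal sketch (illustrative only, not part of this week's solution):

    # A Python list as a stack: append() pushes, pop() removes the most
    # recently added element (LIFO).
    plates = []
    plates.append("dinner plate")  # push
    plates.append("side plate")    # push
    top = plates.pop()             # pop -> "side plate", the last one in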

Another popular example of a stack is the famous Tower of Hanoi puzzle.  There are three rods, and an arbitrary number of disks of unique sizes.  Initially, all the disks are placed on the first rod, ordered by size (smallest disk at the top).  The goal of the puzzle is to move all the disks to another rod, one by one, without ever putting a larger disk on top of a smaller one.  The task for this week was to write an algorithm that solves this puzzle.

It may not be obvious from the description, but each rod can be modeled as a stack, since it's physically impossible to access elements in any order other than LIFO (for example, randomly, or FIFO).  The entire puzzle is then modeled by 3 separate stacks.
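
To make this concrete, here's one possible Python representation (a sketch with hypothetical names of my own, not the code from the gist): each rod is a tuple of disk sizes with the top of the stack last, and the whole puzzle is a tuple of three rods.

    # Hypothetical state representation: a tuple of three rods, each rod a
    # tuple of disk sizes with the topmost disk last. Tuples are immutable
    # and hashable, which helps later when remembering visited states.
    def initial_state(n):
        # all n disks on the first rod, largest (n) at the bottom, smallest (1) on top
        return (tuple(range(n, 0, -1)), (), ())

    def is_solved(state, n):
        # solved once every disk sits on the second or third rod
        return len(state[1]) == n or len(state[2]) == n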

How does one go about solving the puzzle?  It turns out that it has been thoroughly studied, and several well-known recursive and iterative algorithms exist.  However, since I have a tendency to do things the hard way, I didn't study those algorithms prior to solving the puzzle, and came up with a less elegant but home-brewed solution on my own.

Having spent a significant amount of time on the problems at Project Euler, I've acquired an instinctive reaction: whenever facing a problem for the first time, try brute force.  It's almost never the best (or even a borderline satisfactory) solution, but it is a fairly quick way of getting acquainted with a problem.  It's also a good starting point for trying out other things.  Lastly, it's better than nothing.

With that in mind, I approached the puzzle as a search problem (a much more general relative of familiar searches like binary search).  At each step, the current state of the puzzle is represented as 3 individual stacks.  To get to the next states, a disk is moved from one stack to another, without violating the rules of the puzzle.  Once you reach a state in which all of the disks are on the second (or third) stack, you've solved the puzzle.  In which order should we look at the next states?  One simple way is DFS (depth-first search).
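
As a rough sketch of what expanding a state might look like (again my own illustrative code, building on the hypothetical representation above), the legal next states are found by trying to move the top disk of each rod onto each other rod:

    def next_states(state):
        # Yield every state reachable by one legal move: take the top disk of a
        # non-empty rod and place it on a rod whose top disk (if any) is larger.
        for src in range(3):
            if not state[src]:
                continue
            disk = state[src][-1]
            for dst in range(3):
                if dst == src:
                    continue
                if state[dst] and state[dst][-1] < disk:
                    continue  # would put a larger disk on a smaller one
                rods = list(state)
                rods[src] = rods[src][:-1]
                rods[dst] = rods[dst] + (disk,)
                yield tuple(rods)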

I quickly realized that a lot of the resulting states weren't helpful in solving the puzzle: moving a single disk back and forth between two rods achieves nothing.  It also causes some branches of the DFS to be infinitely long, meaning the search will fail to terminate.  One way to get around this problem is to expand only those states that haven't been encountered yet.  This requires keeping track of the states encountered so far, which can be done with a hash map.

As it turns out, DFS isn't the best way to perform the search, since the solution it yields isn't necessarily the shortest.  This can be addressed by switching to breadth-first search (BFS): the solution it yields will have the minimum possible number of moves (which, as it turns out, is $2^n - 1$ for $n$ disks).
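
Putting the sketches together, a BFS along those lines (reusing the hypothetical initial_state, is_solved and next_states helpers from above, plus the seen-states check described earlier) might look like this:

    from collections import deque

    def solve_bfs(n):
        # Breadth-first search over puzzle states; the first solved state we
        # dequeue was reached via a shortest possible sequence of moves.
        start = initial_state(n)
        queue = deque([(start, [start])])
        seen = {start}
        while queue:
            state, path = queue.popleft()
            if is_solved(state, n):
                return path  # len(path) - 1 moves, i.e. 2**n - 1 for n disks
            for nxt in next_states(state):
                if nxt not in seen:  # only explore states not encountered yet
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))

    # e.g. len(solve_bfs(3)) - 1 == 7 moves for three disks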

After writing my solution out on paper, I quickly coded it up in Python.  Since I had a bit of spare time, I decided to try out my JavaScript chops and implement something I could show off graphically.  While fiddling with JS took me way longer than coming up with the actual algorithm, I got there in the end, and came up with this:



The search problem can be taken further by implementing an A* search that uses a heuristic to determine the best action to take at each step.  If a suitable heuristic can be formulated, this kind of search can significantly reduce the running time and memory usage of the algorithm.  It was tempting to keep digging, but I'll leave that to another day.
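
For illustration only (this isn't in the gist), an A* variant could reuse the same hypothetical helpers with a simple admissible heuristic, such as the number of disks not yet on the target rod (each of them needs at least one more move):

    import heapq

    def solve_astar(n, target=2):
        # A* search: priority = moves made so far + heuristic estimate of moves
        # remaining. Counting disks not yet on the target rod never overestimates.
        def h(state):
            return n - len(state[target])

        start = initial_state(n)
        frontier = [(h(start), 0, start, [start])]
        best = {start: 0}
        while frontier:
            _, g, state, path = heapq.heappop(frontier)
            if len(state[target]) == n:
                return path
            for nxt in next_states(state):
                if nxt not in best or g + 1 < best[nxt]:
                    best[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))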

Full source (including the Python code) is available as a gist.

2 comments:

  1. Good example for comparing declarative and imperative styles of programming.

    There is an assumption that 1 line of declarative style code requires 20 lines of imperative code.

    So, we have 343 lines of Python so far.
    Comparing them with 9 lines of Prolog code gives us a ratio of 1:38!!!

    Well done!

    Below is the Prolog source....

    hanoi :- hanoi(5).
    hanoi(N) :- move(N, left, middle, right).
    move(1, A, _, C) :- inform(A, C), !.
    move(N, A, B, C) :-
        N1 is N-1,
        move(N1, A, C, B),
        inform(A, C), !,
        move(N1, B, A, C).
    inform(Loc1, Loc2) :- format("Move a disk from ~w to ~w~n", [Loc1, Loc2]), !.

  2. Thanks for commenting ;)

    First, I don't know where you got the number 343 from. hanoi.py is 69 lines, including comments. So the Prolog:Python ratio is less than 1:10. The other two files are part of the separate JavaScript solution, most of which is drawing functions and other GUI boilerplate.

    Second, I don't think comparing my solution to a Prolog one makes for a fair or meaningful comparison of declarative vs. imperative styles. A significant part of the Python solution was implementing the search itself -- that part is conveniently built into Prolog. If you really want to compare line counts, have a look at http://www.python-course.eu/towers_of_hanoi.php -- their solution is a little over 10 lines.
