Once the deck is sorted, searching through it becomes simpler. There are several ways to perform the search. One of the simplest to explain is the binary search, where you pick a card from the middle of the deck (dividing it into lower and upper halves) and compare it to the value you're searching for. If you got lucky and it's the card you want, then you're all done. If the card you want is less than the card you just picked, then you look at the lower half. Otherwise, you look at the upper half.

What's the computational complexity of this search? At each step, you effectively halve the number of cards you have to look at next: 52, 26, 13, 6, 3, 1. The number of search steps is thus $\log_2 52$, or approximately 6 steps. The overall complexity is thus $O(\log N)$, where $N$ is the number of elements to search.
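The halving described above can be sketched in a few lines of JavaScript (this is a minimal illustration, not the demo's actual code):

```javascript
// Binary search over a sorted array of numbers.
// Returns the index of target, or -1 if it's not present.
function binarySearch(arr, target) {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2); // pick the middle card
    if (arr[mid] === target) return mid;   // lucky: found it right away
    if (target < arr[mid]) hi = mid - 1;   // look at the lower half
    else lo = mid + 1;                     // look at the upper half
  }
  return -1; // not in the deck
}
```

Each iteration of the loop discards half of the remaining range, which is where the $O(\log N)$ bound comes from.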

The coding problem this time was to implement a binary search algorithm that searches a sorted array. It works mostly like what I described above, with the exception of how it handles elements with the same value: when the array contains more than one element of the same value (e.g. a messed-up deck with two Queens of Hearts), the algorithm searches for the *first* occurrence of the element.
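One common way to get the first occurrence is the "leftmost" variant of binary search, which keeps narrowing even after a match until only the leftmost one remains. A sketch (my own illustration, not the demo's code; `findFirst` is a hypothetical name):

```javascript
// Binary search for the *first* occurrence of target in a sorted array
// that may contain duplicates. Returns its index, or -1 if absent.
function findFirst(arr, target) {
  let lo = 0;
  let hi = arr.length; // half-open search range [lo, hi)
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (arr[mid] < target) {
      lo = mid + 1; // first occurrence must be to the right of mid
    } else {
      hi = mid;     // arr[mid] >= target: mid could still be the answer
    }
  }
  // lo is now the leftmost index where arr[lo] >= target
  return lo < arr.length && arr[lo] === target ? lo : -1;
}
```

For example, with the messed-up deck `[1, 2, 2, 2, 3]` a search for `2` lands on index 1, the first of the three duplicates, rather than whichever copy the midpoint happened to hit.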

Here is a JavaScript demo (butt-ugly, but does what it's supposed to). Use the "<<" and ">>" arrows to move to the previous and next search steps, respectively. At each step, the blue, orange and red numbers mark the first, middle and last positions of the current search range, respectively. A green element indicates the search has reached its goal and terminated.
