Binary search is an extremely important algorithmic technique in the armory of every programmer. It isn't typically thought of as an advanced algorithm, yet it is deceptively simple and has a few nuances to be aware of. In its simplest form, binary search tries to find a matching element in a sorted array of elements. I specifically said array, and not list, because the elements have to be contiguously stored and randomly accessible by index for binary search's O(log N) time bound to hold.
To perform binary search, we usually define a condition that would be satisfied by some prefix of the array (could be of length zero) and not satisfied by the remaining array. This condition divides the array into two parts, and informs our algorithm which part could be discarded in the next step, thus reducing the search domain and zeroing in on a matching element if one exists. Below is a simple binary search implementation that works.
int binary_search(const vector<int>& nums, int key) {
    int low = 0, high = nums.size() - 1;
    while (low <= high) {
        int mid = low + (high - low)/2;
        if (nums[mid] < key) {
            // discard the left part, including mid
            low = mid + 1;
        } else if (nums[mid] > key) {
            // discard the right part, including mid
            high = mid - 1;
        } else {
            return mid; // we found the key
        }
    }
    return -1; // we didn't find the key
}
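For example, on a small, hypothetical array:

    vector<int> nums = {1, 3, 5, 7, 9};
    int pos = binary_search(nums, 7);     // pos == 3
    int missing = binary_search(nums, 4); // missing == -1, since 4 isn't present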
The above implementation is about as simple as things can get. In most applications, we end up doing more with binary search. Let's modify the above problem slightly, to that of finding the location where the key would be inserted if it isn't already present - arguably a more useful variant. We really only have to change what we return when we don't find a matching value:
    return low; // the insertion point
}
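Putting it together, a minimal sketch of this variant might look like the following (search_or_insert is just an illustrative name):

    // Sketch: returns the index of key if present, otherwise the index
    // at which key could be inserted to keep nums sorted.
    int search_or_insert(const vector<int>& nums, int key) {
        int low = 0, high = nums.size() - 1;
        while (low <= high) {
            int mid = low + (high - low)/2;
            if (nums[mid] < key) {
                low = mid + 1;
            } else if (nums[mid] > key) {
                high = mid - 1;
            } else {
                return mid; // the key is already present
            }
        }
        return low; // where the key would be inserted
    }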
Quick summary
What's the key insight into binary search?
The first is that it's binary, meaning we divide the sorted array into two mutually exclusive sections. One section includes the target element if present, the other section decidedly doesn't.
The second is that we compute the mid-point of the array and check whether it sits in the section that might include the target or in the one that doesn't. If it doesn't include the target, we update one end of the search space to exclude the mid-point, i.e. one end becomes right = mid-1 or left = mid+1 as the case may be. If, on the other hand, the mid-point is in the section that may include the target, then we update one end of the search space to point to the mid-point - we don't exclude it. So it becomes left = mid or right = mid as the case may be. This process continues until left and right converge inside the inclusive section, right at its boundary.
Finally, an important implementation note that can help avoid subtle bugs. We can compute the mid-point in one of two ways:
mid = left + (right - left)/2;
mid = right - (right - left)/2;
When left and right are right next to each other, mid equals left in the first case and right in the second. Which form should be used depends on which update rule is in play. If left is set equal to mid, then the first form leaves left stuck at the same value, causing an infinite loop. If right is set equal to mid, the same problem appears with the second form.
So if we use the update left = mid, then use the second form, and if we use right = mid, then use the first form. It doesn't make a difference for update rules that add or subtract 1, i.e. left = mid+1 or right = mid-1.
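For example, with left = 4 and right = 5, the first form gives mid = 4 + (5-4)/2 = 4, equal to left, while the second gives mid = 5 - (5-4)/2 = 5, equal to right. So a loop that assigns left = mid must use the second form to keep making progress, and one that assigns right = mid must use the first.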
Finding ranges
Now, imagine that there can be repeated elements, and we'd like to find the first position where a repeated element occurs, as well as the last position. We can always run the regular binary search to find some occurrence of the element, and then do a linear search backward or forward to find the first or the last matching element. But this turns an O(log N) algorithm into a worst-case linear one. There must be a better way.
One way to think about it is to observe that the first matching element in the array occurs at the boundary between a prefix for which nums[i] < key and a suffix for which nums[i] >= key. We are looking for the first element of the suffix. This means that our search domain should always include this boundary.
We quickly modify our code above.
int find_first_of(const vector<int>& nums, int key) {
    int low = 0, high = nums.size() - 1;
    while (low < high) {
        int mid = low + (high - low)/2;
        if (nums[mid] < key) {
            // discard the left part, including mid
            low = mid + 1;
        } else { // nums[mid] >= key
            // discard the right part, but keep mid - it could be the first occurrence
            high = mid;
        }
    }
    return low; // position of the first occurrence, if the key is present
}
Notice the slight modifications. We no longer check specifically for equality. Instead, we ensure that the remaining search domain after each step includes the boundary, and we zero in on the first element after the boundary. This is all good. What if we wanted to return the last occurrence instead? Can't be too different, can it? Well, in this case, the search domain must include the boundary between elements less than or equal to the target and elements greater than the target.
int find_last_of(const vector<int>& nums, int key) {
    int low = 0, high = nums.size() - 1;
    while (low < high) {
        int mid = low + (high - low)/2;
        if (nums[mid] <= key) {
            // discard the left part, but keep mid - it could be the last occurrence
            low = mid;
        } else { // nums[mid] > key
            // discard the right part, including mid
            high = mid - 1;
        }
    }
    return low; // position of the last occurrence, if the key is present
}
The update rules changed slightly, but guess what - the above code doesn't work. It goes into an infinite loop! The reason is that mid is computed as low + (high-low)/2, but when low and high are next to each other, i.e. the range is only two elements, (high-low)/2 is 0, and so mid equals low. This means that in successive iterations the value of low doesn't change, when ideally it should converge to high. Why hasn't this hit us so far? Notice that in the earlier examples, we never assigned mid to low (low = mid) the way we do here. What's the solution? Well, there is no middle element in a two-element range. So use mid = high - (high-low)/2 instead. With this form, mid eventually becomes equal to high, and since we assign low = mid, low and high eventually converge. Conversely, if we used this form of computing mid in find_first_of, that function would go into an infinite loop.
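For reference, the corrected find_last_of, differing only in how mid is computed, might look like this:

    int find_last_of(const vector<int>& nums, int key) {
        int low = 0, high = nums.size() - 1;
        while (low < high) {
            int mid = high - (high - low)/2; // rounds up, so low = mid makes progress
            if (nums[mid] <= key) {
                low = mid;      // mid could still be the last occurrence
            } else {
                high = mid - 1; // everything from mid onward is greater than key
            }
        }
        return low; // position of the last occurrence, if the key is present
    }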
Searching in a rotated array
An array is said to be rotated by k positions if its elements have been shifted by k index positions, with elements that spill past the end wrapping around to the beginning, hence the name rotation. If an array is sorted but rotated, and we don't know by how much, can binary search be of any help? For that matter, can binary search determine the amount of rotation? It turns out that it can, without resorting to anything linear.
If a sorted array is rotated, then the largest and smallest elements become neighbours somewhere in the array. How do we find this location? In an array with many duplicates this can be tricky, but with unique elements it is simple. In the rotated array, the first element is larger than the last, and the maximum element lies somewhere between the beginning and the end, either in the first half or in the second half. We call this the pivot point, and the following is the code to find it.
int find_pivot(const vector<int>& nums) {
    int low = 0, high = nums.size() - 1;
    while (low < high) {
        int mid = high - (high - low)/2; // rounds up, since we assign low = mid below
        if (nums[low] < nums[mid]) {
            low = mid;      // the largest element is at mid or to its right
        } else {
            high = mid - 1; // the largest element is to the left of mid
        }
    }
    return low; // position of the largest element
}
Notice how we use the appropriate form for calculating mid, based on the update rules, to prevent an infinite loop. The loop terminates and returns the position of the largest element.
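As a quick sanity check, consider the hypothetical example below; the array {4, 5, 6, 7, 1, 2, 3} is {1, ..., 7} rotated.

    vector<int> rotated = {4, 5, 6, 7, 1, 2, 3};
    int pivot = find_pivot(rotated); // pivot == 3, the index of the largest element (7)
    // The smallest element sits just after it, at index (pivot + 1) % rotated.size(),
    // which is one way to measure the amount of rotation.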
Searching for an element in the rotated array is also similar, and requires just a few changes, as shown below.
int find_elem_in_rotated(const vector<int>& nums, int key) {
    int low = 0, high = nums.size() - 1;
    while (low < high) {
        int mid = high - (high - low)/2; // rounds up, since we assign low = mid below
        if (nums[low] < nums[mid]) {
            // the segment [low, mid] is sorted
            if (nums[low] <= key && key < nums[mid]) {
                high = mid - 1; // key can only be in the sorted left segment
            } else {
                low = mid;      // key, if present, is at mid or to its right
            }
        } else {
            // the segment [mid, high] is sorted
            if (nums[mid] <= key && key <= nums[high]) {
                low = mid;      // key can only be in the sorted right segment
            } else {
                high = mid - 1; // key, if present, is to the left of mid
            }
        }
    }
    return low;
}
This returns the location of the key if it is present. If the key is absent, the returned index is simply where the search converged, so the caller should verify that the element at that position actually equals the key.
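A short, hypothetical usage sketch:

    vector<int> rotated = {4, 5, 6, 7, 1, 2, 3};
    int pos = find_elem_in_rotated(rotated, 6);  // pos == 2, and rotated[2] == 6
    int miss = find_elem_in_rotated(rotated, 0); // 0 isn't present; rotated[miss] != 0, so check before using the index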