In the field of image processing, the extraction of geometric features from images is a very common problem. Over the years, several different approaches have been devised to extract these features, and they can be characterized and classified in several ways.
Some of the techniques involve global examination of the image, while others only involve local examination of each pixel in the image.
Typically, the first step in the process is to perform some form of edge detection on the image and to generate a new binary edge image that provides the necessary segmentation of the original image.
Edge detection algorithms operate on the premise that the intensity of a grayscale digital image has a first derivative at each pixel, which measures the change in intensity at that point. If a significant change occurs at a given pixel, that pixel is marked as an edge point in the binary image; otherwise it is marked as background.
In general, the gradient is computed at each pixel, giving the degree of change at each point in the image. The question then amounts to how much change in intensity should be required to constitute an edge point in the binary image. Usually a predefined threshold value T is used to classify edge points.
To find the accurate location of an edge, the second derivative is often used to find the point that corresponds to a local maximum or minimum in the first derivative. This is often referred to as a zero crossing, because it is the point at which the second derivative equals zero while its left and right neighbors are non-zero and have opposite signs.
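As a small illustrative sketch (not from the original text), the first and second differences of a hypothetical 1-D intensity profile can be examined with MATLAB's diff; the zero crossing in the second difference marks the edge location:

```matlab
% Hypothetical 1-D profile with a ramp edge, for illustration only.
x  = [10 10 10 50 90 90 90];   % intensity values along a scan line
d1 = diff(x);                  % first difference:  [0 0 40 40 0 0]
d2 = diff(d1);                 % second difference: [0 40 0 -40 0]
% d2(3) is zero while its neighbors (+40 and -40) have opposite signs:
% a zero crossing, marking where d1 attains its local maximum (the edge).
```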
IPT's function edge provides several derivative estimators based on the criteria just described. For some of these estimators, it is possible to specify whether the edge detector is sensitive to horizontal edges, to vertical edges, or to both. The general syntax for the edge function is:

[g, t] = edge(f, 'method', parameters)
where f is the input image, 'method' is one of the detectors listed in the following table, and parameters are additional parameters explained in the discussion below. In the output, g is a logical array with 1's at the locations where edge points were detected in f and 0's elsewhere. The output t is optional; it gives the threshold used by edge to determine which gradient values are strong enough to be called edge points.
| Edge Detector | Basic Properties |
| --- | --- |
| Sobel | Finds edges using the Sobel approximation to the derivatives. |
| Canny | Finds edges by looking for local maxima of the gradient of f(x,y). The gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds to detect strong and weak edges, and includes the weak edges in the output only if they are connected to strong edges. Therefore, this method is more likely to detect true weak edges. |
| Zero crossing | Finds edges by looking for zero crossings after filtering f(x,y) with a user-defined filter. |
Now let us try these different methods and parameters to see the difference between them:
f = zeros(128,128);
f(32:96,32:96) = 255;                       % white square on black background
[g1, t1] = edge(f, 'sobel', 'vertical');    % Sobel, vertical edges only
imshow(g1); t1
sigma = 1;
f = zeros(128,128);
f(32:96,32:96) = 255;
[g3, t3] = edge(f, 'canny', [0.04 0.10], sigma);   % Canny with two thresholds
figure, imshow(g3); t3
The Hough Transform (HT) is a robust method for finding lines in images that was developed by Paul Hough.
The main idea of the HT is as follows: each foreground point in the image votes, in a parameter space, for all the lines that could pass through it; points that are collinear in the image produce votes that accumulate at the parameters of their common line. The practical way to implement this algorithm is to quantize the parameter space using a 2-D array of counters, where the array coordinates represent the parameters of the line; this is commonly known as an accumulator array.
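The accumulator idea can be sketched in a few lines of plain MATLAB. This is a hypothetical, from-scratch illustration (not the IPT implementation), assuming the common rho = x*cos(theta) + y*sin(theta) parameterization of a line:

```matlab
% Minimal Hough accumulator sketch (for illustration only).
f = zeros(101, 101); f(1,1) = 1; f(51,51) = 1; f(101,101) = 1;
thetas = -90:89;                           % quantized angles, in degrees
rhoMax = ceil(norm(size(f)));              % largest possible |rho|
H = zeros(2*rhoMax + 1, numel(thetas));    % accumulator array of counters
[ys, xs] = find(f);                        % coordinates of foreground pixels
for k = 1:numel(xs)
    for j = 1:numel(thetas)
        rho = round(xs(k)*cosd(thetas(j)) + ys(k)*sind(thetas(j)));
        H(rho + rhoMax + 1, j) = H(rho + rhoMax + 1, j) + 1;  % cast a vote
    end
end
% The three collinear pixels (on the main diagonal) accumulate their
% votes in a common (rho, theta) cell, which shows up as a peak in H.
```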
The HT method for finding lines in images generally consists of the following three stages:

1. Compute the accumulator array from the foreground (edge) pixels of the image.
2. Find the peaks of the accumulator array, which correspond to candidate lines.
3. For each peak, determine whether line segments are associated with it, including their start and end points.
MATLAB provides a function called hough that computes the Hough Transform.
In the following example, we illustrate the use of the function hough on a simple binary image. First we construct an image containing isolated foreground pixels in several locations.
f = zeros(101,101);
f(1,1) = 1; f(101,1) = 1; f(1,101) = 1; f(101,101) = 1;
f(51,51) = 1;
Then we compute and display the Hough Transform.
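A minimal sketch of this step, assuming the built-in IPT function hough (which returns the accumulator together with its theta and rho axes):

```matlab
[H, theta, rho] = hough(f);                % compute the accumulator array
imshow(H, [], 'XData', theta, 'YData', rho, ...
       'InitialMagnification', 'fit');     % display the Hough space as an image
axis on, axis normal
xlabel('\theta'), ylabel('\rho')
```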
The result you get should look like this:
Now that we have the Hough space stored in H, the next step is peak detection.
MATLAB provides a function named houghpeaks to do this; the syntax is

[r, c, hnew] = houghpeaks(H, numpeaks)
where H is the Hough Transform matrix and numpeaks is the maximum number of peak locations to look for. In the output, r and c are the row and column coordinates of the identified peaks, and HNEW is the Hough Transform with peak neighborhoods suppressed.
Once a set of candidate peaks has been identified in the Hough transform, it remains to be determined whether there are line segments associated with those peaks, and to find their start and end points.
For each peak, the first step is to find the location of all nonzero pixels in the image that contributed to that peak; the function houghpixels can do this. The details of this function are left for further reading.
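For reference, the built-in IPT functions houghpeaks and houghlines cover stages 2 and 3 directly (note that the built-in houghpeaks returns an N-by-2 matrix of [row column] coordinates rather than the r, c, HNEW outputs described above). A sketch on a hypothetical test image containing a single vertical line:

```matlab
f = zeros(101, 101);
f(1:101, 51) = 1;                         % a vertical line of foreground pixels
[H, theta, rho] = hough(f);               % stage 1: accumulator array
peaks = houghpeaks(H, 1);                 % stage 2: strongest peak(s)
lines = houghlines(f, theta, rho, peaks); % stage 3: associated line segments
% Each element of lines is a struct with fields point1, point2, theta
% and rho, giving the segment endpoints and the line parameters.
```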
Image thresholding plays a very important role in image segmentation. In this section we discuss how to choose the threshold automatically. To choose a threshold automatically, the following iterative algorithm is applied:

1. Select an initial estimate for T, for example the midpoint between the minimum and maximum intensity values in the image.
2. Segment the image using T, producing two groups of pixels: G1, with values >= T, and G2, with values < T.
3. Compute the mean intensity values m1 and m2 of the pixels in G1 and G2.
4. Set T = 0.5*(m1 + m2).
5. Repeat steps 2 through 4 until the change in T between successive iterations is smaller than a predefined value.
The above iterative method can be implemented as follows:
T = 0.5*(double(min(f(:))) + double(max(f(:))));   % initial estimate: midpoint
done = false;
while ~done
    g = f >= T;                                 % segment with the current T
    Tnext = 0.5*(mean(f(g)) + mean(f(~g)));     % average of the two class means
    done = abs(T - Tnext) < 0.5;                % stop when the change is small
    T = Tnext;
end
where g is a binary mask and f(g) is the subset of f selected by g.
You can also use IPT's function graythresh to reach the same goal.
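graythresh computes its threshold with Otsu's method and returns it as a normalized value in [0, 1], so the image is binarized against that normalized level. A minimal usage sketch:

```matlab
T = graythresh(f);    % Otsu's threshold, normalized to the range [0, 1]
g = im2bw(f, T);      % segment the image using the normalized threshold
% In newer MATLAB releases, imbinarize(f, T) replaces im2bw.
```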