Color Edge Detection

In Chapter 3, it was established that color images may be described quantitatively at each pixel by a set of three tristimulus values T1, T2, T3, which are proportional to the amount of red, green and blue primary lights required to match the pixel color. The luminance of the color is a weighted sum Y = a1T1 + a2T2 + a3T3 of the tristimulus values, where the ai are constants that depend on the spectral characteristics of the primaries. Several definitions of a color edge have been proposed...
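As a minimal illustration of one such definition, an edge can be declared wherever the gradient of the luminance plane alone is large. The sketch below is not any of the proposed definitions in particular; it assumes generic NTSC-style weights a = (0.299, 0.587, 0.114) and an arbitrary threshold:

import numpy as np

def luminance_edge_map(rgb, threshold=0.1):
    # Luminance as a weighted sum Y = a1*R + a2*G + a3*B (assumed weights).
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # First differences approximate the row and column gradients.
    gr = np.zeros_like(y)
    gc = np.zeros_like(y)
    gr[:-1, :] = np.diff(y, axis=0)
    gc[:, :-1] = np.diff(y, axis=1)
    return np.hypot(gr, gc) > threshold

rgb = np.random.rand(64, 64, 3)   # stand-in for a real linear RGB image
edges = luminance_edge_map(rgb)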


Equation 13.4-8 is the final result giving the complete camera imaging model transformation between an object and an image point. The explicit relationship between an object point (X, Y, Z) and its image plane projection (x, y) can be obtained by performing the matrix multiplications analytically and then forming the Cartesian coordinates by dividing the first two components of w by the fourth. Upon performing these operations, one obtains -(X - Xg) sin θ sin φ - (Y - Yg) cos θ sin φ - (Z - Zg) cos...
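The divide-by-fourth-component step is the standard homogeneous-coordinate projection. The sketch below is not Eq. 13.4-8 itself (its full matrix is not reproduced in this excerpt); it only illustrates the mechanics with a simple perspective matrix of the form x = fX/(f - Z):

import numpy as np

def project(P, X, Y, Z):
    # Homogeneous world point transformed by a 4x4 camera matrix P.
    w = P @ np.array([X, Y, Z, 1.0])
    # Cartesian image coordinates: divide the first two components by the fourth.
    return w[0] / w[3], w[1] / w[3]

f = 1.0   # focal length of the toy pin-hole configuration
P = np.array([[1.0, 0.0, 0.0,  0.0],
              [0.0, 1.0, 0.0,  0.0],
              [0.0, 0.0, 1.0,  0.0],
              [0.0, 0.0, -1/f, 1.0]])
x, y = project(P, 2.0, 1.0, 5.0)   # yields x = fX/(f - Z), y = fY/(f - Z)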

Amplitude Features

The most basic of all image features is some measure of image amplitude in terms of luminance, tristimulus value, spectral value or other units. There are many degrees of freedom in establishing image amplitude features. Image variables such as luminance or tristimulus values may be utilized directly, or alternatively, some linear, nonlinear, or perhaps non-invertible transformation can be performed to generate variables in a new amplitude space. Amplitude measurements may be made at specific...

Range

FIGURE 10.1-1. Continuous and quantized image contrast enhancement.

A digitally processed image may occupy a range different from the range of the original image. In fact, the numerical range of the processed image may encompass negative values, which cannot be mapped directly into a light intensity range. Figure 10.1-2 illustrates several possibilities of scaling an output image back into the domain of values occupied by the original image. By the first...
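A minimal sketch of one common rescaling, a linear min-max mapping of the processed array back onto the original display range (a clipping variant would instead saturate out-of-range values):

import numpy as np

def rescale_to_range(processed, lo=0.0, hi=255.0):
    # Linearly map the full numerical range of the processed image,
    # which may include negative values, onto [lo, hi].
    pmin, pmax = processed.min(), processed.max()
    if pmax == pmin:
        return np.full_like(processed, lo)
    return lo + (hi - lo) * (processed - pmin) / (pmax - pmin)

filtered = np.random.randn(64, 64)    # processed image with negative values
display = rescale_to_range(filtered)  # fits the 0..255 display domain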

Shape Orientation Descriptors

The spatial orientation of an object with respect to a horizontal reference axis is the basis of a set of orientation descriptors developed at the Stanford Research Institute (33). These descriptors, defined below, are described in Figure 18.4-1; the first three are also sketched in code after this list.

1. Image-oriented bounding box: the smallest rectangle oriented along the rows of the image that encompasses the object.
2. Image-oriented box height: dimension of box height for image-oriented box.
3. Image-oriented box width: dimension of box width for...
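A sketch of those first three descriptors for a binary object mask, with rows and columns assumed aligned to the image axes:

import numpy as np

def image_oriented_box(mask):
    # Rows and columns occupied by object (nonzero) pixels.
    rows, cols = np.nonzero(mask)
    top, bottom = rows.min(), rows.max()
    left, right = cols.min(), cols.max()
    height = bottom - top + 1    # image-oriented box height
    width = right - left + 1     # image-oriented box width
    return (top, left, bottom, right), height, width

mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 5:25] = True
box, h, w = image_oriented_box(mask)   # box=(10,5,19,24), h=10, w=20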

Image Restoration Exercises

12.1 Develop a program that computes a 512 x 512 Wiener filter transfer function for a 5 x 5 pyramid blur impulse response array and white noise with an SNR of 10.0.

Steps:
(a) Fetch the impulse response array from the repository.
(b) Convert the impulse response array to an image and embed it in a 512 x 512 zero background array.
(c) Compute the two-dimensional Fourier transform of the embedded impulse response array.
(d) Form the Wiener filter transfer function according to Eq. 12.2-23.
(e)...
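A sketch of steps (b) through (d) in NumPy. Eq. 12.2-23 is not reproduced in this excerpt, so the standard white-noise Wiener form H*/(|H|^2 + 1/SNR) is used in its place, and the pyramid kernel is an assumed stand-in for the repository array:

import numpy as np

# (a)-(b) 5x5 pyramid blur kernel (assumed), embedded in a 512x512 zero array.
pyr = np.outer([1, 2, 3, 2, 1], [1, 2, 3, 2, 1]).astype(float)
pyr /= pyr.sum()
h = np.zeros((512, 512))
h[:5, :5] = pyr

# (c) Two-dimensional Fourier transform of the embedded impulse response.
H = np.fft.fft2(h)

# (d) Wiener transfer function for white noise: H* / (|H|^2 + 1/SNR).
snr = 10.0
HW = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)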

HR(ωx, ωy) = H*(ωx, ωy) / [ |H(ωx, ωy)|² + WN(ωx, ωy) / WF(ωx, ωy) ]

In the latter formulation, the transfer function of the restoration filter can be expressed in terms of the signal-to-noise power ratio at each spatial frequency. Figure 12.2-3 shows cross-sectional sketches of a typical ideal image spectrum, noise spectrum, blur transfer function and the resulting Wiener filter transfer function. As noted from the figure, this version of the Wiener filter acts as a bandpass filter. It performs as an inverse filter at low spatial...

Image Manipulation Exercises

2.1 Develop a program that passes a monochrome image through the log part of the monochrome vision model of Figure 2.4-4.

Steps:
(a) Convert an unsigned integer, 8-bit, monochrome source image to floating point datatype.
(b) Scale the source image over the range 1.0 to 100.0.
(c) Compute the source image logarithmic lightness function of Eq. 5.3-4.
(d) Scale the log source image for display.

The executable example_monochrome_vision performs this exercise. Refer to the window-level manual page...
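A sketch of steps (a) through (d). Eq. 5.3-4 is not reproduced in this excerpt; the logarithmic lightness below is an assumed stand-in of the common form L = 50 log10(Y), which maps the 1.0-to-100.0 range onto 0-to-100:

import numpy as np

def log_lightness_display(src_u8):
    # (a) 8-bit unsigned integer source to floating point.
    f = src_u8.astype(np.float64)
    # (b) Scale over the range 1.0 to 100.0.
    f = 1.0 + 99.0 * f / 255.0
    # (c) Logarithmic lightness (assumed form; Eq. 5.3-4 in the text).
    L = 50.0 * np.log10(f)
    # (d) Rescale the log image to 8 bits for display.
    L = (L - L.min()) / (L.max() - L.min())
    return (255.0 * L).astype(np.uint8)

out = log_lightness_display(np.arange(256, dtype=np.uint8).reshape(16, 16))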

Optical Systems Models

One of the major advances in the field of optics during the past 50 years has been the application of system concepts to optical imaging. Imaging devices consisting of lenses, mirrors, prisms and so on, can be considered to provide a deterministic transformation of an input spatial light distribution to some output spatial light distribution. Also, the system concept can be extended to encompass the spatial propagation of light through free space or some dielectric medium.

FIGURE 11.2-1. ...

Segment Labeling

The result of any successful image segmentation is the unique labeling of each pixel that lies within a specific distinct segment. One means of labeling is to append to each pixel of an image the label number or index of its segment. A more succinct method is to specify the closed contour of each segment. If necessary, contour filling techniques (41) can be used to label each pixel within a contour. The following describes two common techniques of contour following. The contour following...
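A sketch of the first labeling scheme, appending a segment index to each pixel via flood-fill connected-component labeling (4-connectivity assumed; contour following itself is not shown):

import numpy as np
from collections import deque

def label_segments(mask):
    # Assign a segment index to every pixel of each 4-connected region.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for j in range(mask.shape[0]):
        for k in range(mask.shape[1]):
            if mask[j, k] and labels[j, k] == 0:
                current += 1              # start a new segment label
                q = deque([(j, k)])
                labels[j, k] = current
                while q:
                    r, c = q.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                                and mask[rr, cc] and labels[rr, cc] == 0):
                            labels[rr, cc] = current
                            q.append((rr, cc))
    return labels

mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
print(label_segments(mask))   # two segments: labels 1 and 2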

Binary Image Connectivity

Binary image morphological operations are based on the geometrical relationship or connectivity of pixels that are deemed to be of the same class (10,11). In the binary image of Figure 14.1-1, the ring of black pixels, by all reasonable definitions of connectivity, divides the image into three segments: the white pixels exterior to the ring, the white pixels interior to the ring and the black pixels of the ring itself. The pixels within each segment are said to be connected to one another. This...

Generalized Two-Dimensional Linear Operator

A large class of image processing operations are linear in nature: an output image field is formed from linear combinations of pixels of an input image field. Such operations include superposition, convolution, unitary transformation and discrete linear filtering. Consider the N1 x N2 element input image array F(n1, n2). A generalized linear operation on this image field results in an M1 x M2 output image array P(m1, m2) as defined by

P(m1, m2) = Σn1 Σn2 F(n1, n2) O(n1, n2; m1, m2)
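A direct (brute-force) sketch of this generalized operator; the four-index kernel array O is hypothetical, here random for illustration:

import numpy as np

def generalized_linear_op(F, O):
    # P(m1, m2) = sum over (n1, n2) of F(n1, n2) * O(n1, n2; m1, m2),
    # with O stored as an array of shape (N1, N2, M1, M2).
    return np.einsum('ab,abcd->cd', F, O)

N1, N2, M1, M2 = 4, 4, 3, 3
F = np.random.rand(N1, N2)
O = np.random.rand(N1, N2, M1, M2)   # hypothetical operator kernel
P = generalized_linear_op(F, O)      # M1 x M2 output array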

F̂ = BC⁻ G [BR⁻]ᵀ

where BC⁻ and BR⁻ are generalized inverses of BC and BR, respectively. Thus, when the blur matrix is of separable form, it becomes possible to form the estimate of the image by sequentially applying the generalized inverse of the row blur matrix to each row of the observed image array and then using the column generalized inverse operator on each column of the array. Pseudoinverse restoration of large images can be accomplished in an approximate fashion by a block mode restoration process,...
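A sketch using the Moore-Penrose pseudoinverse (one choice of generalized inverse) for a separable blur of the form G = BC F BRᵀ; the blur matrices here are hypothetical 1-D smoothing operators:

import numpy as np

def separable_pseudoinverse_restore(G, Bc, Br):
    # Row and column generalized inverses applied sequentially:
    # F_hat = Bc^- G (Br^-)^T, with the Moore-Penrose pseudoinverse.
    return np.linalg.pinv(Bc) @ G @ np.linalg.pinv(Br).T

n = 32
F = np.random.rand(n, n)                            # ideal image
B1 = (np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 3.0
Bc = Br = B1                                        # separable blur factors
G = Bc @ F @ Br.T                                   # blurred observation
F_hat = separable_pseudoinverse_restore(G, Bc, Br)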


The properties of the Fourier transform previously proved in series form obviously hold in the matrix formulation. One of the major contributions to the field of image processing was the discovery (5) of an efficient computational algorithm for the discrete Fourier transform (DFT). Brute-force computation of the discrete Fourier transform of a one-dimensional sequence of N values requires on the order of N² complex multiply and add operations. A fast Fourier transform (FFT) requires on the...

A4.2 Compact Disk Directory

The compact disk contains the following directories or folders:

ProgrammersManual: PDF files Front.pdf, Part1.pdf, Part2.pdf, Appendicies.pdf
Images/Piks: PIKS image file format files of source images
Images/Tiff: TIFF image file format files of many of the figure images in the book
Demos: C program Solaris and Windows source and executable demonstration programs
SciExamples: C program Solaris and Windows source and executable programs
DipExamples: C program Solaris and Windows executables of...

Basic Geometrical Methods

Image translation, size scaling and rotation can be analyzed from a unified standpoint. Let D(j, k) for 0 ≤ j ≤ J − 1 and 0 ≤ k ≤ K − 1 denote a discrete destination image that is created by geometrical modification of a discrete source image S(p, q) for 0 ≤ p ≤ P − 1 and 0 ≤ q ≤ Q − 1. In this derivation, the source and destination images may be different in size. Geometrical image transformations are usually based on a Cartesian coordinate system representation in which pixels...
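A sketch of the unified treatment: each destination pixel is mapped back through a combined translation-scale-rotation (affine) transform and sampled from the source with nearest-neighbor interpolation. The matrix composition shown is a generic choice, not the text's specific equations:

import numpy as np

def affine_resample(S, A, t, out_shape):
    # Backward mapping: for each destination pixel (j, k), fetch the source
    # pixel at A^-1([j, k] - t), with nearest-neighbor interpolation.
    Ainv = np.linalg.inv(A)
    D = np.zeros(out_shape, dtype=S.dtype)
    for j in range(out_shape[0]):
        for k in range(out_shape[1]):
            p, q = Ainv @ (np.array([j, k], dtype=float) - t)
            pi, qi = int(round(p)), int(round(q))
            if 0 <= pi < S.shape[0] and 0 <= qi < S.shape[1]:
                D[j, k] = S[pi, qi]
    return D

theta = np.deg2rad(15.0)
A = 1.2 * np.array([[np.cos(theta), -np.sin(theta)],   # rotation + scaling
                    [np.sin(theta),  np.cos(theta)]])
t = np.array([4.0, -2.0])                              # translation
D = affine_resample(np.random.rand(64, 64), A, t, (80, 80))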

θ(sj) = arctan{ [y(sj) − y(sj−1)] / [x(sj) − x(sj−1)] } (18.5-7b)

κ(sj) = θ(sj) − θ(sj−1) (18.5-7c)

where sj represents the jth step of arc position. Figure 18.5-2 contains results of the Fourier expansion of the discrete curvature function. Bartolini et al. (38) have developed a Fourier descriptor-based shape matching technique called WARP in which a dynamic time warping distance is used for shape comparison.

FIGURE 18.5-2. Fourier expansions of curvature function.

S3(m) = ∫ρ(m)..ρ(m+1) ∫0..2π M(ρ, θ) dρ dθ (16.3-7)

S4(m) = ∫θ(m)..θ(m+1) ∫0..∞ M(ρ, θ) dρ dθ (16.3-8)

FIGURE 16.3-2. Discrete Fourier spectra of objects; log magnitude displays.

For a discrete image array F(j, k), the discrete Fourier transform

Superposition And Convolution

In Chapter 1, superposition and convolution operations were derived for continuous two-dimensional image fields. This chapter provides a derivation of these operations for discrete two-dimensional images. Three types of superposition and convolution operators are defined: finite area, sampled image and circulant area. The finite-area operator is a linear filtering process performed on a discrete image data array. The sampled image operator is a discrete model of a continuous two-dimensional...

Y = 0.299 RE + 0.587 GE + 0.114 BE

U = 0.493 (BE − Y)

V = 0.877 (RE − Y)

are used as transmission coordinates, where RE and BE are the gamma-corrected EBU red and blue components, respectively. The YUV coordinate system was initially proposed as the NTSC transmission standard but was later replaced by the YIQ system because it was found (4) that the I and Q signals could be reduced in bandwidth to a greater degree than the U and V signals for an equal level of visual quality.

FIGURE 3.5-14. YIQ components of the gamma...: (b) I, −0.276 to 0.347; (c) Q, 0.147 to 0.169.
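A sketch of the forward YUV transformation under the standard definitions above (gamma-corrected components assumed already formed):

import numpy as np

def rgb_to_yuv(rgb):
    # rgb: (..., 3) gamma-corrected components in [0, 1].
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    Y = 0.299 * R + 0.587 * G + 0.114 * B
    U = 0.493 * (B - Y)   # scaled blue color difference
    V = 0.877 * (R - Y)   # scaled red color difference
    return np.stack([Y, U, V], axis=-1)

yuv = rgb_to_yuv(np.random.rand(8, 8, 3))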

PIKS Scientific Overview

The PIKS Scientific profile provides a comprehensive set of image processing functionality to service virtually all image processing applications. It supports all pixel data types and the full five-dimensional PIKS image data object. It provides the following processing features:

3. Support 1, 2, 4 and 8 global resampling interpolation of images and ROIs
9. Automatic source promotion
10. Data object repository

At the time of publication of this book, the PixelSoft implementation of PIKS...

H(m, n) = H(m, n) + G(j, k) (17.4-15)

Figure 17.4-9 gives an example of the O'Gorman and Clowes version of the Hough transform. The original image is 512 x 512 pixels, and the Hough array is of size 511 x 511 cells. The Hough array has been flipped bottom to top for display. Kesidis and Papamarkos (76) have developed an algorithm for computing an inverse Hough transform (IHT) of a binary image. The algorithm detects peaks of the sinusoidal curves in the Hough transform (HT) space and decomposes...
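A sketch of the basic accumulation of Eq. 17.4-15 for the common (ρ, θ) line parametrization, incrementing each Hough cell by the edge weight G(j, k); array sizes are arbitrary choices:

import numpy as np

def hough_accumulate(G, n_theta=180, n_rho=256):
    # H(m, n) = H(m, n) + G(j, k) for every parameter cell (m, n) whose
    # line passes through edge pixel (j, k).
    rows, cols = G.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(rows, cols)
    H = np.zeros((n_rho, n_theta))
    for j, k in zip(*np.nonzero(G)):
        # rho = k*cos(theta) + j*sin(theta), columns as x and rows as y.
        rho = k * np.cos(thetas) + j * np.sin(thetas)
        m = np.round((rho + rho_max) * (n_rho - 1) / (2 * rho_max)).astype(int)
        H[m, np.arange(n_theta)] += G[j, k]
    return H

edge_map = np.zeros((64, 64))
edge_map[32, :] = 1.0            # a horizontal line of unit edge weights
H = hough_accumulate(edge_map)   # strong peak near theta = pi/2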

Image Sampling and Reconstruction

4.1 Image Sampling and Reconstruction Concepts
4.2 Monochrome Image Sampling Systems
4.3 Monochrome Image Reconstruction Systems
4.4 Color Image Sampling Systems
5.2 Processing Quantized Variables
5.3 Monochrome and Color Image Quantization
PART 3 DISCRETE TWO-DIMENSIONAL PROCESSING
6 Discrete Image Mathematical Characterization
6.1 Vector-Space Image Representation
6.2 Generalized Two-Dimensional Linear Operator
6.3 Image Statistical...

A1.4 Solutions to Linear Systems

The general system of linear equations specified by p = Tf, where T is a P x Q matrix, may be considered to represent a system of P equations in Q unknowns. Three possibilities exist:

1. The system of equations has a unique solution f for which Tf = p.
2. The system of equations is satisfied by multiple solutions.
3. The system of equations does not possess an exact solution.

If the system of equations possesses at least one solution, the system is called consistent; otherwise, it is inconsistent. The lack...
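A small sketch of the three cases in NumPy; for the inconsistent case, a least-squares solution stands in for the exact solution that does not exist:

import numpy as np

# 1. Unique solution: square, full-rank T.
T = np.array([[2.0, 1.0], [1.0, 3.0]])
p = np.array([3.0, 5.0])
f_unique = np.linalg.solve(T, p)

# 2. Multiple solutions: underdetermined (P < Q); lstsq picks minimum norm.
T_under = np.array([[1.0, 1.0, 0.0]])
f_min_norm = np.linalg.lstsq(T_under, np.array([2.0]), rcond=None)[0]

# 3. No exact solution: overdetermined, inconsistent; lstsq minimizes ||Tf - p||.
T_over = np.array([[1.0], [1.0], [1.0]])
f_ls = np.linalg.lstsq(T_over, np.array([1.0, 2.0, 4.0]), rcond=None)[0]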

Color Image Enhancement

The image enhancement techniques discussed previously have all been applied to monochrome images. This section considers the enhancement of natural color images and introduces the pseudocolor and false color image enhancement methods. In the literature, the terms pseudocolor and false color have often been used improperly. Pseudocolor produces a color image from a monochrome image, while false color produces an enhanced color image from an original natural color image or from multispectral...

Fast Fourier Transform Convolution

As noted previously, the equivalent output vector for either finite-area or sampled image convolution can be obtained by an element selection operation on the extended output vector kE for circulant convolution or its matrix counterpart KE.

FIGURE 9.2-3. Two-dimensional Fourier domain convolution matrices.

This result, combined with Eq. 9.2-13, leads to a particularly efficient means of convolution computation indicated by the following steps:

1. Embed the impulse response matrix in the upper...
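A sketch of the procedure those steps describe (embed, transform, multiply, inverse transform, select), for an N x N image and L x L impulse response with extended size M = N + L − 1; the variable names are illustrative:

import numpy as np

def fft_convolve(F, H_imp):
    # 1. Embed image and impulse response in the upper-left corner of
    #    M x M zero arrays, M >= N + L - 1, to avoid circulant wraparound.
    N, L = F.shape[0], H_imp.shape[0]
    M = N + L - 1
    FE = np.zeros((M, M)); FE[:N, :N] = F
    HE = np.zeros((M, M)); HE[:L, :L] = H_imp
    # 2-3. Transform both arrays and multiply in the Fourier domain.
    KE = np.fft.ifft2(np.fft.fft2(FE) * np.fft.fft2(HE)).real
    # 4. An element selection operation on this extended output yields the
    #    finite-area or sampled-image result.
    return KE

K = fft_convolve(np.random.rand(64, 64), np.ones((5, 5)) / 25.0)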

Color Space Exercises

3.1 Develop a program that converts a linear RGB unsigned integer, 8-bit, color image to the XYZ color space and converts the XYZ color image back to the RGB color space.

Steps:
(a) Display the RGB source linear color image.
(b) Display the R, G and B components as monochrome images.
(c) Convert the source image to unit range.
(d) Convert the RGB source image to XYZ color space.
(e) Display the X, Y and Z components as monochrome images.
(f) Convert the XYZ destination image to RGB color space.
...
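A sketch of steps (c), (d) and (f). The 3 x 3 matrix below is the familiar Rec. 709/sRGB linear-RGB-to-XYZ matrix, assumed here because the exercise's exact primaries are not reproduced in this excerpt:

import numpy as np

# Linear RGB -> XYZ (Rec. 709 primaries, D65 white; an assumed choice).
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(rgb_u8):
    rgb = rgb_u8.astype(np.float64) / 255.0   # (c) unit range
    return rgb @ M.T                          # (d) forward conversion

def xyz_to_rgb(xyz):
    return xyz @ np.linalg.inv(M).T           # (f) inverse conversion

src = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
xyz = rgb_to_xyz(src)
back = np.clip(np.round(xyz_to_rgb(xyz) * 255.0), 0, 255).astype(np.uint8)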

PIKS Imaging Model

Figure 20.1-1 describes the PIKS imaging model. The solid lines indicate data flow, and the dashed lines indicate control flow. The PIKS application program interface consists of four major parts:

2. Operators, tools and utilities

The PIKS data objects include both image and image-related, non-image data objects. The operators, tools and utilities are functional elements that are used to process images or data objects extracted from images. The system mechanisms manage and control the...

Fourier Transform Filtering

The discrete Fourier transform convolution processing algorithm of Section 9.3 is often utilized for computer simulation of continuous Fourier domain filtering. In this section, discrete Fourier transform filter design techniques are considered. The first step in the discrete Fourier transform filtering process is generation of the discrete domain transfer function. For simplicity, the following discussion is limited to one-dimensional signals. The extension to two dimensions is...

Boundary Segmentation

It is possible to segment an image into regions of common attribute by detecting the boundary of each region for which there is a significant change in attribute across the boundary. Boundary detection can be accomplished by means of edge detection as described in Chapter 15. Figure 17.4-1 illustrates the segmentation of a projectile from its background. In this example, an 11 x 11 derivative of Gaussian edge detector is used to generate the edge map of Figure 17.4-1b. Morphological thinning of...

Transform Domain Superposition

The superposition operations discussed in Chapter 7 can often be performed more efficiently by transform domain processing rather than by direct processing. Figure 9.2-1a and b illustrate block diagrams of the computational steps involved in direct finite area or sampled image superposition. In Figure 9.2-1d and e, an alternative form of processing is illustrated in which a unitary transformation operation is performed on the data vector f before multiplication by a finite area filter matrix D...

Multiplane Image Restoration

A multi-plane image consists of a set of two or more related pixel planes. Examples include:

color image, e.g., RGB, CMYK, YCbCr, L*a*b*
multispectral image
volumetric image, e.g., computerized tomography
temporal image sequence

This classification is limited to three-dimensional images.

Multi-Plane Restoration Methods. The monochrome image restoration techniques previously discussed in this chapter can be applied independently to each pixel plane of a multi-plane image. However, with...

Second-Order Derivative Edge Detection

Second-order derivative edge detection techniques employ some form of spatial second-order differentiation to accentuate edges. An edge is marked if a significant spatial change occurs in the second derivative. Two types of second-order derivative methods are considered: Laplacian and directed second derivative.

FIGURE 15.2-15. Morphological thinning of edge maps for the peppers_mon image: (d) FDOG, t = 0.11; (e) FDOG thinned.
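A sketch of the Laplacian variant: convolve with a 3 x 3 Laplacian kernel and mark edges where the response changes sign with sufficient magnitude (horizontal zero crossings only, for brevity). The kernel and threshold are generic choices:

import numpy as np

LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def laplacian_response(F):
    # Direct 3x3 convolution (zero boundary) giving the second-derivative map.
    H, W = F.shape
    out = np.zeros_like(F)
    for dj in (-1, 0, 1):
        for dk in (-1, 0, 1):
            w = LAPLACIAN[dj + 1, dk + 1]
            if w != 0.0:
                out[1:-1, 1:-1] += w * F[1 + dj:H - 1 + dj, 1 + dk:W - 1 + dk]
    return out

def zero_crossing_edges(L, t=0.05):
    # Mark pixels where the second derivative changes sign significantly.
    change = (L[:, :-1] * L[:, 1:] < 0) & (np.abs(L[:, :-1] - L[:, 1:]) > t)
    E = np.zeros(L.shape, dtype=bool)
    E[:, :-1] |= change
    return E

F = np.random.rand(64, 64)
edges = zero_crossing_edges(laplacian_response(F))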

Clustering Segmentation

One of the earliest examples of image segmentation, by Haralick and Kelly (30) using data clustering, was the subdivision of multispectral aerial images of agricultural land into regions containing the same type of land cover. The clustering segmentation concept is simple; however, it is usually computationally intensive. Consider a vector x = [x1, x2, ..., xN]^T of measurements at each pixel coordinate (j, k) in an image. The measurements could be point multispectral values, point color components or...
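A sketch of clustering segmentation with plain k-means on per-pixel measurement vectors (here, color components); the cluster count and iteration limit are arbitrary:

import numpy as np

def kmeans_segment(image, n_clusters=3, n_iter=20, seed=0):
    # Treat each pixel's measurement vector x = [x1, ..., xN]^T as a point
    # to be clustered; the cluster index becomes the pixel's segment label.
    X = image.reshape(-1, image.shape[-1]).astype(float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels.reshape(image.shape[:-1])

segments = kmeans_segment(np.random.rand(32, 32, 3))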

Constrained Image Restoration

The previously described image restoration techniques have treated images as arrays of numbers. They have not considered that a restored natural image should be subject to physical constraints. A restored natural image should be spatially smooth and strictly positive in amplitude. Smoothing and regularization techniques (34-36) have been used in an attempt to overcome the ill-conditioning problems associated with image restoration. Basically, these methods attempt to force smoothness on the...

Continuous Image Characterization

Although this book is concerned primarily with digital, as opposed to analog, image processing techniques, it should be remembered that most digital images represent continuous natural images. Exceptions are artificial digital images such as test patterns that are numerically created in the computer and images constructed by tomographic systems. Thus, it is important to understand the physics of image formation by sensors and optical systems including human visual perception. Another important...

Image Enhancement Exercises

10.1 Develop a program that displays the Q component of a YIQ color image over its full dynamic range.

Steps:
(a) Display the RGB source image.
(b) Scale the RGB image to unit range and convert it to the YIQ space.
(c) Extract the Q component image.
(d) Compute the amplitude extrema.
(e) Use the window_level conversion function to display the Q component.

The executable example_Q_display executes this example. See the monadic_arithmetic, color_conversion_linear, extrema,...

Image Sampling And Reconstruction

In digital image processing systems, one usually deals with arrays of numbers obtained by spatially sampling points of a physical image. After processing, another array of numbers is produced, and these numbers are then used to reconstruct a continuous image for viewing. Image samples nominally represent some physical measurements of a continuous image field, for example, measurements of the image intensity or photographic density. Measurement uncertainties exist in any physical measurement...

Image Segmentation

Segmentation of an image entails the division or separation of the image into regions of similar attribute. The most basic attribute for segmentation is image luminance amplitude for a monochrome image and color components for a color image. Image edges and texture are also useful attributes for segmentation. The definition of segmentation adopted in this chapter is deliberately restrictive: no contextual information is utilized in the segmentation. Furthermore, segmentation does not involve...

Transform Coefficient Features

The coefficients of a two-dimensional transform of a luminance image specify the amplitude of the luminance patterns (two-dimensional basis functions) of a transform such that the weighted sum of the luminance patterns is identical to the image. By this characterization of a transform, the coefficients may be considered to indicate the degree of correspondence of a particular luminance pattern with an image field. If a basis pattern is of the same spatial form as a feature to be detected within...

Linear Processing Techniques

Most discrete image processing computational algorithms are linear in nature: an output image array is produced by a weighted linear combination of elements of an input array. The popularity of linear operations stems from the relative simplicity of spatial linear processing as opposed to spatial nonlinear processing. However, for image processing operations, conventional linear processing is often computationally infeasible without efficient computational algorithms because of the large image...

P = TC F TR (6.2-10)

Hence the output image matrix P can be produced by sequential row and column operations. In many image processing applications, the linear transformation operator T is highly structured, and computational simplifications are possible. Special cases of interest are listed below and illustrated in Figure 6.2-1 for the case in which the input and output images are of the same dimension, M = N.

T = diag[TC1, TC2, ..., TCN] (6.2-11)

where TCj is the transformation...

Morphological Image Processing

Morphological image processing is a type of processing in which the spatial form or structure of objects within an image is modified. Dilation, erosion and skeletonization are three fundamental morphological operations. With dilation, an object grows uniformly in spatial extent, whereas with erosion an object shrinks uniformly. Skeletonization results in a stick figure representation of an object. The basic concepts of morphological image processing trace back to the research on spatial set...

In the Gaussian of Eq. 16.6-13, σ is the Gaussian spread factor and λ is the aspect ratio between the x and y axes. The rotation of coordinates is specified by

(x′, y′) = (x cos φ + y sin φ, −x sin φ + y cos φ) (16.6-14)

where φ is the orientation angle with respect to the x axis. The continuous domain filter transfer function is given by (38)

H(u, v) = exp{−2π²σ²[(u′ − F)² + (v′)²]} (16.6-15)

Figure 16.6-14 shows the relationship between the real and imaginary components of the impulse response array and the magnitude of the transfer...

Mesh Point Region

Relationship between regions of physical samples and mesh points for numerical representation of a superposition integral: the values of the impulse response function that are utilized in the summation of Eq. 7.2-9 are represented as dots. An important observation should be made about the discrete model of Eq. 7.2-9 for a sampled superposition integral: the physical area of the ideal image field F(x, y) containing mesh points contributing to physical image samples is larger than...

Monochromatic Photography

The most common material for photographic image recording is silver halide emulsion, depicted in Figure 11.3-1. In this material, silver halide grains are suspended in a transparent layer of gelatin that is deposited on a glass, acetate or paper backing. If the backing is transparent, a transparency can be produced, and if the backing is a white paper, a reflection print can be obtained. When light strikes a grain, an electrochemical conversion process occurs, and part of the grain is converted...


These tools, together with the ROI binding tool, provide the capability to conceptually generate five-dimensional ROI control objects from lower dimensional descriptions by pixel plane extensions. For example, with the elliptical ROI generation tool, it is possible to generate a circular disk ROI in a spatial pixel plane, and then cause the disk to be replicated over the other pixel planes of a volumetric image to obtain a cylinder-shaped ROI. A ROI data object is expressed, notationally, as a...

Template Matching

One of the most fundamental means of object detection within an image field is by template matching, in which a replica of an object of interest is compared to all unknown objects in the image field (1-4). If the match between an unknown object and the template is sufficiently close, the unknown object is labeled as the template object. As a simple example of the template-matching process, consider the set of binary black line figures against a white background as shown in Figure...
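A sketch of template matching by normalized cross-correlation, one common closeness measure (the text's specific matching criterion is not reproduced in this excerpt):

import numpy as np

def match_scores(image, template):
    # Slide the template over the image; at each offset compute the
    # normalized cross-correlation as a closeness score in [-1, 1].
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    H = image.shape[0] - th + 1
    W = image.shape[1] - tw + 1
    scores = np.zeros((H, W))
    for j in range(H):
        for k in range(W):
            w = image[j:j + th, k:k + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * tnorm
            scores[j, k] = (w * t).sum() / denom if denom > 0 else 0.0
    return scores

img = np.random.rand(64, 64)
tpl = img[20:30, 40:50].copy()   # template cut from the image itself
scores = match_scores(img, tpl)
j, k = np.unravel_index(scores.argmax(), scores.shape)   # expect (20, 40)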

Unitary Transforms

Two-dimensional unitary transforms have found two major applications in image processing. Transforms have been utilized to extract features from images. For example, with the Fourier transform, the average value or dc term is proportional to the average image amplitude, and the high-frequency terms (ac terms) give an indication of the amplitude and orientation of edges within an image. Dimensionality reduction in computation is a second image processing application. Stated simply, those...

First-Order Derivative Edge Detection

There are two fundamental methods for generating first-order derivative edge gradients. One method involves generation of gradients in two orthogonal directions in an image; the second utilizes a set of directional derivatives.

15.2.1. Orthogonal Gradient Generation

An edge in a continuous domain edge segment F(x, y), such as the one depicted in Figure 15.1-2a, can be detected by forming the continuous one-dimensional gradient G(x, y) along a line normal to the edge slope, which is at an angle...
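A sketch of orthogonal gradient generation with row and column difference operators, combined into a gradient magnitude and an edge-normal angle; simple first differences stand in for the specific operators the section goes on to develop:

import numpy as np

def orthogonal_gradient(F):
    # Gradients in two orthogonal directions (along rows and columns).
    G_row = np.zeros_like(F)
    G_col = np.zeros_like(F)
    G_row[:-1, :] = F[1:, :] - F[:-1, :]
    G_col[:, :-1] = F[:, 1:] - F[:, :-1]
    magnitude = np.hypot(G_row, G_col)   # gradient amplitude
    angle = np.arctan2(G_row, G_col)     # edge-normal orientation
    return magnitude, angle

F = np.zeros((32, 32))
F[:, 16:] = 1.0                          # vertical step edge
mag, ang = orthogonal_gradient(F)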

References

Green, "Recent Developments in Digital Image Processing at the Image Processing Laboratory at the Jet Propulsion Laboratory," Proc. IEEE, 60, 7, July 1972, 821-828.
2. M. M. Sondhi, "Image Restoration: The Removal of Spatially Invariant Degradations," Proc. IEEE, 60, 7, July 1972, 842-853.
3. H. C. Andrews, "Digital Image Restoration: A Survey," IEEE Computer, 7, 5, May 1974, 36-45.
4. B. R. Hunt, "Digital Image Processing," Proc. IEEE, 63, 4, April 1975, 693-708.
5. H. C....

FIGURE 12.1-3. Gain correction of a CCD camera image.

Display Point Nonlinearity Correction

Correction of an image display for point luminance nonlinearities is identical in principle to the correction of point luminance nonlinearities of an image sensor. The procedure illustrated in Figure 12.1-4 involves distortion of the binary coded image luminance variable B to form a corrected binary coded luminance function B̃ so that the displayed luminance C will be linearly proportional to B. In this formulation, the display may include a photographic record of a displayed light field. The...
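A sketch of such a correction as a lookup table: given a measured display response (here an assumed gamma-like curve), each code value B is predistorted so that the displayed luminance becomes linear in B:

import numpy as np

def build_correction_lut(display_response, levels=256):
    # display_response: measured luminance for each input code (monotonic).
    # For each code B, pick the code whose displayed luminance best matches
    # the desired linear luminance for B.
    target = np.linspace(display_response.min(), display_response.max(), levels)
    lut = np.searchsorted(display_response, target).clip(0, levels - 1)
    return lut.astype(np.uint8)

codes = np.arange(256)
measured = (codes / 255.0) ** 2.2 * 100.0   # assumed gamma-like display
lut = build_correction_lut(measured)
corrected_image = lut[np.random.randint(0, 256, (16, 16))]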

Sensor Point Nonlinearity Correction

In imaging systems in which the source degradation can be separated into cascaded spatial and point effects, it is often possible directly to compensate for the point degradation (7). Consider a physical imaging system that produces an observed image field FO(x, y) according to the separable model

FO(x, y) = OQ{ OD[ C(x, y, λ) ] } (12.1-1)

Discrete Image Restoration Models

This chapter began with an introduction to a general model of an imaging system and a digital restoration process. Next, typical components of the imaging system were described and modeled within the context of the general model. Now, the discussion turns to the development of several discrete image restoration models. In the development of these models, it is assumed that the spectral wavelength response and temporal response characteristics of the physical imaging system can be separated from...

Edge Detection

Changes or discontinuities in an image amplitude attribute such as luminance or tristimulus value are fundamentally important primitive characteristics of an image because they often provide an indication of the physical extent of objects within the image. Local discontinuities in image luminance from one level to another are called luminance edges. Global luminance discontinuities, called luminance boundary segments, are considered in Section 17.4. In this chapter, the definition of a...

Image Feature Evaluation

There are two quantitative approaches to the evaluation of image features: prototype performance and figure of merit. In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using various image features to be evaluated. The classification error is then measured for each feature...

Finite-Area Superposition And Convolution

Mathematical expressions for finite-area superposition and convolution are developed below for both series and vector-space formulations.

7.1.1. Finite-Area Superposition and Convolution: Series Formulation

Let F(n1, n2) denote an image array for n1, n2 = 1, 2, ..., N. For notational simplicity, all arrays in this chapter are assumed square. In correspondence with Eq. 1.2-6, the image array can be represented at some point (m1, m2) as a sum of amplitude weighted Dirac delta functions by the discrete...

Camera Imaging Model

The imaging model utilized in the preceding section to derive the perspective transformation assumed, for notational simplicity, that the center of the image plane was coincident with the center of the world reference coordinate system. In this section, the imaging model is generalized to handle physical cameras used in practical imaging geometries (18). This leads to two important results: a derivation of the fundamental relationship between an object and image point, and a means of changing a...

Where Dmax is the maximum value of D(j, k). The summations of Eqs. 10.4-4 and 10.4-5 can be implemented by convolutions with a uniform impulse array. But, overshoot and undershoot effects may occur. Better results are usually obtained with a pyramid or Gaussian-shaped array. Figure 10.4-5 shows the mean, standard deviation, spatial gain and Wallis statistical differencing result on a monochrome image. Figure 10.4-6 presents a medical imaging example.

FIGURE 10.4-5. Wallis statistical...

Light Perception

Light, according to Webster's Dictionary (1), is "radiant energy which, by its action on the organs of vision, enables them to perform their function of sight." Much is known about the physical properties of light, but the mechanisms by which light interacts with the organs of vision are not as well understood. Light is known to be a form of electromagnetic radiation lying in a relatively narrow region of the electromagnetic spectrum over a wavelength band of about 350 to 780 nanometers (nm). A...

Luminance Edge Detector Performance

Relatively few comprehensive studies of edge detector performance have been reported in the literature (15,31-35). A performance evaluation is difficult because of the large number of methods proposed, problems in determining the optimum parameters associated with each technique and the lack of definitive performance criteria. In developing performance criteria for an edge detector, it is wise to distinguish between mandatory and auxiliary information to be obtained from the detector. Obviously,...

Image Probability Density Models

A discrete image array F(n1, n2) can be completely characterized statistically by its joint probability density, written in matrix form as

p(F) = p{F(1, 1), F(2, 1), ..., F(N1, N2)} (6.4-1a)

or in vector form as

p(f) = p{f(1), f(2), ..., f(Q)} (6.4-1b)

where Q = N1 N2 is the order of the joint density. If all pixel values are statistically independent, the joint density factors into the product

p(f) = p{f(1)} p{f(2)} ... p{f(Q)} (6.4-2)

of its first-order marginal densities. The most common joint probability density is the joint Gaussian, which may be...

Color Spaces

EBU Chromaticity Diagram

It has been shown that a color C can be matched by its tristimulus values T1(C), T2(C), T3(C) for a given set of primaries. Alternatively, the color may be specified by its chromaticity values t1(C), t2(C) and its luminance Y(C). Appendix 2 presents formulas for color coordinate conversion between tristimulus values and chromaticity coordinates for various representational combinations. A third approach in specifying a color is to represent the color by a linear or nonlinear invertible function...
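For reference, the standard tristimulus-to-chromaticity normalization is sketched below (Appendix 2's exact formulas are not reproduced in this excerpt); only two chromaticities are independent, since the three sum to one:

import numpy as np

def chromaticity(T1, T2, T3):
    # Chromaticity = tristimulus value normalized by the sum; the pair
    # (t1, t2) plus luminance Y fully specifies the color.
    s = T1 + T2 + T3
    return T1 / s, T2 / s

t1, t2 = chromaticity(0.3, 0.5, 0.2)   # (0.3, 0.5); t3 = 1 - t1 - t2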

As can be seen from Figure 4.1-2, the spectrum of the sampled image consists of the spectrum of the ideal image infinitely repeated over the frequency plane in a grid of resolution (2π/Δx, 2π/Δy). It should be noted that if Δx and Δy are chosen too large with respect to the spatial frequency limits of FI(ωx, ωy), the individual spectra will overlap. A continuous image field may be obtained from the image samples of FP(x, y) by linear spatial interpolation or by linear spatial filtering of the...

Pseudoinverse Spatial Image Restoration

Wiener Filter

The matrix pseudoinverse defined in Appendix 1 can be used for spatial image restoration of digital images when it is possible to model the spatial degradation as a vector-space operation on a vector of ideal image points yielding a vector of physical observed samples obtained from the degraded image (21-23).

FIGURE...: (a) Noise-free, no cutoff; (b) Noisy, C = 100; (c) Noise-free, C = 200; (d) Noisy, C = 75; (e) Noise-free, C = 150; (f) Noisy, C = 50.

Histogram Modification


The luminance histogram of a typical natural scene that has been linearly quantized is usually highly skewed toward the darker levels; a majority of the pixels possess a luminance less than the average. In such images, detail in the darker regions is often not perceptible. One means of enhancing these types of images is a technique called histogram modification, in which the original image is rescaled so that the histogram of the enhanced image follows some desired form. Andrews, Hall and others...
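A sketch of the most common special case, histogram equalization, where the desired form is a flat histogram and the normalized cumulative histogram supplies the rescaling function:

import numpy as np

def equalize(image_u8):
    # Map each gray level through the normalized cumulative histogram so the
    # enhanced image's histogram approximates a uniform (flat) form.
    hist = np.bincount(image_u8.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cmin = cdf[cdf > 0].min()
    lut = np.round(255.0 * (cdf - cmin) / (cdf[-1] - cmin)).clip(0, 255)
    return lut.astype(np.uint8)[image_u8]

dark = (np.random.rand(64, 64) ** 3 * 255).astype(np.uint8)  # skewed dark
enhanced = equalize(dark)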

Sampled Image Superposition And Convolution

Many applications in image processing require a discretization of the superposition integral relating the input and output continuous fields of a linear system. For example, image blurring by an optical system, sampling with a finite-area aperture or imaging through atmospheric turbulence, may be modeled by the superposition integral equation

G(x, y) = ∫∫ F(α, β) J(x, y; α, β) dα dβ (7.2-1a)

where F(x, y) and G(x, y) denote the input and output fields of a linear system, respectively, and the kernel J(x,...

Monochrome And Color Image Quantization

This section considers the subjective and quantitative effects of the quantization of monochrome and color images.

5.3.1. Monochrome Image Quantization

Monochrome images are typically input to a digital image processor as a sequence of uniform-length binary code words. In the literature, the binary code is often called a pulse code modulation (PCM) code. Because uniform-length code words are used for each image sample, the number of amplitude quantization levels is determined by the relationship...
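The relationship referred to is presumably L = 2^B levels for B-bit code words, a fact of uniform-length coding. A minimal sketch of uniform quantization to B bits, assuming unit-range input:

import numpy as np

def quantize(f, bits):
    # L = 2**bits uniform levels over [0, 1]; returns the requantized image.
    L = 2 ** bits
    codes = np.clip((f * L).astype(int), 0, L - 1)   # PCM code words
    return (codes + 0.5) / L                          # reconstruction levels

f = np.random.rand(32, 32)
f2 = quantize(f, 2)   # 4 levels: coarse, visible contouring
f8 = quantize(f, 8)   # 256 levels: visually continuous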

FIGURE 2.2-1 labels: nasal side; rods; cones; fovea; blind spot.

Figure 2.2-1 shows the horizontal cross section of a human eyeball. The front of the eye is covered by a transparent surface called the cornea. The remaining outer cover, called the sclera, is composed of a fibrous coat that surrounds the choroid, a layer containing blood capillaries. Inside the choroid is the retina, which is composed of two types of receptors: rods and cones. Nerves connecting to the retina leave the eyeball through the optic nerve bundle. Light entering the cornea is...

Color Vision Model

There have been many theories postulated to explain human color vision, beginning with the experiments of Newton and Maxwell (29-32). The classical model of human color vision, postulated by Thomas Young in 1802 (31), is the trichromatic model in which it is assumed that the eye possesses three types of sensors, each sensitive over a different wavelength band. It is interesting to note that there was no direct physiological evidence of the existence of three distinct types of sensors until about...

Mach Band Pattern

(b) Step chart intensity distribution

Because the differential of the logarithm of intensity is d(log I) = dI/I, equal changes in the logarithm of the intensity of a light can be related to equal just noticeable changes in its intensity over the region of intensities for which the Weber fraction is constant. For this reason, in many image processing systems, operations are performed on the logarithm of the intensity of an image point rather than the intensity. Mach Band. Consider the set of gray scale strips...

Image Stochastic Characterization

The following presentation on the statistical characterization of images assumes general familiarity with probability theory, random variables and stochastic processes. References 2 and 4 to 7 can provide suitable background. The primary purpose of the discussion here is to introduce notation and develop stochastic image models. It is often convenient to regard an image as a sample of a stochastic process. For continuous images, the image function F x, y, t is assumed to be a member of a...

Color Matching

The basis of the trichromatic theory of color vision is that it is possible to match an arbitrary color by superimposing appropriate amounts of three primary colors (10-14). In an additive color reproduction system such as color television, the three primaries are individual red, green and blue light sources that are projected onto a common region of space to reproduce a colored light. In a subtractive color system, which is the basis of most color photography and color printing, a white light...

Monochrome Vision Model

Human Spatial Frequency

One of the modern techniques of optical system design entails the treatment of an optical system as a two-dimensional linear system that is linear in intensity and can be characterized by a two-dimensional transfer function (17). Consider the linear optical system of Figure 2.4-1. The system input is a spatial light distribution obtained by passing a constant-intensity light beam through a transparency with a spatial sine-wave transmittance. Because the system is linear, the spatial output...

Acknowledgments

The following is a cumulative acknowledgment of all who have contributed to the four editions of Digital Image Processing. The first edition of this book was written while I was a professor of electrical engineering at the University of Southern California (USC). Image processing research at USC began in 1962 on a very modest scale, but the program increased in size and scope with the attendant international interest in the field. In 1971, Dr. Zohrab Kaprielian, then dean of engineering and vice...