## First-Order Derivative Edge Detection

There are two fundamental methods for generating first-order derivative edge gradients. One method involves generation of gradients in two orthogonal directions in an image; the second utilizes a set of directional derivatives.

### 15.2.1. Orthogonal Gradient Generation

An edge in a continuous domain edge segment F(x, y), such as the one depicted in Figure 15.1-2a, can be detected by forming the continuous one-dimensional gradient G(x, y) along a line normal to the edge slope, which is at an angle...
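The orthogonal-gradient idea can be sketched in a few lines of Python. This is only an illustration using simple one-pixel differences as the two orthogonal derivative operators, not any specific mask from the text:

```python
import math

def gradient(image):
    """Return gradient magnitude and angle arrays for a 2-D list `image`."""
    rows, cols = len(image), len(image[0])
    mag = [[0.0] * cols for _ in range(rows)]
    ang = [[0.0] * cols for _ in range(rows)]
    for r in range(rows - 1):
        for c in range(cols - 1):
            g_row = image[r + 1][c] - image[r][c]  # vertical difference
            g_col = image[r][c + 1] - image[r][c]  # horizontal difference
            mag[r][c] = math.hypot(g_row, g_col)   # gradient magnitude
            ang[r][c] = math.atan2(g_row, g_col)   # angle normal to the edge
    return mag, ang

# A vertical step edge: the magnitude peaks where the step occurs.
step = [[0, 0, 10, 10]] * 4
mag, ang = gradient(step)
```

The magnitude is largest along the column containing the step and zero in the flat regions, which is the behavior any practical edge detector builds on.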

## References

1. Green, "Recent Developments in Digital Image Processing at the Image Processing Laboratory at the Jet Propulsion Laboratory," Proc. IEEE, 60, 7, July 1972, 821-828.
2. M. M. Sondhi, "Image Restoration: The Removal of Spatially Invariant Degradations," Proc. IEEE, 60, 7, July 1972, 842-853.
3. H. C. Andrews, "Digital Image Restoration: A Survey," IEEE Computer, 7, 5, May 1974, 36-45.
4. B. R. Hunt, "Digital Image Processing," Proc. IEEE, 63, 4, April 1975, 693-708.
5. H. C....

FIGURE 12.1-3. Gain correction of a CCD camera image.

## Display Point Nonlinearity Correction

Correction of an image display for point luminance nonlinearities is identical in principle to the correction of point luminance nonlinearities of an image sensor. The procedure illustrated in Figure 12.1-4 involves distortion of the binary coded image luminance variable B to form a corrected binary coded luminance function B̃ so that the displayed luminance C̃ will be linearly proportional to B. In this formulation, the display may include a photographic record of a displayed light field. The...
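As a minimal sketch of the predistortion idea, assume the display's point nonlinearity is a simple power law C = B^γ (the chapter's formulation is more general, and the exponent here is illustrative). Applying the inverse power before display makes the displayed luminance linear in B:

```python
GAMMA = 2.2  # assumed display exponent, for illustration only

def predistort(b, gamma=GAMMA):
    """Map a normalized code value b in [0, 1] to its corrected value."""
    return b ** (1.0 / gamma)

def display(b_corrected, gamma=GAMMA):
    """Model the display's point nonlinearity."""
    return b_corrected ** gamma

# The cascade display(predistort(b)) is linear in b.
b = 0.25
c = display(predistort(b))
```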

## Sensor Point Nonlinearity Correction

In imaging systems in which the source degradation can be separated into cascaded spatial and point effects, it is often possible to compensate directly for the point degradation [7]. Consider a physical imaging system that produces an observed image field F_O(x, y) according to the separable model

F_O(x, y) = O_Q{ O_D[ C(x, y, λ) ] }   (12.1-1)

Digital Image Processing: PIKS Scientific Inside, Fourth Edition, by William K. Pratt. Copyright © 2007 by John Wiley & Sons, Inc.

## Image Feature Evaluation

There are two quantitative approaches to the evaluation of image features: prototype performance and figure of merit. In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using various image features to be evaluated. The classification error is then measured for each feature...
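The prototype performance approach can be sketched numerically. Here a single hypothetical feature separates two region classes with a threshold (the feature values, labels and threshold are all invented for illustration), and the classification error scores the feature:

```python
def classification_error(features, labels, threshold):
    """Fraction of prototype regions misclassified by thresholding one feature."""
    wrong = sum(1 for f, y in zip(features, labels) if (f > threshold) != y)
    return wrong / len(labels)

# Hypothetical feature values for regions labeled True ("class A") or False.
features = [0.2, 0.3, 0.8, 0.9, 0.4, 0.7]
labels = [False, False, True, True, False, True]
err = classification_error(features, labels, threshold=0.5)
```

A feature that yields a lower error on the independently categorized prototype is, by this criterion, the better feature.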

## Finite-Area Superposition And Convolution

Mathematical expressions for finite-area superposition and convolution are developed below for both series and vector-space formulations.

### 7.1.1. Finite-Area Superposition and Convolution: Series Formulation

Let F(n1, n2) denote an image array for n1, n2 = 1, 2, ..., N. For notational simplicity, all arrays in this chapter are assumed square. In correspondence with Eq. 1.2-6, the image array can be represented at some point (m1, m2) as a sum of amplitude weighted Dirac delta functions by the discrete...
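The series formulation amounts to a direct evaluation of the convolution sum. A minimal sketch follows, with full-output boundary handling so that an N-point image and L-point impulse response give an (N + L - 1)-point result:

```python
def conv2d_full(f, h):
    """Direct series evaluation of finite-area convolution of square arrays."""
    n, l = len(f), len(h)
    m = n + l - 1
    g = [[0.0] * m for _ in range(m)]
    for m1 in range(m):
        for m2 in range(m):
            for n1 in range(n):
                for n2 in range(n):
                    j1, j2 = m1 - n1, m2 - n2
                    if 0 <= j1 < l and 0 <= j2 < l:
                        g[m1][m2] += f[n1][n2] * h[j1][j2]
    return g

# Convolving with a unit impulse reproduces the image.
f = [[1, 2], [3, 4]]
g = conv2d_full(f, [[1]])
```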

## Camera Imaging Model

The imaging model utilized in the preceding section to derive the perspective transformation assumed, for notational simplicity, that the center of the image plane was coincident with the center of the world reference coordinate system. In this section, the imaging model is generalized to handle physical cameras used in practical imaging geometries [18]. This leads to two important results: a derivation of the fundamental relationship between an object and image point, and a means of changing a...
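A sketch of the underlying perspective transformation, written in the standard pinhole form with an image-plane center offset (cx, cy) standing in for a center not coincident with the world origin; the symbols and sign conventions here are generic, not necessarily those of the chapter's derivation:

```python
def project(point, f=1.0, cx=0.0, cy=0.0):
    """Project a world point (X, Y, Z) onto the image plane of a pinhole camera."""
    X, Y, Z = point
    if Z == 0:
        raise ValueError("point lies in the plane of the center of projection")
    return (f * X / Z + cx, f * Y / Z + cy)

x, y = project((2.0, 4.0, 2.0), f=1.0, cx=0.5, cy=0.5)
```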

## Light Perception

Light, according to Webster's Dictionary [1], is "radiant energy which, by its action on the organs of vision, enables them to perform their function of sight." Much is known about the physical properties of light, but the mechanisms by which light interacts with the organs of vision are not as well understood. Light is known to be a form of electromagnetic radiation lying in a relatively narrow region of the electromagnetic spectrum, over a wavelength band of about 350 to 780 nanometers (nm). A...

## Generalized Dilation

FIGURE 14.4-3. Generalized dilation computed by Minkowski addition.

With reference to Eq. 7.1-7, the spatial limits of the union combination are

MAX(1, j - L + 1) ≤ m ≤ MIN(N, j)   (14.4-5a)
MAX(1, k - L + 1) ≤ n ≤ MIN(N, k)   (14.4-5b)

Equation 14.4-4 provides an output array that is justified with the upper left corner of the input array. In image processing systems, it is often convenient to center the input and output images and...
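A sketch of generalized dilation for binary arrays, with summation limits in the spirit of Eq. 14.4-5 (0-based indices here, and the output justified with the upper left corner of the input, as described for Eq. 14.4-4):

```python
def dilate(f, h):
    """Generalized dilation of a binary image f by structuring element h."""
    n, l = len(f), len(h)
    size = n + l - 1
    g = [[0] * size for _ in range(size)]
    for j in range(size):
        for k in range(size):
            hit = 0
            # 0-based analog of MAX(1, j - L + 1) <= m <= MIN(N, j)
            for m in range(max(0, j - l + 1), min(n, j + 1)):
                for p in range(max(0, k - l + 1), min(n, k + 1)):
                    if f[m][p] and h[j - m][k - p]:
                        hit = 1
            g[j][k] = hit
    return g

# Dilating a single pixel by a 2x2 structuring element grows it to a 2x2 block.
f = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
g = dilate(f, [[1, 1], [1, 1]])
```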

## Pseudoinverse Spatial Image Restoration

The matrix pseudoinverse defined in Appendix 1 can be used for spatial image restoration of digital images when it is possible to model the spatial degradation as a vector-space operation on a vector of ideal image points yielding a vector of physical observed samples obtained from the degraded image [21-23].

FIGURE... [panel titles: (a) noise-free, no cutoff; (b) noisy, C = 100; (c) noise-free, C = 200; (d) noisy, C = 75; (e) noise-free, C = 150; (f) noisy, C = 50]
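A toy numerical sketch of the vector-space model: an illustrative 3x3 blur matrix B (not from the text) degrades the ideal image vector f, and the Moore-Penrose pseudoinverse supplies a least-squares restoration:

```python
import numpy as np

B = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])  # illustrative blur matrix
f = np.array([4.0, 2.0, 1.0])    # ideal image vector
g = B @ f                        # observed (degraded) samples
f_hat = np.linalg.pinv(B) @ g    # restored estimate
```

Because this B happens to be invertible and the observation is noise-free, the estimate is exact; with noise or a rank-deficient B, the pseudoinverse yields the minimum-norm least-squares solution instead.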

## Histogram Modification

The luminance histogram of a typical natural scene that has been linearly quantized is usually highly skewed toward the darker levels; a majority of the pixels possess a luminance less than the average. In such images, detail in the darker regions is often not perceptible. One means of enhancing these types of images is a technique called histogram modification, in which the original image is rescaled so that the histogram of the enhanced image follows some desired form. Andrews, Hall and others...
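One widely used form of histogram modification is histogram equalization, in which the cumulative histogram serves as the rescaling function. A minimal sketch for a list of integer gray levels:

```python
def equalize(pixels, levels=256):
    """Rescale gray levels so the output histogram is approximately flat."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    return [round((levels - 1) * cdf[p] / n) for p in pixels]

# A dark-skewed image is spread toward the full output range.
dark = [10, 10, 20, 20, 20, 30]
out = equalize(dark)
```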

## Sampled Image Superposition And Convolution

Many applications in image processing require a discretization of the superposition integral relating the input and output continuous fields of a linear system. For example, image blurring by an optical system, sampling with a finite-area aperture or imaging through atmospheric turbulence may be modeled by the superposition integral equation

G(x, y) = ∫∫ F(α, β) J(x, y; α, β) dα dβ   (7.2-1a)

where F(x, y) and G(x, y) denote the input and output fields of a linear system, respectively, and the kernel J(x,...
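For a space-invariant system, discretizing the superposition integral reduces to sampling the continuous impulse response on a grid and replacing the integral by a weighted sum. A sketch using an assumed Gaussian blur kernel, normalized so the discrete weights sum to one:

```python
import math

def gaussian_kernel(radius, sigma):
    """Sample a continuous Gaussian impulse response on a (2r+1)^2 grid."""
    k = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(sum(row) for row in k)  # normalizer approximating the kernel integral
    return [[v / s for v in row] for row in k]

k = gaussian_kernel(radius=1, sigma=1.0)
total = sum(sum(row) for row in k)
```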

## Monochrome And Color Image Quantization

This section considers the subjective and quantitative effects of the quantization of monochrome and color images.

### 5.3.1. Monochrome Image Quantization

Monochrome images are typically input to a digital image processor as a sequence of uniform-length binary code words. In the literature, the binary code is often called a pulse code modulation (PCM) code. Because uniform-length code words are used for each image sample, the number of amplitude quantization levels is determined by the relationship...
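The relationship between code-word length and quantization levels can be illustrated directly: B-bit uniform-length code words allow 2^B amplitude levels. A sketch of a uniform (mid-riser) quantizer over a unit range:

```python
def quantize(sample, bits):
    """Quantize a sample in [0, 1) to one of 2**bits uniform levels."""
    levels = 2 ** bits
    index = min(int(sample * levels), levels - 1)  # the PCM code word value
    return index / levels + 1.0 / (2 * levels)     # mid-riser reconstruction

q = quantize(0.40, bits=3)  # 8 levels, step size 0.125
```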

## Eye Physiology

Figure 2.2-1 shows the horizontal cross section of a human eyeball. The front of the eye is covered by a transparent surface called the cornea. The remaining outer cover, called the sclera, is composed of a fibrous coat that surrounds the choroid, a layer containing blood capillaries. Inside the choroid is the retina, which is composed of two types of receptors: rods and cones. Nerves connecting to the retina leave the eyeball through the optic nerve bundle. Light entering the cornea is...

## Color Vision Model

There have been many theories postulated to explain human color vision, beginning with the experiments of Newton and Maxwell [29-32]. The classical model of human color vision, postulated by Thomas Young in 1802 [31], is the trichromatic model, in which it is assumed that the eye possesses three types of sensors, each sensitive over a different wavelength band. It is interesting to note that there was no direct physiological evidence of the existence of three distinct types of sensors until about...

## Contrast Sensitivity

(b) Step chart intensity distribution.

Because the differential of the logarithm of intensity is d(log I) = dI/I, equal changes in the logarithm of the intensity of a light can be related to equal just noticeable changes in its intensity over the region of intensities for which the Weber fraction is constant. For this reason, in many image processing systems, operations are performed on the logarithm of the intensity of an image point rather than the intensity.

Mach Band. Consider the set of gray scale strips...
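The log-intensity observation can be checked numerically: with a constant Weber fraction c, each just noticeable step multiplies the intensity by (1 + c), so successive steps are equally spaced on a logarithmic scale (the value of c below is illustrative):

```python
import math

c = 0.02               # assumed constant Weber fraction
intensities = [1.0]
for _ in range(5):
    intensities.append(intensities[-1] * (1 + c))  # one just noticeable step
log_steps = [math.log(b) - math.log(a)
             for a, b in zip(intensities, intensities[1:])]
```

Every entry of log_steps equals log(1 + c), which is why equal increments in log intensity correspond to equal just noticeable differences.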

## Image Stochastic Characterization

The following presentation on the statistical characterization of images assumes general familiarity with probability theory, random variables and stochastic processes. References 2 and 4 to 7 can provide suitable background. The primary purpose of the discussion here is to introduce notation and develop stochastic image models. It is often convenient to regard an image as a sample of a stochastic process. For continuous images, the image function F(x, y, t) is assumed to be a member of a...
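The stochastic view can be made concrete with a toy model: each realization of the process is a small array of independent Gaussian pixels (an assumed distribution, purely for illustration), and ensemble statistics are estimated by averaging over realizations at a fixed point:

```python
import random

random.seed(0)  # reproducible sampling

def realization(rows, cols, mean=128.0, spread=10.0):
    """One sample image drawn from an assumed Gaussian pixel model."""
    return [[random.gauss(mean, spread) for _ in range(cols)]
            for _ in range(rows)]

# Estimate the ensemble mean at pixel (0, 0) over many realizations.
samples = [realization(2, 2)[0][0] for _ in range(2000)]
estimate = sum(samples) / len(samples)
```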

## Color Matching

The basis of the trichromatic theory of color vision is that it is possible to match an arbitrary color by superimposing appropriate amounts of three primary colors [10-14]. In an additive color reproduction system, such as color television, the three primaries are individual red, green and blue light sources that are projected onto a common region of space to reproduce a colored light. In a subtractive color system, which is the basis of most color photography and color printing, a white light...
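The additive matching idea has a simple linear-algebra form: if a 3x3 matrix P summarizes how the three primaries contribute to the three tristimulus components (the values below are invented for illustration), the primary amounts t needed to match a target color c solve P t = c:

```python
import numpy as np

P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.6, 0.1],
              [0.1, 0.2, 0.7]])  # columns: red, green, blue primaries (invented)
c = np.array([0.5, 0.4, 0.3])   # target color components (invented)
t = np.linalg.solve(P, c)       # amounts of each primary
match = P @ t                   # reproduced color
```

A negative entry in t would indicate a color that cannot be matched additively with these primaries, which is the usual motivation for allowing a primary to be added to the target side of the match instead.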

## Monochrome Vision Model

One of the modern techniques of optical system design entails the treatment of an optical system as a two-dimensional linear system that is linear in intensity and can be characterized by a two-dimensional transfer function [17]. Consider the linear optical system of Figure 2.4-1. The system input is a spatial light distribution obtained by passing a constant-intensity light beam through a transparency with a spatial sine-wave transmittance. Because the system is linear, the spatial output...
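The defining property of such a linear system, that a spatial sine wave emerges as a sine wave of the same frequency with only its amplitude (and possibly phase) changed, can be demonstrated numerically. The transfer-function value of 0.5 at the chosen frequency is an arbitrary example:

```python
import numpy as np

N = 64
x = np.arange(N)
freq = 4                                 # cycles across the aperture
inp = np.sin(2 * np.pi * freq * x / N)   # sine-wave "transmittance" input
H = np.zeros(N)
H[freq] = H[N - freq] = 0.5              # transfer function: attenuate by half
out = np.fft.ifft(np.fft.fft(inp) * H).real
```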

## Acknowledgments

The following is a cumulative acknowledgment of all who have contributed to the four editions of Digital Image Processing. The first edition of this book was written while I was a professor of electrical engineering at the University of Southern California (USC). Image processing research at USC began in 1962 on a very modest scale, but the program increased in size and scope with the attendant international interest in the field. In 1971, Dr. Zohrab Kaprielian, then dean of engineering and vice...