Image Classification

The pixel values in a digital or scanned analog photograph are the result of spectral reflectance measurements by the sensor or film. Thus, they carry detailed quantitative information about the spectral characteristics of the surfaces or objects depicted, usually in an image depth of 8 or 12 bits per band. No human interpreter is able to appraise the subtle differences that can be recorded with such high radiometric resolution. Also, the small patterns and fine spatial details presented by different ground-cover types often make it impractical, if not impossible, to map their distribution by hand. Image classification techniques aim at the automatic identification of cover types and features using statistical and knowledge-based decision rules (Mather, 2004; Lillesand et al., 2008).

Classical methods of image classification are based on the recognition of spectral patterns, analyzing solely the brightness values of the individual pixels across multispectral bands. However, while spectral reflectance (color) is the most important quality distinguishing individual ground-cover types in an image, it is by no means the only one; shape, size, pattern, texture, and context may also play important roles in identifying features (see Chapter 10). Elements of spatial pattern recognition, knowledge-based rules, and object-oriented segmentation techniques have gained in importance during the last decade since very high-resolution satellite imagery has become available. Rather than looking at individual pixels only, these techniques analyze groups or patches of pixels in their spatial context.

Well-established methods exist both for unsupervised and supervised multispectral classification. The challenge for the analyst is to describe the object classes that are desired for the final map in terms of well-distinguished spectral classes. In spite of the reputation of small-format aerial photographs as uncomplicated, easy-to-use image data, this may be surprisingly difficult. One of the great assets of SFAP, its high spatial resolution, is also a drawback for multispectral image classifiers. The fine details of color, texture, light, and shadow that are recorded for the objects with GSDs usually in the centimeter range often bring about within-class variance that may be higher than the spectral differences between classes. In addition, reflectance values in the visible spectrum are much more highly correlated with each other than in the longer wavelengths recordable with electronic scanners operating in the infrared spectrum. And finally, brightness variations within the image (vignetting effects, multiview-angle effects, shadowing) and between images (exposure differences, viewing-angle differences) are more prominent in SFAP images than in imagery from satellites or metric large-format cameras at high altitudes.
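The separability problem described above can be made concrete with a small numerical experiment; the two brightness distributions below are hypothetical stand-ins for samples of two cover types in a very high-resolution image:

```python
import numpy as np

# Hypothetical single-band brightness samples for two ground-cover classes.
# At centimeter GSD, sunlit and shadowed parts of the same shrub canopy can
# span a wide brightness range, so the within-class spread rivals the gap
# between the class means.
rng = np.random.default_rng(0)
shrub = rng.normal(loc=80, scale=30, size=500)   # large within-class variance
soil = rng.normal(loc=120, scale=30, size=500)

# Between-class separation relative to within-class spread (a simple
# Fisher-style criterion): values near or below 1 signal poor separability.
between = (shrub.mean() - soil.mean()) ** 2
within = shrub.var() + soil.var()
print(between / within)
```

With these parameters the criterion comes out near 1, i.e., the class means differ by barely more than the typical within-class scatter, which is exactly the situation that defeats purely spectral per-pixel classifiers.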

Accordingly, studies employing automatic classification techniques for small-format images are rare compared to those using visual interpretation, manual mapping, and photogrammetric measuring. Some researchers use simple thresholding procedures to classify just two or three classes. Marani et al. (2006), for example, successfully extracted small channel structures from low-altitude helium-balloon photographs for a multi-scale approach to mapping geomorphological patterns in the intertidal zone of the Venice Lagoon. In a study that used SFAP taken from an unmanned aerial vehicle (UAV) for validating low-resolution MODIS satellite observations of Arctic surface albedo variations, Tschudi et al. (2008) also used thresholding for the digital images, as the gray-scale nature of their scenes made more complicated multispectral methods dispensable. They classified three categories (ice cover, melt ponds, and open water) in order to estimate accurate pond coverage in a given region and verify the results of the MODIS image analysis. For another example of this approach involving vegetation, see Chapter 15.
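As a sketch of this kind of two-threshold gray-scale classification, the following fragment assigns each pixel to one of three classes; the threshold values and the tiny test image are hypothetical, not taken from Tschudi et al. (2008):

```python
import numpy as np

# Hypothetical brightness thresholds separating the three categories.
WATER_MAX = 60    # open water: darkest pixels
POND_MAX = 160    # melt ponds: intermediate; brighter pixels are ice

# Tiny stand-in for a gray-scale aerial image (8-bit brightness values).
gray = np.array([[ 20,  50, 120],
                 [130, 200, 240],
                 [ 45, 150, 230]], dtype=np.uint8)

# np.digitize maps each pixel to 0 (water), 1 (pond), or 2 (ice).
classes = np.digitize(gray, bins=[WATER_MAX, POND_MAX])

# Class areas (pixel counts) then give, e.g., the pond coverage fraction.
pond_fraction = np.mean(classes == 1)
```

In practice the thresholds would be chosen from the image histogram or from reference measurements, but the principle stays this simple whenever two or three classes separate cleanly in brightness.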

Classification techniques may be used to derive not only qualitative, but also quantitative information from images. Figure 11-16 illustrates the procedure employed for deriving detailed maps of percentage vegetation cover in a study on vegetation development and geomorphodynamics on abandoned fields in northeastern Spain (Marzolff, 1999, 2003; Ries, 2003). Film (35 mm) photographs taken from a hot-air blimp were georectified to 2.5 cm GSD (Fig. 11-16A) and classified into 15-20 spectral classes with an unsupervised ISODATA clustering algorithm. These classes were then allocated to percentage vegetation cover based on field observations and close-range reference images (Fig. 11-16B). The aim was to produce a vegetation map with distinct zones that representatively described the cover patterns in a pre-defined classification scheme. Therefore, the classified dataset was filtered with a 5 x 5 mean filter (Fig. 11-16C), recoded into five cover classes, and again filtered with a 7 x 7 majority filter (Fig. 11-16D; see also Marzolff, 1999).

FIGURE 11-16 Procedure for classification of vegetation cover shown with subset of a hot-air blimp photograph taken by IM and JBR near Maria de Huerva, Province of Zaragoza, Spain, April 1996. Field of view ~ 6 m across. (A) Georectified image. (B) Classified percentage cover (white = 0%, black = 100%). (C) Mean filtered classification. (D) Final map in five vegetation cover classes: 0 to <5%, 5 to <30%, 30 to <60%, 60 to <90%, and 90-100%. Adapted from Marzolff (1999, figs. 6-4 and 6-5).
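The post-classification steps of this workflow (mean filtering, recoding into five cover classes, majority filtering) can be sketched with NumPy alone; the random raster standing in for the classified percentage-cover image, and the array sizes, are hypothetical:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mean_filter(a, size):
    # Moving average; the output shrinks by size-1 pixels (no edge padding).
    return sliding_window_view(a, (size, size)).mean(axis=(-2, -1))

def majority_filter(a, size, n_classes):
    # Assign each pixel the most frequent class in its size x size window.
    win = sliding_window_view(a, (size, size))
    out = np.empty(win.shape[:2], dtype=a.dtype)
    for i in range(win.shape[0]):
        for j in range(win.shape[1]):
            out[i, j] = np.bincount(win[i, j].ravel(),
                                    minlength=n_classes).argmax()
    return out

# Hypothetical percentage-cover raster standing in for the classified image.
rng = np.random.default_rng(1)
cover = rng.uniform(0, 100, size=(20, 20))

smoothed = mean_filter(cover, 5)                        # cf. Fig. 11-16C
classes = np.digitize(smoothed, bins=[5, 30, 60, 90])   # recode: classes 0..4
final = majority_filter(classes.astype(np.intp), 7, 5)  # cf. Fig. 11-16D
```

The mean filter smooths the per-pixel cover estimates into representative zones, and the majority filter cleans up isolated pixels after recoding, mirroring the generalization steps described for Figure 11-16.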

Often, the microstructural details depicted by the high resolution of SFAP, together with shadowing effects (cast shadow, internal vegetation canopy shadowing), necessitate more sophisticated classification procedures even for "simple" object classes. Lesschen et al. (2008), for example, found that the supervised maximum likelihood method was the only one that could distinguish bare soil from vegetation patches in vertical SFAP of a semiarid environment and also classify the shaded areas correctly. With the resulting binary maps of distinct patterns, they calculated spatial metrics for investigating the spatial heterogeneity in vegetation and soil properties after land abandonment in southeastern Spain. The example in Figure 11-17A shows that heavily shadowed images (compare with Fig. 11-12A) may still be difficult to classify correctly. Using an additional ratio band for suppressing shadowing effects significantly improved the results in this case (Fig. 11-17B). Although the ratio-classified image still shows some misclassifications at the edges of cast shadows, shrub and grass cover in shaded areas are classified much better, and the pine tree canopy appears less frayed.
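A minimal sketch of the two ingredients named above, a shadow-suppressing band ratio and Gaussian maximum likelihood classification, might look as follows; the class statistics and sample pixels are hypothetical and would in practice be estimated from training areas:

```python
import numpy as np

def band_ratio(red, green, eps=1e-6):
    # Band ratios largely cancel illumination differences: a shaded and a
    # sunlit pixel of the same surface have a similar green/red ratio.
    return green / (red + eps)

def max_likelihood_classify(features, means, covs):
    # Gaussian maximum likelihood: assign each pixel (row of `features`)
    # to the class whose multivariate normal log-density is highest.
    scores = []
    for mean, cov in zip(means, covs):
        d = features - mean                          # (N, n_bands)
        inv = np.linalg.inv(cov)
        mahal = np.einsum('ni,ij,nj->n', d, inv, d)  # Mahalanobis distances
        logdet = np.linalg.slogdet(cov)[1]
        scores.append(-0.5 * (mahal + logdet))
    return np.argmax(scores, axis=0)

# A shaded and a sunlit sample of the same surface: brightness differs
# threefold, but the green/red ratio is nearly identical.
shaded = band_ratio(np.array([30.0]), np.array([42.0]))
sunlit = band_ratio(np.array([100.0]), np.array([140.0]))

# Hypothetical class statistics; band 1 = brightness, band 2 = ratio band.
means = [np.array([40.0, 1.4]),     # vegetation: dark, high green/red ratio
         np.array([150.0, 1.0])]    # bare soil: bright, ratio near 1
covs = [np.diag([400.0, 0.04]), np.diag([400.0, 0.04])]

pixels = np.array([[45.0, 1.3],     # shaded shrub
                   [140.0, 1.05]])  # sunlit soil
labels = max_likelihood_classify(pixels, means, covs)
```

Adding the ratio as an extra feature lets the classifier separate a dark shaded shrub from bright bare soil even though their brightness values alone would be ambiguous under variable illumination.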

Little work has been done so far with object-oriented classification of SFAP images, although improvements can be expected from these techniques with respect to the difficulties associated with the high image resolution. Promising results with object-oriented segmentation techniques applied to digital camera images were achieved by Rango et al. (2006) and Laliberte et al. (2007) for classification of mixed rangeland vegetation at the Jornada Experimental Range in New Mexico, United States, and by Dunford et al. (2009) for quantifying vegetation units in a Mediterranean riparian forest environment.
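At its simplest, an object-based approach first groups adjacent pixels into segments and then analyzes these segments rather than single pixels. The following sketch uses plain connected-component labeling of a hypothetical binary vegetation mask as a stand-in for the far more elaborate multi-scale segmentation of dedicated object-oriented software, and derives patch metrics of the kind computed by Lesschen et al. (2008):

```python
import numpy as np
from collections import deque

def label_patches(mask):
    # 4-connected component labeling: groups adjacent vegetation pixels
    # into patches, the basic "objects" of object-based analysis.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                queue = deque([(i, j)])
                while queue:  # breadth-first flood fill of one patch
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

# Hypothetical binary vegetation mask (True = vegetation).
mask = np.array([[1, 1, 0, 0, 1],
                 [1, 0, 0, 1, 1],
                 [0, 0, 0, 0, 0],
                 [1, 1, 0, 0, 0]], dtype=bool)

labels, n_patches = label_patches(mask)
sizes = np.bincount(labels.ravel())[1:]   # pixels per patch
```

Once pixels are grouped into patches, per-object attributes (size, shape, mean spectral values, texture) become available as classification features, which is precisely the advantage the studies cited above exploit.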
