Report On Basic Concepts of Digital Image Processing
Exploring the Power of Digital Image Processing
by V. Praveena Kumara*,
- Published in Journal of Advances in Science and Technology, E-ISSN: 2230-9659
Volume 12, Issue No. 25, Dec 2016, Pages 112 - 114 (3)
Published by: Ignited Minds Journals
ABSTRACT
Pictures are the most common and convenient means of conveying or transmitting information. A picture is worth a thousand words. Pictures concisely convey information about positions, sizes and inter-relationships between objects. They portray spatial information that we can recognize as objects. Human beings are good at deriving information from such images because of our innate visual and mental abilities. About 75% of the information received by humans is in pictorial form. In the present context, we consider the analysis of pictures that employ an overhead perspective, including radiation not visible to the human eye. Thus our discussion will focus on the analysis of remotely sensed images. These images are represented in digital form. When represented as numbers, brightness can be added, subtracted, multiplied, divided and, in general, subjected to statistical manipulations that are not possible if an image is presented only as a photograph. Although digital analysis of remotely sensed data dates from the early days of remote sensing, the launch of the first Landsat earth observation satellite in 1972 began an era of increasing interest in machine processing. Previously, digital remote sensing data could be analyzed only at specialized remote sensing laboratories. The specialized equipment and trained personnel necessary to conduct routine machine analysis of data were not widely available, in part because of the limited availability of digital remote sensing data and a lack of appreciation of their qualities.
KEYWORD
digital image processing, pictures, conveying information, spatial information, remotely sensed images, brightness, statistical manipulations, remote sensing data, machine processing, digital remote sensing data
INTRODUCTION
Fusion involves combining images having different spatial resolutions. These images have different pixel sizes, which creates problems while merging. Hence the image data are resampled to a common pixel spacing and map projection. The common pixel spacing should be the one required in the desired fused image. Besides, all sensor-specific corrections and enhancements of the image data have to be applied prior to image fusion. After the fusion process the contribution of each sensor cannot be distinguished or quantified. Therefore, as a general rule, one must first attempt to produce the best single-sensor geometry and radiometry and then fuse the images. Any spatial enhancement performed prior to image fusion is of benefit to the resulting fused image.
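To make the resampling step concrete, the following is a minimal sketch of nearest-neighbour resampling of a single band to a new pixel spacing; the pixel sizes, array dimensions and function name are illustrative assumptions, and a real workflow would also reproject to a common map projection (typically with a library such as GDAL).

```python
import numpy as np

def nearest_neighbour_resample(band, src_pixel_size, dst_pixel_size):
    """Resample one band to a new pixel spacing by nearest-neighbour look-up.

    Illustrative only: map projection handling is omitted.
    """
    scale = src_pixel_size / dst_pixel_size
    rows, cols = band.shape
    new_rows, new_cols = int(rows * scale), int(cols * scale)
    # For each output pixel, pick the nearest source pixel.
    r_idx = (np.arange(new_rows) / scale).astype(int)
    c_idx = (np.arange(new_cols) / scale).astype(int)
    return band[np.ix_(r_idx, c_idx)]

# e.g. bring a hypothetical 20 m band onto the 10 m grid of a finer image
coarse = np.random.randint(0, 256, size=(300, 300), dtype=np.uint8)
resampled = nearest_neighbour_resample(coarse, src_pixel_size=20, dst_pixel_size=10)
```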
2. CONTENT
2.1. Digital Image
A digital remotely sensed image is typically composed of picture elements (pixels) located at the intersection of each row i and column j in each of the K bands of imagery. Associated with each pixel is a number known as the Digital Number (DN) or Brightness Value (BV), which depicts the average radiance of a relatively small area within a scene (refer to Fig. 1). A smaller number indicates low average radiance from the area, and a higher number indicates high radiance from the area. The size of this area affects the reproduction of detail within the scene: as the pixel size is reduced, more scene detail is preserved in the digital representation.
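A minimal sketch of this representation is given below, assuming the image is held as a NumPy array indexed as (band k, row i, column j) with 8-bit digital numbers; the scene dimensions and DN values are illustrative only.

```python
import numpy as np

# Hypothetical scene: K = 4 spectral bands, 100 rows, 100 columns,
# with 8-bit digital numbers (DN / brightness values) in 0..255.
K, rows, cols = 4, 100, 100
image = np.random.randint(0, 256, size=(K, rows, cols), dtype=np.uint8)

# DN of the pixel at row i = 10, column j = 20 in band k = 2.
dn = image[2, 10, 20]
print(f"DN at (row 10, col 20, band 2): {dn}")

# A lower DN indicates low average radiance from the ground area covered
# by that pixel; a higher DN indicates higher radiance.
```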
2.2. Digital Image Data Formats
The image data acquired from remote sensing systems are stored in different types of formats, viz. (1) band sequential (BSQ), (2) band interleaved by line (BIL), and (3) band interleaved by pixel (BIP). It should be noted, however, that each of these formats is usually preceded on the digital tape by "header" and/or "trailer" information, which consists of ancillary data useful when geometrically or radiometrically correcting the data. The data are normally recorded on nine-track CCTs with a data density on the tape of 800, 1600, or 6250 bits per inch (bpi).
2.2.1. Band Sequential Format
The band sequential format requires that all data for a single band covering the entire scene be written as one file. Thus if one wanted the area in the center of a scene in four bands, it would be necessary to read into this location in four separate files to extract the desired information. Many researchers like this format because it is not necessary to read "serially" past unwanted information if certain bands are of no value. The number of tapes may be dependent on the number of bands provided for the scene.
2.2.2. Band Interleaved by Line Format
In this format, the data for the bands are written line by line onto the same tape (i.e. line 1 band 1, line 1 band 2, line 1 band 3, line 1 band 4, etc.). It is a useful format if all the bands are to be used in the analysis. If some bands are not of interest, the format is inefficient since it is necessary to read serially past all the unwanted data.
2.2.3. Band Interleaved by Pixel Format
In this format, the data for a pixel in all bands are written together. Taking the example of Landsat MSS (four bands of image data), every element in the matrix has four pixel values (one from each spectral band) placed one after the other [i.e., pixel (1,1) of band 1, pixel (1,1) of band 2, pixel (1,1) of band 3, pixel (1,1) of band 4, then pixel (1,2) of band 1, pixel (1,2) of band 2, and so on]. Again, this is a practical data format if all bands are to be used; otherwise it is inefficient. This format is no longer popular, but it was used extensively by the EROS Data Center for Landsat scenes in the early years.
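The three interleaving schemes differ only in the order in which the same DNs are written to the file. The following is a minimal sketch, assuming raw headerless files, four bands of 512 x 512 pixels, and 8-bit DNs (real products would also carry the header/trailer records mentioned above); the file paths and dimensions are placeholders.

```python
import numpy as np

bands, rows, cols = 4, 512, 512   # illustrative scene dimensions
dtype = np.uint8                  # assumed 8-bit DNs, no header/trailer

def read_bsq(path):
    # Band sequential: all of band 1, then all of band 2, ...
    flat = np.fromfile(path, dtype=dtype)
    return flat.reshape(bands, rows, cols)

def read_bil(path):
    # Band interleaved by line: line 1 of each band, then line 2 of each band, ...
    flat = np.fromfile(path, dtype=dtype)
    return flat.reshape(rows, bands, cols).transpose(1, 0, 2)

def read_bip(path):
    # Band interleaved by pixel: all band values of pixel (1,1), then pixel (1,2), ...
    flat = np.fromfile(path, dtype=dtype)
    return flat.reshape(rows, cols, bands).transpose(2, 0, 1)

# Each reader returns the same (bands, rows, cols) cube regardless of the
# interleaving used on disk.
```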
3. SOFTWARE CONSIDERATIONS
Digital image processing is an extremely broad subject and involves procedures which are mathematically complex. The procedures for digital image processing may be categorized into the following types of computer-assisted operations.
3.1. Image Rectification:
These operations aim to correct distorted or degraded image data to create a more faithful representation of the original scene. This typically involves the initial processing of raw image data to correct for geometric distortion and to calibrate the data radiometrically. These are often termed preprocessing operations because they normally precede the manipulation and analysis of the image data.
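As a hedged illustration of the radiometric side of preprocessing, the sketch below converts DNs to at-sensor radiance with a simple linear gain/offset model; the gain and offset values are invented placeholders, not those of any particular sensor, and would normally be taken from the image header.

```python
import numpy as np

def dn_to_radiance(dn_band, gain, offset):
    """Linear radiometric calibration: L = gain * DN + offset.

    gain/offset are sensor- and band-specific; the values used below
    are placeholders for illustration only.
    """
    return gain * dn_band.astype(np.float32) + offset

dn_band = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
radiance = dn_to_radiance(dn_band, gain=0.05, offset=1.2)  # assumed values
```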
3.2. Image Enhancement:
These procedures are applied to image data in order to display the data more effectively for subsequent visual interpretation. Image enhancement involves techniques for increasing the visual distinction between features in a scene. The objective is to create new images from the original data in order to increase the amount of information that can be visually interpreted from the data. It includes level slicing, contrast stretching, spatial filtering, edge enhancement, spectral ratioing, principal components and intensity-hue-saturation (IHS) color space transformations.
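A minimal sketch of one of these techniques, a linear contrast stretch, is given below; it maps the 2nd and 98th percentile DNs of a band onto the full 0-255 display range. The percentile cut-offs are an assumption for illustration, not prescribed by the text.

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    # Find the DN values at the chosen percentiles.
    low, high = np.percentile(band, [low_pct, high_pct])
    # Map [low, high] linearly onto [0, 255] and clip the tails.
    stretched = (band.astype(np.float32) - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Hypothetical low-contrast band whose DNs occupy only part of the 0..255 range.
band = np.random.randint(30, 120, size=(512, 512), dtype=np.uint8)
display_band = linear_stretch(band)
```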
3.3. Image Classification:
The objective of these operations is to replace visual analysis of the image data with quantitative techniques for automating the identification of features in a scene. This involves the analysis of multispectral image data and the application of statistically based decision rules for determining the land cover identity of each pixel in an image. The intent of the classification process is to categorize all pixels in a digital image into one of several land cover classes or themes. The classified data may then be used to produce thematic maps of the land cover present in an image.
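As one concrete example of a statistically based decision rule, the sketch below applies a minimum-distance-to-means classifier; the class means are invented stand-ins for values that would normally be derived from training areas, and the band count and class names are assumptions.

```python
import numpy as np

def minimum_distance_classify(image, class_means):
    """Assign each pixel to the land cover class whose mean spectral
    vector is closest in Euclidean distance.

    image:       (bands, rows, cols) DN cube
    class_means: (n_classes, bands) mean vectors from training areas
    """
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).T.astype(np.float32)      # (n_pixels, bands)
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(rows, cols)             # class index per pixel

# Hypothetical 4-band scene and three land cover classes.
image = np.random.randint(0, 256, size=(4, 256, 256), dtype=np.uint8)
class_means = np.array([[30, 25, 20, 10],     # "water": dark in all bands
                        [40, 60, 50, 180],    # "vegetation": bright in NIR
                        [90, 95, 100, 110]],  # "urban": similar in all bands
                       dtype=np.float32)
thematic_map = minimum_distance_classify(image, class_means)
```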
4. COLOR COMPOSITES
When displaying the different bands of a multispectral data set, if the images obtained in different bands are displayed in image planes other than their own, the resulting color composite is regarded as a False Color Composite (FCC). High spectral resolution is important when producing color composites. For a true color composite, the image data acquired in the red, green and blue spectral regions must be assigned to the red, green and blue planes of the image processor frame buffer memory. A color infrared composite (the 'standard false color composite') is displayed by placing the infrared, red and green bands in the red, green and blue frame buffer memory, respectively. In this composite, healthy vegetation shows up in shades of red because vegetation absorbs most of the green and red energy but reflects approximately half of the incident infrared energy. Urban areas reflect roughly equal proportions of NIR, red and green energy, and therefore appear steel grey.
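The following is a minimal sketch of assembling the 'standard false color composite' described above, assuming separate 8-bit NIR, red and green band arrays of equal size; the band arrays and dimensions are placeholders, and any contrast stretching for display is omitted.

```python
import numpy as np

def false_color_composite(nir, red, green):
    """Standard false color composite: NIR -> red gun, red -> green gun,
    green -> blue gun. Healthy vegetation therefore appears in shades of red."""
    return np.dstack([nir, red, green]).astype(np.uint8)   # (rows, cols, 3) RGB image

# Hypothetical 8-bit bands of equal size.
rows, cols = 512, 512
nir   = np.random.randint(0, 256, size=(rows, cols), dtype=np.uint8)
red   = np.random.randint(0, 256, size=(rows, cols), dtype=np.uint8)
green = np.random.randint(0, 256, size=(rows, cols), dtype=np.uint8)
fcc = false_color_composite(nir, red, green)
```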
Figure 2: False Color Composite (FCC 4, 2, 1) of LISS II Poanta Area
5. CONCLUSION
One way that would appear to ensure adequate accuracy assessment at the pixel level of specificity would be to compare the land cover classification at every pixel in an image with a reference source. While such "wall to wall" comparisons may have value in research situations, assembling reference land cover information for an entire project area is expensive and defeats the whole purpose of performing a remote sensing based classification in the first place.

Random sampling of pixels circumvents the above problem, but it is plagued with its own set of limitations. First, collection of reference data for a large sample of randomly distributed points is often very difficult and costly; for example, travel distance and access to random sites might be prohibitive. Second, the validity of random sampling depends on the ability to register the reference data precisely to the image data, which is often difficult to do. One way to overcome this problem is to sample only pixels whose identity is not influenced by potential registration errors (for example, points at least several pixels away from field boundaries). Another consideration is making certain that the randomly selected test pixels or areas are geographically representative of the data set under analysis. Simple random sampling tends to undersample small but potentially important areas. Stratified random sampling, where each land cover category may be considered a stratum, is frequently used in such cases.

Clearly, the sampling approach appropriate for an agricultural inventory would differ from that of a wetlands mapping activity. Each sample design must account for the area being studied and the cover types being classified. One common means of accomplishing random sampling is to overlay the classified output data with a grid. Test cells within the grid are then selected randomly, and groups of pixels within the test cells are evaluated. The cover types present are determined through ground verification (or other reference data) and compared to the classification data. Several papers have been written about the proper sampling scheme to be used for accuracy assessment under various conditions, and opinions vary among researchers.
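As a hedged sketch of the stratified random sampling idea discussed above, the code below draws a fixed number of test pixel locations from each class (stratum) of a classified image; the class labels, sample size and image are illustrative, and the ground reference check that would follow is only indicated in a comment.

```python
import numpy as np

def stratified_sample(classified, n_per_class, seed=0):
    """Pick n_per_class random pixel locations from each land cover class
    (stratum) of a classified image."""
    rng = np.random.default_rng(seed)
    samples = {}
    for cls in np.unique(classified):
        rows, cols = np.nonzero(classified == cls)
        idx = rng.choice(len(rows), size=min(n_per_class, len(rows)), replace=False)
        samples[int(cls)] = list(zip(rows[idx], cols[idx]))
    return samples

# Hypothetical classified output with three land cover classes.
classified = np.random.randint(0, 3, size=(200, 200))
test_sites = stratified_sample(classified, n_per_class=50)
# Each sampled location would then be checked against ground reference data
# and tallied into a confusion (error) matrix.
```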
Corresponding Author V. Praveena Kumara*
M.Sc. Applied Geology & Geo-Informatics, 3rd Sem, Department of Geology, Central University of Karnataka, Kadaganchi Village, Dist. Kalburgi, Karnataka, India
E-Mail – praveenakumarav@gmail.com