Image Compression Using Discrete Cosine Transform & Discrete Wavelet Transform

A hybrid algorithm for efficient image compression and reconstruction using DCT and DWT

by Dr. Soniya*,

- Published in International Journal of Information Technology and Management, E-ISSN: 2249-4510

Volume 7, Issue No. 9, Aug 2014, Pages 0 - 0 (0)

Published by: Ignited Minds Journals


ABSTRACT

This paper describes an architecture for image compression based on the DCT and DWT standards. It is intended especially for the compression of images where a tolerable level of degradation is acceptable. The discrete cosine transform is a fast transform with fixed basis images, offering a good compromise between information packing ability and computational complexity. The DWT can be used to reduce the image size without losing much resolution, since coefficients below a pre-specified threshold can be discarded. The paper covers some background on wavelet analysis and data compression, explains how the DCT and DWT can be used for image compression, and proposes a hybrid DWT-DCT algorithm for image compression and reconstruction that takes benefit from the advantages of both transforms. The algorithm performs the Discrete Cosine Transform (DCT) on the Discrete Wavelet Transform (DWT) coefficients.

KEYWORD

image compression, discrete cosine transform, discrete wavelet transform, hybrid algorithm, resolution, wavelet analysis, data compression, computational complexity, information packing ability, DCT-DWT coefficients

1. INTRODUCTION

Image compression is very important for efficient transmission and storage of images. Images contain large amounts of information that requires much storage space, large transmission bandwidths and long transmission times. Therefore it is advantageous to compress the image by storing only the essential information needed to reconstruct the image. An image can be thought of as a matrix of pixel (or intensity) values. Image compression standards bring about many benefits, such as: (1) easier exchange of image files between different devices and applications; (2) reuse of existing hardware and software for a wider array of products; (3) existence of benchmarks and reference data sets for new and alternative developments. Digital image compression techniques can be divided into two classes: lossless and lossy compression. In lossless compression schemes, the reconstructed image, after compression, is numerically identical to the original image. Lossless image compression is particularly useful in applications such as image archiving (as in the storage of legal or medical records) and facsimile transmission. However, most of the applications today use lossy image compression because of its higher compression ratio compared with lossless image compression.

Fig. 1 Image compression model

The DCT is applied to blocks of 8×8 or 16×16 pixels, converting each block into a series of coefficients that describe its spectral composition. The transformer maps the input data into a format that reduces inter-pixel redundancies in the input image. Transform coding techniques use a reversible, linear mathematical transform to map the pixel values onto a set of coefficients, which are then quantized and encoded. The key factor behind the success of transform-based coding schemes is that many of the resulting coefficients for most natural images have small magnitudes and can be quantized without causing significant distortion in the decoded image. For compression purposes, the more a transform packs the information into a few coefficients, the better it is; for that reason, the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) have become the most widely used transform coding techniques. The 2-D DCT is a separable transform consisting of a Forward Discrete Cosine Transform (FDCT) and an Inverse Discrete Cosine Transform (IDCT). A small sketch of this block-transform idea is given below.
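As a minimal illustrative sketch (not the paper's implementation), the snippet below applies a separable 2-D DCT to a single 8×8 block using SciPy, quantizes the coefficients with the example JPEG luminance quantization table, and reconstructs the block with the inverse DCT; the smooth gradient block and the quantization step are assumptions for illustration only.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # Separable 2-D FDCT: 1-D DCT along columns, then along rows.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    # Inverse 2-D DCT (IDCT), also applied separably.
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# Example JPEG luminance quantization table (Annex K of the JPEG standard).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

pixels = np.add.outer(np.arange(8.0), np.arange(8.0)) * 16.0  # smooth 8x8 gradient block
shifted = pixels - 128.0                 # level shift as in JPEG

coeffs = dct2(shifted)                   # FDCT packs the energy into few coefficients
quantized = np.round(coeffs / Q)         # most high-frequency entries round to zero
reconstructed = idct2(quantized * Q) + 128.0

print("non-zero coefficients:", np.count_nonzero(quantized), "of 64")
print("max reconstruction error:", np.max(np.abs(reconstructed - pixels)))
```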

1.1 Wavelets and Compression

Wavelets are useful for compressing signals, but they also have far more extensive uses. They can be used to process and improve signals, and they are of particular use in fields such as medical imaging, where image degradation is not tolerated. They can also be used to remove noise from an image.


1.2 Essentials of wavelet-based compression

Before we look at some particular wavelet-based image compression algorithms, it is helpful to examine the wavelet transform of a sample image. The goal of wavelet-transform encoding is to take advantage of redundancy in the transformed image and obtain a good reconstruction upon decompression. The wavelet transform of a typical image exhibits a large amount of redundancy, visible both in terms of the grey background matching up at multiple resolutions (zero trees after thresholding) and in terms of non-zero values (significant values after thresholding) overlapping at multiple resolutions (a fractal-like aspect).

2. SOME BASIC COMPRESSION METHODS

2.1 The JPEG compression: The Joint Photographic Experts Group (JPEG) standard is a commonly used method of compression for photographic images. JPEG compression can be used in a variety of file formats:

  • EPS-files
  • EPS DCS-files
  • JFIF-files
  • PDF-files

Firstly, the image is partitioned into non-overlapping 8×8 blocks. The DCT is then applied to each block to convert the spatial-domain gray levels of the pixels into frequency-domain coefficients. After the DCT coefficients are computed, they are normalized according to a quantization table, whose scales are provided by the JPEG standard and derived from psycho-visual evidence. The quantized coefficients are rearranged in a zigzag scan order and further compressed by an efficient lossless coding algorithm such as run-length coding or Huffman coding. The process can be summarized as follows:

1. The image is first broken into 8×8 blocks of pixels.
2. The DCT is applied to each block, working from left to right, top to bottom.
3. Each block is compressed using a quantization table.
4. The array of compressed blocks that comprises the image is stored in a drastically reduced amount of space.
5. When desired, the image is reconstructed through the reverse process, known as decompression.

2.2 Run-Length Encoding (RLE): RLE stands for Run-Length Encoding. It is a lossless algorithm that only delivers decent compression ratios on specific types of data. It is a form of data compression in which runs (consecutive data elements holding the same value) are stored as a single data value and a count rather than as the original run. This is most useful on data that contains many such runs, for example simple graphic images such as icons, line drawings, and animations. For files that do not contain many runs it may actually increase the file size, so it is not useful there. A minimal coding sketch follows the file-format list below. RLE compression can be used in the following file formats:

  • TIFF files
  • PDF files
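As referenced above, a minimal sketch of run-length encoding and decoding for a sequence of pixel values is shown below; it illustrates the principle rather than the exact RLE variant used by any particular file format.

```python
def rle_encode(data):
    """Encode a sequence of values as (value, run_length) pairs."""
    encoded = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        encoded.append((data[i], run))
        i += run
    return encoded

def rle_decode(pairs):
    """Expand (value, run_length) pairs back into the original sequence."""
    decoded = []
    for value, run in pairs:
        decoded.extend([value] * run)
    return decoded

row = [255, 255, 255, 255, 0, 0, 255, 255, 255]   # e.g. one row of a line drawing
pairs = rle_encode(row)
assert rle_decode(pairs) == row
print(pairs)  # [(255, 4), (0, 2), (255, 3)]
```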

2.3 Huffman Coding: The Huffman compression algorithm was invented by David Huffman, formerly a professor at MIT. Huffman compression is a lossless compression algorithm that is ideal for compressing text or program files. This probably explains why it is used so often in compression programs such as ZIP or ARJ. Huffman encoding can be further optimized in two different ways (a minimal coding sketch follows the list below):

  • Adaptive Huffman coding dynamically changes the code words according to the changing probabilities of the symbols.
  • Extended Huffman coding encodes groups of symbols rather than single symbols, which is crucial for many image applications. The lossy image compression techniques play a major role in systems having limited transmission bandwidth and storage capacity.
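As a minimal sketch, assuming the symbols are byte values with known frequencies (not the specific variant used by any particular tool), a basic Huffman code can be built with Python's heapq module:

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(data):
    """Build a prefix code (symbol -> bit string) from symbol frequencies."""
    freq = Counter(data)
    tie = count()  # tie-breaker so heapq never has to compare tree nodes
    heap = [(f, next(tie), sym) for sym, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                    # degenerate case: one distinct symbol
        return {heap[0][2]: "0"}
    while len(heap) > 1:                  # merge the two least frequent nodes
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):       # internal node: (left, right)
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                             # leaf: a symbol
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

text = b"this is an example of huffman coding"
codes = huffman_code(text)
encoded = "".join(codes[b] for b in text)
print(len(encoded), "bits vs", 8 * len(text), "bits uncompressed")
```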

3. METHODOLOGIES USED FOR IMAGE COMPRESSION

3.1 THE DISCRETE COSINE TRANSFORM: The DCT attempts to decorrelate the image data; after decorrelation, each transform coefficient can be encoded independently without sacrificing compression efficiency. This section reviews the DCT and some of its important properties. The 2-D DCT of an N×N input block M(x, y), for x, y = 0, 1, ..., N-1, is given by

C(u, v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} M(x, y) \cos\!\left[\frac{(2x+1)u\pi}{2N}\right] \cos\!\left[\frac{(2y+1)v\pi}{2N}\right], \qquad \alpha(k) = \begin{cases} \sqrt{1/N}, & k = 0 \\ \sqrt{2/N}, & k > 0 \end{cases}

for u, v = 0, 1, 2, ..., N-1, where N is the size of the block that the DCT is applied to. The equation calculates one entry, the (u, v)-th, of the transformed block from the pixel values M(x, y) of the original block.
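As a small check on the definition above (a sketch, not the paper's code), the direct double-sum formula can be compared against SciPy's separable implementation on a random 8×8 block:

```python
import numpy as np
from scipy.fftpack import dct

def dct2_direct(block):
    """2-D DCT-II computed straight from the double-sum definition."""
    N = block.shape[0]
    alpha = np.full(N, np.sqrt(2.0 / N))
    alpha[0] = np.sqrt(1.0 / N)
    x = np.arange(N)
    out = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            cos_u = np.cos((2 * x + 1) * u * np.pi / (2 * N))
            cos_v = np.cos((2 * x + 1) * v * np.pi / (2 * N))
            out[u, v] = alpha[u] * alpha[v] * np.sum(block * np.outer(cos_u, cos_v))
    return out

block = np.random.default_rng(1).standard_normal((8, 8))
separable = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
assert np.allclose(dct2_direct(block), separable)  # both compute the same 2-D DCT
```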

3.2 DISCRETE WAVELET TRANSFORM (DWT)

The DWT represents an image as a sum of wavelet functions, known as wavelets, with different location and scale. The DWT separates the image data into a set of high-pass (detail) and low-pass (approximation) coefficients. The image is first divided into blocks of 32×32. Each block is then passed through the two filters: the first-level decomposition splits the input data into approximation and detail coefficients. After obtaining the transformed matrix, the detail and approximation coefficients are separated as LL, HL, LH, and HH coefficients. All the coefficients are discarded except the LL coefficients, which are transformed into the second level. The coefficients are then passed through a constant scaling factor to achieve the desired compression ratio. An illustration is shown in Fig. 2. Here, x[n] is the input signal, d[n] is the high-frequency component, and a[n] is the low-frequency component. For data reconstruction, the coefficients are rescaled, padded with zeros, and passed through the wavelet filters.

Fig. 2 Block diagram of the 2-level DWT scheme
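As an illustrative sketch of this two-level decomposition (using the PyWavelets package and a Haar wavelet as assumptions, since the paper does not fix a particular filter here), the LL sub-band is retained at each level and the detail sub-bands are discarded:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
block = rng.random((32, 32))              # one 32x32 image block (placeholder data)

# First-level 2-D DWT: approximation (LL) and detail sub-bands.
LL1, details1 = pywt.dwt2(block, 'haar')

# Discard the detail sub-bands and decompose the LL band a second time.
LL2, details2 = pywt.dwt2(LL1, 'haar')

print(block.shape, LL1.shape, LL2.shape)  # (32, 32) (16, 16) (8, 8)

# For reconstruction, the discarded sub-bands are replaced with zeros
# and the inverse transform is applied level by level.
LL1_rec = pywt.idwt2((LL2, (np.zeros_like(LL2),) * 3), 'haar')
block_rec = pywt.idwt2((LL1_rec, (np.zeros_like(LL1_rec),) * 3), 'haar')
```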

3.3 PROPOSED HYBRID DWT-DCT ALGORITHM:

The main objective of the presented hybrid DWT-DCT algorithm is to exploit the properties of both the DWT and the DCT. The input image is first decomposed using the 2-D DWT [9]. The low-frequency coefficients (LL) are passed to the next stage while the high-frequency coefficients (HL, LH, and HH) are discarded. The passed LL components are further decomposed using another 2-D DWT. The 8-point DCT is then applied to these DWT coefficients. By discarding the majority of the high-frequency coefficients, we can achieve high compression. To achieve further compression, a JPEG-like quantization is performed; in this stage, many of the higher-frequency components are rounded to zero. The quantized coefficients are further scaled using a scalar quantity known as the scaling factor (SF). Finally, the image is reconstructed by following the inverse procedure.
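A minimal sketch of this hybrid pipeline is given below; the wavelet ('haar'), the uniform quantization step, and the scaling factor value are assumptions for illustration, not the exact parameters used in the paper.

```python
import numpy as np
import pywt
from scipy.fftpack import dct, idct

def dct2(a):
    return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(a):
    return idct(idct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

def hybrid_compress(image, step=8.0, sf=1.0, wavelet='haar'):
    """Two-level DWT keeping only LL, then 8x8 block DCT + uniform quantization."""
    LL1, _ = pywt.dwt2(image, wavelet)     # level 1: keep LL, drop HL/LH/HH
    LL2, _ = pywt.dwt2(LL1, wavelet)       # level 2: keep LL again
    coeffs = np.zeros_like(LL2)
    for i in range(0, LL2.shape[0], 8):    # 8-point DCT on each block of LL2
        for j in range(0, LL2.shape[1], 8):
            coeffs[i:i+8, j:j+8] = dct2(LL2[i:i+8, j:j+8])
    return np.round(coeffs / step) * sf    # JPEG-like quantization + scaling factor

def hybrid_reconstruct(q, step=8.0, sf=1.0, wavelet='haar'):
    """Inverse of hybrid_compress: rescale, inverse DCT, then two inverse DWTs."""
    coeffs = (q / sf) * step
    LL2 = np.zeros_like(coeffs)
    for i in range(0, coeffs.shape[0], 8):
        for j in range(0, coeffs.shape[1], 8):
            LL2[i:i+8, j:j+8] = idct2(coeffs[i:i+8, j:j+8])
    zeros2 = (np.zeros_like(LL2),) * 3     # discarded detail bands padded with zeros
    LL1 = pywt.idwt2((LL2, zeros2), wavelet)
    zeros1 = (np.zeros_like(LL1),) * 3
    return pywt.idwt2((LL1, zeros1), wavelet)

image = np.random.default_rng(2).random((64, 64)) * 255
q = hybrid_compress(image)
recon = hybrid_reconstruct(q)
print(image.shape, recon.shape)            # (64, 64) (64, 64)
```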

4. EXPERIMENTAL RESULTS: EVALUATION CRITERIA

In this section, the performance of the algorithms is analyzed using two popular measures: compression ratio (CR) and peak signal-to-noise ratio (PSNR); the mean square error (MSE) is also considered. Images having the same PSNR value may still differ in perceptual quality.

4.1 Peak Signal-to-Noise Ratio (PSNR): PSNR is usually expressed on the logarithmic decibel scale. The PSNR is the measure most commonly used to assess the quality of reconstruction in image compression.

4.2 Compression ratio (CR)

The compression ratio is defined as the ratio between the size of the original image and the size of the compressed image:

CR = (size of original image) / (size of compressed image)

The resulting CR varies according to the image quality and the level of compression.

4.3 Mean Square Error (MSE):

MSE is also called squared error loss. MSE measures the average of the squares of the errors between the original and reconstructed images. The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator and its bias.
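As a small sketch using the usual definitions of these metrics for 8-bit images (the paper does not spell out the formulas), MSE, PSNR, and CR can be computed as follows:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between two images of the same shape."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak = 255 for 8-bit images."""
    err = mse(original, reconstructed)
    if err == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / err)

def compression_ratio(original_bits, compressed_bits):
    """CR = size of original representation / size of compressed representation."""
    return original_bits / compressed_bits
```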


5. CONCLUSION

This paper has presented a hybrid image compression scheme combining the DWT and the DCT algorithms under a high compression ratio constraint. The algorithm performs the DCT on the lowest-level DWT coefficients. The DCT takes advantage of redundancies in the data by grouping pixels with similar frequencies. This paper has concentrated on the development of an efficient and effective algorithm for still image compression. The results of this exhaustive simulation show consistently improved performance for the hybrid scheme compared to the JPEG-based DCT and the Daubechies-based DWT [12]. The new scheme performs better in a noisy environment and significantly reduces false contouring effects and blocking artifacts. The analysis shows that for a fixed level of distortion, the number of bits required to transmit the hybrid coefficients would be less than that required for other schemes. Our future work involves improving image quality by increasing the PSNR value and lowering the MSE value.

6. REFERENCES

1. G. Joy and Z. Xiang, "Reducing false contours in quantized color images," Computers and Graphics, Elsevier, vol. 20, no. 2, pp. 231–242, 1996.
2. R. Singh, V. Kumar, and H. K. Verma, "DWT-DCT hybrid scheme for medical image compression," Journal of Medical Engineering and Technology, vol. 31, pp. 109–122, 2007.
3. A. K. Jain, Fundamentals of Digital Image Processing. Prentice Hall Inc., 1989.
4. U. S. Mohammed and W. M. Abd-elhafiez, "Image coding scheme based on object extraction and hybrid transformation technique," Int. Journal of Engineering Science and Technology, vol. 2, no. 5, pp. 1375–1383, 2010.
5. T.-H. Yu and S. K. Mitra, "Wavelet based hybrid image coding scheme," in Proc. IEEE Int. Circuits and Systems Symp., vol. 1, 1997, pp. 377–380.
6. R. K. Rao and P. Yip, Discrete Cosine Transform: Algorithms, Advantages and Applications. NY: Academic, 1990.
7. J. D. Kornblum, "Using JPEG quantization tables to identify imagery processed by software," Digital Forensic Workshop, Elsevier, pp. 21–25, 2008.
8. Suchitra Shrestha and Khan Wahid, "Hybrid DWT-DCT algorithm for biomedical image and video compression applications," Proc. of the 10th IEEE International Conference on
9. U. S. Mohammed, "Highly scalable hybrid image coding scheme," Digital Signal Processing, Science Direct, vol. 18, pp. 364–374, 2008.
10. S. Singh, V. Kumar, and H. K. Verma, "DWT-DCT hybrid scheme for medical image compression," J Med Eng Technol, vol. 31, no. 2, pp. 109–122, 2007.
11. K. A. Wahid, M. A. Islam, S. S. Shimu, M. H. Lee, and S. Ko, "Hybrid architecture and VLSI implementation of the Cosine-Fourier-Haar transforms," Circuits, Systems, and Signal Processing, vol. 29, no. 6, pp. 1193–1205, 2010.
12. I. Daubechies, Ten Lectures on Wavelets. SIAM, 1992.
13. Abhishek Kr. Srivastav and Swapna Subudhiray, "Implementation of hybrid DWT-DCT algorithm for image compression," Volume 2, Issue 2 (February 2012), ISSN: 2249-3905.
14. Andrew B. Watson, NASA Ames Research, "Image compression using the Discrete Cosine Transform," Mathematica Journal, 4(1), 1994, pp. 81–88.
15. http://en.wikipedia.org/wiki/Discrete_wavelet_transform
16. http://en.wikipedia.org/Image_compression
17. Ken Cabeen and Peter Gent, "Image Compression and the Discrete Cosine Transform," Math 45, College of the Redwoods.