Exploring Edge Preservation for Multi-Exposure Image Fusion
by Ankita Pandey*, Dr. Soni Changlani
- Published in Journal of Advances in Science and Technology, E-ISSN: 2230-9659
Volume 18, Issue No. 2, Sep 2021, Pages 88 - 93 (6)
Published by: Ignited Minds Journals
ABSTRACT
Recent computational photography techniques play a significant role in dealing with the wide dynamic range of real-world scenes, which contain both brightly lit and poorly lit areas. An important part of many of these methods is the integration of data from several photographs taken at varying exposures. One such technique is high-dynamic-range (HDR) imaging, which can reconstruct radiance maps from pictures acquired with conventional imaging hardware such as a camera or scanner. For a long time, HDR image technology has struggled with the dynamic range (DR) restrictions of traditional displays and printing methods: these devices cannot faithfully reproduce the whole dynamic range. Tone mapping can compress the dynamic range, but it demands additional processing power. For this reason, it is desirable to optimize the content of an image synthesized from a set of multi-exposure pictures without computing an HDR radiance map and tone mapping it. The purpose of this work is to develop a novel approach to multi-exposure image fusion that takes advantage of the edge-preservation capabilities of adaptive filters. Recently, there has been a surge of interest in exposure fusion. Rather than designing an entirely new algorithm from scratch, we leverage the edge-preservation properties of non-linear filters to identify weak textures that should be considered in exposure fusion, construct the corresponding weight maps, and investigate further possibilities for improving detail.
KEYWORDS
computational photography, multi-exposure image fusion, high-dynamic-range imaging, tonemapping, adaptive filters
INTRODUCTION
A natural scene commonly exhibits a wide range of luminance: the luminance of direct sunlight is about 10^5 cd/m², that of an indoor environment around 10^2 cd/m², and that of starlight approximately 10^-3 cd/m². The dynamic range of a single photograph is much smaller than that of a real scene due to the technical constraints of imaging devices. Light, weather, and the height of the sun, among other things, all influence the shooting conditions, and both over- and underexposure are common. Since the scene's bright and dark areas cannot both be captured in a single snapshot, the resulting picture quality is often subpar; the dynamic ranges of existing imaging hardware, display monitors, and the human eye's response to natural surroundings simply do not match one another. The dynamic range of image detectors may be expanded in two main ways: through hardware design and through software techniques. The former requires modifying the CCD or CMOS detector, and possibly introducing a new optical modulation device; for example, one camera design uses a series of mirrors to split the aperture into several sections, while another measures the static gradient rather than the static intensity and quantizes the differences to record HDR photographs. However, this approach is costly and impractical, despite its direct potential to improve exposure efficiency and image quality. Alternatively, some researchers use software to reconstruct an HDR picture from its component exposures via the camera response function (CRF); tone mapping (TM) then allows the HDR picture to be shown on a regular screen. Others apply multi-exposure fusion (MEF) directly, without intermediate steps such as camera-curve calibration, HDR reconstruction, or tone mapping, and the resulting picture is still full of detail and vibrant color, as demonstrated in Figure 1. MEF technology, in contrast to the first route, offers a straightforward, low-cost, and effective way to resolve the incompatibility between high-dynamic-range (HDR) capture and low-dynamic-range (LDR) display. By avoiding complex imaging hardware circuitry, the device's overall size, weight, and power consumption may all be reduced. Its practical importance and ability to enhance picture quality are unquestionable.
Figure 1: Illustration of multi-exposure image fusion.
MEF is one member of a broader family of image fusion tasks that includes multi-focus image fusion, visible and infrared image fusion, PET and MRI medical image fusion, multispectral and panchromatic remote sensing image fusion, hyperspectral and multispectral image fusion, and optical and synthetic aperture radar (SAR) image fusion. By fusing data from several source images, these methods create superior pictures showing more of the crucial details. The primary distinction between MEF and the other fusion tasks lies in the source images: elsewhere the sources come from different sensors or focus settings, whereas the photos in a MEF set differ only in their degree of exposure. There are other applications as well, such as improving visibility in low-light images by combining real or synthetically generated exposures. More than 30 years of research and hundreds of scientifically relevant articles have gone into understanding MEF, and recent years have seen a rise in both the quantity and the quality of the novel approaches proposed. An ideal MEF technique would perform reliably in both static and dynamic scenes, recover just the right amount of exposure, deliver clear pictures, and keep the computational cost low, particularly when dealing with high-resolution photographs; the design and implementation of a MEF algorithm is therefore a difficult research problem. In this article, we examine the current state of MEF research and offer an outlook on where the field is headed. The key points of this analysis are briefly summarized below. First, a thorough analysis of existing MEF techniques is presented: in light of recent developments in the area, MEF approaches may be broadly classified into three subfields, namely spatial-domain techniques, transform-domain techniques, and deep learning techniques, and several approaches for deghosting MEF in dynamic scenes are also discussed. Second, a comprehensive performance assessment is carried out: 18 different MEF methods are compared on multiple sets of representative source photos using nine objective fusion metrics, and their behavior in both static and dynamic scenes is analyzed. Relevant materials, such as the original photos, the fusion outputs, and the associated curves, have been uploaded with download links supplied. Finally, suggestions for further investigation and possible research directions are proposed.
LITERATURE REVIEW
Fan Huang (2018) presented a new color multi-exposure image fusion method as a solution to the problem of lost visual details and washed-out colors. In the proposed method, an image patch is broken down into its constituent components, which are then processed individually for contrast extraction, structure preservation, and intensity adjustment. To keep the essential structure and control overpowering intensity, the study used three weight metrics: local weight, global weight, and saliency weight. With these weights in place, the final fused picture is guided not only by the exposure level of a single image but also by the relative exposure levels of photographs taken under varying lighting conditions. As a final step, the three components are recombined and the relevant regions are reconstructed into the fused picture. In comparison with standard patch algorithms, the method preserves more information from the original sequence of input pictures thanks to the use of three weight maps, and it outperforms state-of-the-art exposure fusion methods in terms of the aesthetic quality of the resulting fused photos.

Ying Huang (2020) provided a MEF method that uses signal decomposition to fix the problems of lost detail and shifted hues inherent to multi-exposure image fusion. The HybridHDR approach is improved through a signal-decomposition procedure based on independent component analysis (ICA). Because the luminance channel dominates fusion quality in MEF, separate fusion algorithms are used for the luminance and chrominance channels. The study turns images of fluctuating brightness into a sequence of one-dimensional signals and uses ICA to perform the signal decomposition, so that more information is recovered and preserved in the final image. Combining HybridHDR with ICA extracts even more characteristics from the multi-exposure sequence, allowing a higher-quality fused image. The approach was shown experimentally to improve the quality of the resulting fusion picture, and under certain conditions it preserves more information than competing methods while retaining the color palette of the original exposures.

Ting Nie et al. (2021) observed that existing multi-exposure fusion (MEF) techniques produce a lot of noise in the fused picture when applied to gray photographs taken in low light. To deal with this, the source images are first decomposed by latent low-rank representation (LatLRR) into low-rank parts and saliency parts, and the two parts are then fused separately in a Laplacian multi-scale space. Based on the features of low-light gray photographs, two different weight maps are generated, and an energy equation is developed to find the optimum value of the weight component. An enhanced guided-filtering approach with an adjustable regularization factor is presented to refine the weight maps and prevent artifacts. The fused low-rank and saliency portions are then inverse-transformed to produce the final high-dynamic picture. Experimental findings show that the technique provides better subjective and objective results than the current state-of-the-art multi-exposure fusion methods for gray pictures in low-illumination imaging.

Yuma Kinoshita (2019) provided a new technique for adjusting brightness for use in multi-exposure picture fusion.
Two novel methods for scene segmentation based on the brightness distribution are also offered as adjustments. Multi-exposure image fusion takes photographs with varying exposures and combines them into one; the resulting image should be more informative and visually attractive than any of its parts. However, when the input shots do not span a sufficient range of exposure levels, existing fusion algorithms often produce hazy fused images. The research demonstrates that adjusting the brightness of the input photographs can significantly increase the quality of the fused images, and a solution based on this insight is suggested. The approach enables high-quality photographs to be produced despite poor-quality inputs, and visual comparison shows that it can provide images that accurately depict the full scene. The technique also outperforms state-of-the-art fusion algorithms in terms of the MEF structural similarity index, discrete entropy, the tone-mapped image quality index, and statistical naturalness.

Kede Ma (2018) introduced a fresh objective quality measure, the color MEF structural similarity (MEF-SSIMc) index, and applied it to multi-exposure image fusion (MEF). The proposed design is a radical departure from the status quo: instead of specifying a rigid computational framework for MEF beforehand (e.g., multiresolution transformation and transform-domain fusion followed by image reconstruction), the method searches the whole image space directly for the picture that maximizes MEF-SSIMc. The authors first enhance and extend the existing MEF-SSIM approach before developing the MEF-SSIMc index. A gradient-ascent-based method is then outlined, which starts from arbitrary points in the image space and iteratively moves in the direction that improves MEF-SSIMc until convergence. Numerical and subjective evaluations demonstrate that the method is largely independent of the source material, and the optimization framework can easily be adapted to develop better MEF algorithms as new objective quality models for MEF become available.
DATA ACQUISITION AND TWO-LAYER DECOMPOSITION
Scene Data Acquisition
Traditional digital photography uses a short exposure time (a low exposure level) to capture the brightest parts (highlights) of a high-contrast scene and a long exposure time (a high exposure level) to capture its darkest parts (shadows). The number of exposures taken at different settings is crucial to the amount of information contained in the fused LDR image, as is the case with the present ADF method. Capturing multiple exposures requires a tripod to eliminate blurring from spatial or global movement. To properly apply the ADF approach, a sequence of exposures must be taken of a scene containing both dark and bright regions. All of the shots in the series were taken with identical white balance, aperture, and ISO settings (aperture-priority mode), varying only the exposure time. A subset of the images that make up the input set is shown in Figure 1.
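For concreteness, the sketch below shows one way such a bracketed sequence might be loaded and small residual camera shake compensated in Python with OpenCV. The file names are hypothetical, and Median Threshold Bitmap (MTB) alignment is a common generic choice, not necessarily the registration step used in this work.

```python
import cv2

# Hypothetical file names for a bracketed sequence shot from a tripod
# with fixed white balance, aperture, and ISO (only exposure time varies).
paths = ["cave_under.jpg", "cave_mid.jpg", "cave_over.jpg"]
stack = [cv2.imread(p) for p in paths]

# MTB alignment removes small residual shifts between exposures;
# it writes the aligned images back into the list in place.
cv2.createAlignMTB().process(stack, stack)
```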
Figure 2: Proposed image-domain fusion framework (observation model), illustrating the conceptual framework of the ADF approach. Note that, for conceptual simplicity, the base layers (BLs) and detail layers (DLs) of only two input exposures are shown.
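To make the base-layer/detail-layer (BL/DL) split in Figure 2 concrete, here is a minimal sketch of one way to perform an edge-preserving two-layer decomposition using a self-guided guided filter. It assumes the opencv-contrib package (cv2.ximgproc) is installed, and the radius and eps values are illustrative defaults rather than the parameters of the ADF method.

```python
import cv2
import numpy as np

def two_layer_decomposition(img_bgr, radius=8, eps=0.04):
    """Split one exposure into a base layer (BL) and a detail layer (DL)
    using an edge-preserving guided filter with the image as its own guide."""
    img = img_bgr.astype(np.float32) / 255.0
    # Base layer: smoothed image that still keeps strong edges intact.
    base = cv2.ximgproc.guidedFilter(guide=img, src=img, radius=radius, eps=eps)
    # Detail layer: the fine textures removed by the smoothing.
    detail = img - base
    return base, detail
```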
Pyramid decomposition
Image coding relies heavily on pyramid decomposition. Images are decomposed into several spatial resolutions during the multi-resolution encoding process so that features may be extracted at each scale. Multi-scale edge estimation uses a pyramidal structure in which each level represents a different band of detail, such as progressively finer edges. A hybrid of predictive and transform techniques, pyramid decomposition combines the strengths of each, and to this day it is still widely used in computer graphics and image processing applications.
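The decomposition described above can be sketched in a few lines. This is the generic textbook Laplacian pyramid construction, not code from this paper: each level stores the band-pass detail lost between two adjacent Gaussian pyramid levels.

```python
import cv2

def laplacian_pyramid(img, levels=4):
    """Decompose an image into band-pass detail levels plus a low-pass residual."""
    gauss = [img.astype("float32")]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    pyr = []
    for i in range(levels):
        # Upsample the coarser level and subtract: the difference is the
        # edge detail that lives only at this spatial scale.
        up = cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
        pyr.append(gauss[i] - up)
    pyr.append(gauss[-1])  # low-pass residual at the top of the pyramid
    return pyr
```

Reconstruction simply reverses the process, repeatedly upsampling from the residual and adding back each detail level; this invertibility is what makes the pyramid attractive for fusing weight-blended levels.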
Weight calculation
Each pixel of an image contributes to the fusion according to its weight. Using quality measures including contrast, saturation, and well-exposedness, a multi-exposure image sequence is translated into one scalar-valued weight map per exposure. When working with HDR scenes, the images are assumed to be correctly aligned and registered, and detail should be preserved in both the overexposed (bright) and the underexposed (dark) sections of the picture. Saturation of colors: saturation refers to how intensely a color is represented in an image; a picture's saturation indicates the degree to which its colors are undiluted, and it is denoted by S, whether the aim is to detect oversaturated or undersaturated photographs.
Figure 3: Framework for the proposed dynamic refined adaptive weight-map (DRAW) method
The saturation of colors is what makes a photograph vivid. The saturation S of each pixel is measured as the standard deviation of its R, G, and B channel values. Contrast in color or brightness is what differentiates one object from another: an object stands out because its color or brightness differs from the surrounding elements. The contrast measure, denoted C, therefore emphasizes edges and textures. Well-exposedness, denoted E, evaluates how close each pixel intensity lies to the mid-range: intensities near zero (underexposure) or one (overexposure) are penalized by comparing them against a Gaussian curve centered at 0.5. A multi-exposure set thus yields one weight map per shot, and a better fused image can be achieved by adjusting the relative contributions of the pixels, since local contrast and brightness both affect the perceived weight. The local contrast of each pixel may be calculated using the following equation:
A_n(x, y) = I_n(x, y) * h_n(x, y)
The convolution operation is indicated by the symbol *. Here I_n is the weighted sum of the nth source image's red, green, and blue channels (its grayscale version), and h_n is the filter kernel. Once the local contrast has been obtained, using the contrast measure to determine whether a pixel is under- or over-exposed greatly facilitates the preservation of fine details; the decision threshold is a fixed value determined by the application. The three measures are combined as a weighted product of power functions:

w_{i,j,k} = (C_{i,j,k})^{w_c} · (S_{i,j,k})^{w_s} · (E_{i,j,k})^{w_e}

where w_c, w_s, and w_e are the exponents controlling the influence of contrast, saturation, and well-exposedness, respectively. If the weight w is 0 when picking the pixel at position (i, j) from the kth picture, that pixel is not selected. Figure 3 depicts the framework for the proposed dynamic refined adaptive weight (DRAW) approach.
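Under the definitions above, a minimal Mertens-style weight map for one exposure can be written as follows. The exponent values and the Gaussian sigma are illustrative defaults, not the tuned DRAW parameters.

```python
import cv2
import numpy as np

def weight_map(img_bgr, wc=1.0, ws=1.0, we=1.0, sigma=0.2):
    """Per-pixel weight w = C^wc * S^ws * E^we for a single exposure."""
    img = img_bgr.astype(np.float32) / 255.0
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Contrast C: magnitude of the Laplacian response, i.e. the
    # convolution A_n(x, y) = I_n(x, y) * h_n(x, y) from the equation above.
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))

    # Saturation S: standard deviation across the R, G, B channels.
    saturation = img.std(axis=2)

    # Well-exposedness E: Gaussian curve penalising intensities near
    # 0 (underexposed) and 1 (overexposed) in each channel.
    exposedness = np.prod(np.exp(-((img - 0.5) ** 2) / (2 * sigma**2)), axis=2)

    return (contrast**wc) * (saturation**ws) * (exposedness**we) + 1e-12

# The N per-exposure maps are then normalised to sum to one at every pixel:
# maps = [weight_map(im) for im in stack]
# total = sum(maps)
# maps = [m / total for m in maps]
```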
EXPERIMENTAL RESULTS OF MULTI-EXPOSURE IMAGES
In this part, we evaluate the proposed method against current best practice for fusing multiple-exposure images. Objective and relative quantitative quality measures, such as structural fidelity, naturalness, and an image quality rating, are used; many experts in the field of IQA have created models for the subjective metric. A total of five different sets of multi-exposure images were used in the tests, each with at least ten exposures. The recommended DRAW method is contrasted with alternative published methods, including guided filter fusion (GFF), Laplacian pyramid image fusion (LAP), the curvelet transform (CVT), the stationary wavelet transform (SWT), the non-subsampled contourlet transform, the generalized random walk (GRW), the wavelet-based statistical sharpness measure (WSSM), and higher-order singular value decomposition (HOSVD).
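As one concrete example of the objective side of this evaluation, the sketch below computes discrete entropy of a fused result. This is the standard Shannon definition, offered for illustration rather than the exact evaluation code used in these experiments.

```python
import cv2
import numpy as np

def discrete_entropy(img_bgr):
    """Shannon entropy (bits) of the gray-level histogram; higher values
    usually indicate that more detail survived the fusion."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # ignore empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())
```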
Figure 4: Multiple-exposure images with the proposed DRAW. (a), (b), (c) are the multiple-exposure images; (d) is the fused image produced by the proposed DRAW.
As can be seen in Figure 4, three images of a cave are used as input, one underexposed and the other two overexposed. The first picture shows some texture, and in the second image the path is clearly visible. By adjusting the relative importance of individual pixels, it is possible to create a merged image in which the cave's natural texture is preserved. Images (a) and (b) in Figure 5 are examples of creative imagery, and (c) shows the result of combining the two.
Figure 6: Resultant output of DRAW compared with existing methods: (i) averaging method, (ii) the proposed DRAW method, (iii) Mertens method.
The 15-frame multiple-exposure sequence is shown in Figure 6(a). Figure 6(b)(i) shows the result of simply averaging the pixel weights, Figure 6(b)(ii) shows the result of the proposed Dynamic Refined Adaptive Weight (DRAW) method, and Figure 6(b)(iii) shows the outcome of the Mertens approach.
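For reference, the two baselines in Figure 6 can be reproduced roughly as follows. This sketch uses plain averaging and OpenCV's built-in Mertens fusion with hypothetical file names; it is not the DRAW implementation itself.

```python
import cv2
import numpy as np

# Hypothetical aligned multiple-exposure frames of the same scene.
stack = [cv2.imread(p) for p in ["seq_01.jpg", "seq_02.jpg", "seq_03.jpg"]]

# (i) Naive averaging: every exposure gets equal weight at every pixel,
# so saturated and underexposed pixels dilute the result.
avg = np.mean([im.astype(np.float32) for im in stack], axis=0).astype(np.uint8)

# (iii) Mertens exposure fusion: OpenCV's built-in per-pixel weighting by
# contrast, saturation, and well-exposedness; returns a float image in [0, 1].
fused = cv2.createMergeMertens().process(stack)
fused_8u = np.clip(fused * 255, 0, 255).astype(np.uint8)
```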
CONCLUSION
In this research, we use a multi-resolution projection (MRP) method in conjunction with a weighted least squares (WLS) optimization framework to provide three methods for generating a high-detail image from multiple exposures. While existing exposure fusion techniques rely on multi-resolution or single-resolution analysis, the proposed methods achieve a more seamless fusion of textures. The frameworks are inspired by the edge-retaining capacity of anisotropic diffusion (AD) and the responsiveness of the guided filter (GF) near strong edges. Fine textures are extracted using an edge-preserving two-layer decomposition for improved clarity. Comprehensive tests on a variety of multi-exposure photographs show that a higher level of performance is achieved compared with the frequently used image fusion methodologies. The proposed method generally outperforms rival approaches in terms of ease of use, low cost, computational robustness, and capacity for adaptive image fusion. Accordingly, the current study provides a useful reference for the wide range of multi-exposure fusion methods that use complicated factors to calculate weights. Combining the enhanced weighting function with the earlier methods yields only a coarse-grained guided image filter (GIF); future work will therefore aim to extend the proposed approach with further algorithms for enhancing fusion performance.
REFERENCES
1. Yongqing Huo, Fan Yang, Vincent Brost. https://doi.org/10.1155/2017/168564
2. Huang, Y. Multi-exposure image fusion method based on independent component analysis. 2020.
3. Nie, T.; Huang, L.; Liu, H.; Li, X. Multi-exposure fusion of gray images under low illumination based on low-rank decomposition. Remote Sens. 2021, 13, 204.
4. Kinoshita, Y.; Kiya, H. Scene segmentation-based luminance adjustment for multi-exposure image fusion. IEEE Trans. Image Process. 2019, 28, 4101–4115.
5. Ma, K.; Wang, Z. Multi-exposure image fusion: A patch-wise approach. 2018.
6. Li, H.; Ma, K.; Yong, H.; Zhang, L. Fast multi-scale structural patch decomposition for multi-exposure image fusion. IEEE Trans. Image Process. 2020, 29, 5805–5816.
7. Huang, F.; Zhou, D.; Nie, R. A color multi-exposure image fusion approach using structural patch decomposition. IEEE Access 2018, 6, 42877–42885.
8. Ma, K.; Duanmu, Z.; Yeganeh, H.; Wang, Z. Multi-exposure image fusion by optimizing a structural similarity index. IEEE Trans. Comput. Imaging 2018, 4, 60–72.
9. Qi, G.; Chang, L.; Luo, Y.; Chen, Y. A precise multi-exposure image fusion method based on low-level features. Sensors 2020, 20, 1597.
10. Qu, Z.; Huang, X.; Chen, K. Algorithm of multi-exposure image fusion with detail enhancement and ghosting removal. J. Electron. Imaging 2019, 28, 013022.
11. Hara, K.; Inoue, K.; Urahama, K. A differentiable approximation approach to contrast-aware image fusion. IEEE Signal Process. Lett. 2017, 21, 742–745.
12. Paul, S.; Sevcenco, J.S.; Agathoklis, P. Multi-exposure and multi-focus image fusion in gradient domain. J. Circuits Syst. Comput. 2016, 25, 1650123.
13. Hayat, N.; Imran, M. Ghost-free multi-exposure image fusion technique using dense SIFT descriptor and guided filter. J. Vis. Commun. Image Represent. 2019, 62, 295–308.
14. Yao, S. Semisupervised remote sensing image fusion using multiscale conditional generative adversarial network with Siamese structure. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7066–7084. doi:10.1109/JSTARS.2021.3090958.
15. Xu, K.; Wang, Q.; Xiao, H.; Liu, K. Multi-exposure image fusion algorithm based on improved weight function. Front. Neurorobot. 2019, 16, 846580. doi:10.3389/fnbot.2019.846580.
Corresponding Author Ankita Pandey*
Research Scholar, LNCT University