Survey of Image Denoising Techniques

Mukesh C. Motwani, Image Process Technology, Inc., 1776 Back Country Road, Reno, NV 89521 USA, (775) 448-7816, mukesh@image-process.com
Mukesh C. Gadiya, University of Pune, Vishwakarma Inst. of Tech., Pune 411337, INDIA, 91-9884371488, mukesh_gadiya@satyam.com
Rakhi C. Motwani, University of Nevada, Reno, Dept. of Comp. Sci. & Engr., Reno, NV 89557 USA, (775) 853-7897
Frederick C. Harris, Jr., University of Nevada, Reno, Dept. of Comp. Sci. & Engr., Reno, NV 89557 USA, (775) 784-6571

Abstract
Removing noise from the original signal is still a challenging problem for researchers. Several algorithms have been published, and each approach has its assumptions, advantages, and limitations. This paper presents a review of some significant work in the area of image de-noising. After a brief introduction, some popular approaches are classified into different groups and an overview of various algorithms and analysis is provided. Insights and potential future trends in the area of de-noising are also discussed.

1. Introduction
Digital images play an important role both in daily-life applications, such as satellite television, magnetic resonance imaging, and computed tomography, and in areas of research and technology such as geographical information systems and astronomy. Data sets collected by image sensors are generally contaminated by noise. Imperfect instruments, problems with the data acquisition process, and interfering natural phenomena can all degrade the data of interest. Furthermore, noise can be introduced by transmission errors and compression. Thus, de-noising is often a necessary first step before the image data is analyzed, and an efficient de-noising technique must be applied to compensate for such data corruption.
Image de-noising still remains a challenge for researchers because noise removal introduces artifacts and causes blurring of the images. This paper describes different methodologies for noise reduction (de-noising), giving insight into which algorithm should be used to find the most reliable estimate of the original image data given its degraded version.
Noise modeling in images is greatly affected by the capturing instruments, the data transmission media, image quantization, and discrete sources of radiation. Different algorithms are used depending on the noise model. Most natural images are assumed to be corrupted by additive random noise, which is modeled as Gaussian. Speckle noise is observed in ultrasound images, whereas Rician noise affects MRI images. The scope of this paper is noise removal techniques for natural images.
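To make the noise models above concrete, the sketch below adds synthetic additive Gaussian noise and multiplicative speckle-like noise to an image array. It is an illustrative example only, not part of the original survey; the flat test image and the noise parameters are placeholder assumptions.

```python
import numpy as np

def add_gaussian_noise(img, sigma=20.0, rng=None):
    """Additive Gaussian noise: y = x + n, with n ~ N(0, sigma^2)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255)

def add_speckle_noise(img, sigma=0.2, rng=None):
    """Multiplicative (speckle-like) noise: y = x * (1 + n), as seen in ultrasound imaging."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(float) * (1.0 + rng.normal(0.0, sigma, img.shape))
    return np.clip(noisy, 0, 255)

if __name__ == "__main__":
    clean = np.full((256, 256), 128.0)      # flat placeholder image; replace with real data
    noisy = add_gaussian_noise(clean, sigma=20.0)
    print("empirical noise std:", (noisy - clean).std())
```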
2. Evolution of Image De-noising Research
Image de-noising has remained a fundamental problem in the field of image processing. Wavelets give superior performance in image de-noising due to properties such as sparsity and multiresolution structure. With the Wavelet Transform gaining popularity over the last two decades, various algorithms for de-noising in the wavelet domain were introduced, and the focus shifted from the spatial and Fourier domains to the wavelet transform domain. Ever since Donoho's wavelet-based thresholding approach was published in 1995, there has been a surge in published de-noising papers. Although Donoho's concept was not revolutionary, his methods did not require tracking or correlation of the wavelet maxima and minima across different scales as proposed by Mallat.
Thus, there was a renewed interest in wavelet-based de-noising techniques, since Donoho demonstrated a simple approach to a difficult problem. Researchers published different ways to compute the parameters for thresholding the wavelet coefficients. Data-adaptive thresholds were introduced to achieve the optimum threshold value. Later efforts found that substantial improvements in perceptual quality could be obtained by translation-invariant methods based on thresholding of an Undecimated Wavelet Transform. These thresholding techniques were applied to non-orthogonal wavelet coefficients to reduce artifacts, and multi-wavelets were also used to achieve similar results. Probabilistic models using the statistical properties of the wavelet coefficients seemed to outperform the thresholding techniques and gained ground. Recently, much effort has been devoted to Bayesian de-noising in the wavelet domain; Hidden Markov Models and Gaussian Scale Mixtures have become popular, and more research continues to be published. Tree structures ordering the wavelet coefficients based on their magnitude, scale, and spatial location have been researched. Data-adaptive transforms such as Independent Component Analysis (ICA) have been explored for sparse shrinkage. The trend continues to focus on using different statistical models for the wavelet coefficients and their neighbors, and the future trend is towards finding more accurate probabilistic models for the distribution of non-orthogonal wavelet coefficients.

3. Classification of De-noising Algorithms
As shown in Figure 1, there are two basic approaches to image de-noising: spatial filtering methods and transform domain filtering methods.

3.1 Spatial Filtering
A traditional way to remove noise from image data is to employ spatial filters. Spatial filters can be further classified into non-linear and linear filters.

I. Non-Linear Filters
With non-linear filters, the noise is removed without any attempt to explicitly identify it. Spatial filters employ low-pass filtering on groups of pixels, with the assumption that the noise occupies the higher region of the frequency spectrum. Generally, spatial filters remove noise to a reasonable extent, but at the cost of blurring the images, which in turn makes the edges invisible. In recent years, a variety of non-linear median-type filters, such as the weighted median, rank conditioned rank selection, and relaxed median, have been developed to overcome this drawback.

II. Linear Filters
A mean filter is the optimal linear filter for Gaussian noise in the mean square error sense. Linear filters, too, tend to blur sharp edges, destroy lines and other fine image details, and perform poorly in the presence of signal-dependent noise. The Wiener filtering method requires information about the spectra of the noise and the original signal, and it works well only if the underlying signal is smooth. The Wiener method implements spatial smoothing, and its model complexity control corresponds to choosing the window size. To overcome the weaknesses of Wiener filtering, Donoho and Johnstone proposed a wavelet-based de-noising scheme.
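As an illustration of the spatial filters described in Section 3.1, the sketch below compares a median filter (non-linear) and a mean filter (linear) on a noisy step-edge image using SciPy. The 3x3 window and the use of scipy.ndimage are choices made for this example, not anything prescribed by the survey.

```python
import numpy as np
from scipy import ndimage

def denoise_spatial(noisy):
    """Apply a 3x3 median filter (non-linear) and a 3x3 mean filter (linear)."""
    median_out = ndimage.median_filter(noisy, size=3)   # robust to outliers, preserves edges better
    mean_out = ndimage.uniform_filter(noisy, size=3)    # MSE-optimal linear filter for Gaussian noise, but blurs edges
    return median_out, mean_out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((128, 128)); clean[:, 64:] = 200.0     # simple vertical step edge
    noisy = clean + rng.normal(0, 25, clean.shape)
    med, mean = denoise_spatial(noisy)
    print("MSE median:", ((med - clean) ** 2).mean(), "MSE mean:", ((mean - clean) ** 2).mean())
```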
3.2 Transform Domain Filtering
The transform domain filtering methods can be subdivided according to the choice of the basis functions, which can be classified as data-adaptive or non-adaptive. Non-adaptive transforms are discussed first, since they are more popular.

3.2.1 Spatial-Frequency Filtering
Spatial-frequency filtering refers to the use of low-pass filters based on the Fast Fourier Transform (FFT). In frequency-domain smoothing methods, noise removal is achieved by designing a frequency-domain filter and adapting a cut-off frequency where the noise components are de-correlated from the useful signal. These methods are time consuming and depend on the cut-off frequency and the filter function behavior. Furthermore, they may produce artificial frequencies in the processed image.

3.2.2 Wavelet Domain
Filtering operations in the wavelet domain can be subdivided into linear and non-linear methods.

I. Linear Filters
Linear filters such as the Wiener filter in the wavelet domain yield optimal results when the signal corruption can be modeled as a Gaussian process and the accuracy criterion is the mean square error (MSE). However, designing a filter based on this assumption frequently results in a filtered image that is more visually displeasing than the original noisy signal, even though the filtering operation successfully reduces the MSE. A wavelet-domain spatially adaptive FIR Wiener filter for image de-noising has been proposed in which Wiener filtering is performed only within each scale and inter-scale filtering is not allowed.

II. Non-Linear Threshold Filtering
The most investigated wavelet-domain de-noising methods are those based on non-linear thresholding of the coefficients. The procedure exploits the sparsity of the wavelet transform and the fact that the Wavelet Transform maps white noise in the signal domain to white noise in the transform domain. Thus, while the signal energy becomes concentrated into fewer coefficients in the transform domain, the noise energy does not. It is this important principle that enables the separation of signal from noise.
The procedure in which small coefficients are set to zero while the others are left untouched is called hard thresholding. This method, however, generates spurious blips, better known as artifacts, in the images as a result of unsuccessful attempts to remove moderately large noise coefficients. To overcome these demerits of hard thresholding, soft thresholding was introduced, in which coefficients above the threshold are shrunk by the absolute value of the threshold itself. Related thresholding rules include semi-soft thresholding and Garrote thresholding. Most of the wavelet shrinkage literature is concerned with choosing the optimal threshold, which can be adaptive or non-adaptive to the image.

a. Non-Adaptive Thresholds
VisuShrink uses a non-adaptive universal threshold that depends only on the number of data points. It has an asymptotic equivalence suggesting the best performance in terms of MSE as the number of pixels approaches infinity. VisuShrink is known to yield overly smoothed images because its threshold can be unwarrantedly large due to its dependence on the number of pixels in the image.

b. Adaptive Thresholds
SureShrink uses a hybrid of the universal threshold and the SURE (Stein's Unbiased Risk Estimator) threshold and performs better than VisuShrink. BayesShrink minimizes the Bayes risk under a Generalized Gaussian prior, yielding a data-adaptive threshold; it outperforms SureShrink most of the time. Cross-validation replaces a wavelet coefficient with a weighted average of its neighborhood coefficients so as to minimize the generalized cross-validation (GCV) function, providing an optimum threshold for every coefficient.
The assumption that noise can be distinguished from signal solely on the basis of coefficient magnitudes is violated when noise levels are higher than the signal magnitudes. Under such high-noise circumstances, the spatial configuration of neighboring wavelet coefficients can play an important role in noise-signal classification: signals tend to form meaningful features (e.g., straight lines and curves), while noisy coefficients tend to scatter randomly.
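For reference, a minimal sketch of the hard and soft thresholding rules and the VisuShrink universal threshold T = sigma * sqrt(2 ln N) is given below in plain NumPy. The noise standard deviation sigma is assumed known here (its estimation is discussed later in the paper), and the code is illustrative rather than the authors' implementation.

```python
import numpy as np

def hard_threshold(coeffs, thr):
    """Keep coefficients with |c| > thr, set the rest to zero."""
    return np.where(np.abs(coeffs) > thr, coeffs, 0.0)

def soft_threshold(coeffs, thr):
    """Shrink surviving coefficients towards zero by the threshold value."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

def universal_threshold(sigma, n):
    """VisuShrink universal threshold: depends only on sigma and the number of samples n."""
    return sigma * np.sqrt(2.0 * np.log(n))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    detail = rng.normal(0, 10.0, 1000)            # stand-in for a noisy detail subband
    thr = universal_threshold(sigma=10.0, n=detail.size)
    print("threshold:", thr, "nonzero after soft:", np.count_nonzero(soft_threshold(detail, thr)))
```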
III. Non-Orthogonal Wavelet Transforms
The Undecimated Wavelet Transform (UDWT) has also been used for decomposing the signal to provide a visually better solution. Since the UDWT is shift-invariant, it avoids visual artifacts such as the pseudo-Gibbs phenomenon. Though the improvement in results is considerable, the UDWT adds a large computational overhead, making it less feasible. Normal hard/soft thresholding has been extended to the Shift-Invariant Discrete Wavelet Transform. Shift-Invariant Wavelet Packet Decomposition (SIWPD) has also been exploited to obtain a number of basis functions; using the Minimum Description Length principle, the best basis function is selected as the one yielding the smallest code length required to describe the given data, and thresholding is then applied to de-noise the data. In addition to the UDWT, the use of multi-wavelets has been explored, which further enhances performance but also further increases the computational complexity. Multi-wavelets are obtained by applying more than one mother function (scaling function) to a given dataset, and they possess properties such as short support, symmetry, and, most importantly, a higher order of vanishing moments. The combination of shift invariance and multi-wavelets has been shown to give superior results for the Lena image in terms of MSE.

IV. Wavelet Coefficient Model
This approach focuses on exploiting the multiresolution properties of the Wavelet Transform. It identifies the close correlation of the signal across resolutions by observing it at multiple resolutions. This method produces excellent output but is computationally much more complex and expensive. The modeling of the wavelet coefficients can be either deterministic or statistical.

a. Deterministic
The deterministic method of modeling involves creating a tree structure of wavelet coefficients, with every level in the tree representing a scale of the transformation and the nodes representing the wavelet coefficients. The optimal tree approximation provides a hierarchical interpretation of the wavelet decomposition. Singularities in the signal produce large wavelet coefficients that persist along the branches of the tree. Thus, if a wavelet coefficient has a strong presence at a particular node and corresponds to signal, its presence should be even more pronounced at its parent nodes; if it is a noisy coefficient, for instance a spurious blip, such consistent presence will be missing. Lu et al. [24] tracked wavelet local maxima in scale-space using a tree structure, and another de-noising method based on wavelet coefficient trees was proposed by Donoho.
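The toy sketch below illustrates the parent-child persistence idea in code: a fine-scale coefficient is retained only if both it and its parent at the next coarser scale are significant. It is a deliberately simplified illustration using PyWavelets (assumed available as pywt), not a reproduction of the tree-based algorithms cited above; the threshold, the Haar wavelet, and the "periodization" mode are choices made for this example.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def persistence_denoise(img, thr, wavelet="haar"):
    """Toy tree-persistence rule on a 2-level decomposition of a power-of-two image."""
    # 'periodization' keeps each child subband exactly twice the size of its parent subband.
    cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = pywt.wavedec2(
        img, wavelet, mode="periodization", level=2)
    kept = []
    for child, parent in zip((cH1, cV1, cD1), (cH2, cV2, cD2)):
        # The parent of child coefficient (i, j) lives at (i // 2, j // 2);
        # upsample the parent band so the comparison is element-wise.
        parent_up = np.repeat(np.repeat(parent, 2, axis=0), 2, axis=1)
        keep = (np.abs(child) > thr) & (np.abs(parent_up) > thr)
        kept.append(np.where(keep, child, 0.0))
    coeffs = [cA2, (cH2, cV2, cD2), tuple(kept)]
    return pywt.waverec2(coeffs, wavelet, mode="periodization")

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    noisy = rng.normal(128, 20, (256, 256))
    out = persistence_denoise(noisy, thr=3 * 20.0)
    print(out.shape)
```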
b. Statistical Modeling of Wavelet Coefficients
This approach focuses on some of the more interesting and appealing properties of the Wavelet Transform, such as the multi-scale correlation between wavelet coefficients and the local correlation between neighboring coefficients. It has the inherent goal of modeling the image data exactly using the Wavelet Transform. A good review of the statistical properties of wavelet coefficients can be found in the literature. The following two techniques exploit the statistical properties of the wavelet coefficients based on a probabilistic model.

i. Marginal Probabilistic Model
A number of researchers have developed homogeneous local probability models for images in the wavelet domain. Specifically, the marginal distributions of wavelet coefficients are highly kurtotic, usually with a marked peak at zero and heavy tails. The Gaussian Mixture Model (GMM) and the Generalized Gaussian Distribution (GGD) are commonly used to model the wavelet coefficient distribution; although the GGD is more accurate, the GMM is simpler to use. One proposed methodology assumes the wavelet coefficients to be conditionally independent zero-mean Gaussian random variables, with variances modeled as identically distributed, highly correlated random variables, and uses an approximate Maximum A Posteriori (MAP) rule to estimate the marginal prior distribution of the wavelet coefficient variances. All of the methods mentioned above require a noise estimate, which may be difficult to obtain in practical applications (a common robust estimator is sketched at the end of this section). Simoncelli and Adelson used a two-parameter generalized Laplacian distribution for the wavelet coefficients of the image, estimated from the noisy observations. Chang et al. proposed the use of adaptive wavelet thresholding for image de-noising, modeling the wavelet coefficients as generalized Gaussian random variables whose parameters are estimated locally (i.e., within a given neighborhood).

ii. Joint Probabilistic Model
Hidden Markov Models (HMMs) are efficient at capturing inter-scale dependencies, whereas Markov Random Field models are more efficient at capturing intra-scale correlations. The complexity of local structures is not well described by Gaussian Markov Random Field densities, whereas Hidden Markov Models can be used to capture higher-order statistics. The correlation between coefficients at the same scale but residing in a close neighborhood is modeled by a Hidden Markov Chain model, whereas the correlation between coefficients across scales is modeled by Hidden Markov Trees. Once the correlation is captured by the HMM, Expectation Maximization is used to estimate the required parameters, and from those the de-noised signal is estimated from the noisy observation using the well-known MAP estimator. In one model, each neighborhood of wavelet coefficients is described as a Gaussian Scale Mixture (GSM), i.e., the product of a Gaussian random vector and an independent hidden random scalar multiplier. Strela et al. described the joint densities of clusters of wavelet coefficients as a Gaussian scale mixture and developed a maximum likelihood solution for estimating the relevant wavelet coefficients from the noisy observations. Another approach, which uses a Markov random field model for the wavelet coefficients, was proposed by Jansen and Bultheel. A disadvantage of the HMT is the computational burden of its training stage; to overcome this, a simplified HMT, named uHMT, was proposed.
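As referenced above, many of these wavelet-domain methods need an estimate of the noise standard deviation. A widely used robust estimator, due to Donoho and Johnstone, takes the median absolute deviation of the finest-scale diagonal detail coefficients divided by 0.6745. The sketch below (using PyWavelets, assumed available) shows this estimator; the db8 wavelet and the flat test image are assumptions of the example, and the estimator is not specific to any single method in this survey.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def estimate_noise_sigma(img, wavelet="db8"):
    """Robust noise estimate: sigma ~= median(|HH1|) / 0.6745,
    where HH1 is the finest-scale diagonal detail subband."""
    _, (_, _, cD1) = pywt.dwt2(img.astype(float), wavelet)
    return np.median(np.abs(cD1)) / 0.6745

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    clean = np.full((256, 256), 100.0)
    noisy = clean + rng.normal(0, 15.0, clean.shape)
    print("true sigma: 15.0, estimated:", round(estimate_noise_sigma(noisy), 2))
```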
3.2.3 Data-Adaptive Transforms
Recently, a method called Independent Component Analysis (ICA) has gained widespread attention. The ICA method has been successfully applied to de-noising non-Gaussian data. One exceptional merit of ICA is its assumption that the signal is non-Gaussian, which helps de-noise images with non-Gaussian as well as Gaussian distributions. Drawbacks of ICA-based methods compared with wavelet-based methods are the computational cost, since a sliding window is used, and the requirement of a sample of noise-free data or at least two image frames of the same scene. In some applications, it may be difficult to obtain such noise-free training data.

4. Discussion
The performance of de-noising algorithms is measured using quantitative measures such as peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR), as well as in terms of the visual quality of the images. Many of the current techniques assume the noise model to be Gaussian. In reality, this assumption may not always hold due to the varied nature and sources of noise. An ideal de-noising procedure requires a priori knowledge of the noise, whereas a practical procedure may not have the required information about the noise variance or the noise model. Thus, most algorithms assume a known noise variance and noise model so that performance can be compared across algorithms. Gaussian noise with different variance values is added to natural images to test the performance of the algorithms, but not all researchers use high variance values, where the noise becomes comparable to the signal strength.
The use of the FFT in filtering has been restricted by its limitations in providing a sparse representation of the data. The Wavelet Transform is best suited for performance because of properties such as sparsity and its multiresolution, multi-scale nature. In addition to performance, issues of computational complexity must also be considered. Thresholding techniques used with the Discrete Wavelet Transform are the simplest to implement, while non-orthogonal wavelets such as the UDWT and multi-wavelets improve the performance at the expense of a large computational overhead. HMM-based methods seem promising but are complex. When using the Wavelet Transform, Nason emphasized that issues such as the choice of primary resolution (the scale level at which to begin thresholding) and the choice of analyzing wavelet also have a large influence on the success of the shrinkage procedure. When comparing algorithms, it is very important that researchers do not omit these details; several papers specified neither the wavelet used nor the level of decomposition of the wavelet transform. It is expected that future research will focus on building robust statistical models of non-orthogonal wavelet coefficients.
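Since the discussion above uses PSNR as the main quantitative measure, a short helper for computing MSE and PSNR between a reference image and a de-noised result is sketched below. The peak value of 255 assumes 8-bit images and is an assumption of this example.

```python
import numpy as np

def mse(reference, estimate):
    """Mean squared error between the clean reference and the de-noised estimate."""
    return float(np.mean((reference.astype(float) - estimate.astype(float)) ** 2))

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    err = mse(reference, estimate)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    clean = np.full((256, 256), 100.0)
    noisy = clean + rng.normal(0, 10.0, clean.shape)
    print("PSNR of noisy input: %.2f dB" % psnr(clean, noisy))
```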