『壹』 Manual English translation, please~
This one is more technical; I've done my best with it and hope it helps.
================================================================
To overcome this problem, the Wiener filter has been extended to multiple-basis representations for noise removal. Mihcak and Kozintsev^([1]) approached the signal-estimation problem from the perspective of designing the Wiener filter in the wavelet domain. The technique indirectly yields an estimate of the signal subspace, which is leveraged in the design of the filter. This paper studies the problem of nonlinear Wiener filtering in reproducing kernel Hilbert spaces via least squares support vector regression. The method offers new perspectives on the denoising problem within the framework of kernel methods. Experimental results confirm a significant improvement in image denoising.
Least squares support vector regression is a universal learning machine proposed by Suykens et al.^([2]) Let x∈R^d and y∈R, where R^d is the input space and d its dimension. Through a nonlinear mapping φ, x is mapped into an a priori chosen Hilbert space spanned by linear combinations of a set of functions.
with φ(x): R^d→R.
Such that the following regularized risk function J is minimized:
The parameter γ is a positive regularization constant. After elimination of w and e, one obtains the solution:
where Y=[y_1 … y_N], ρ_1=[1 … 1], α=[α_1 … α_N], and Ω=K+γ^(-1)I. The resulting least squares support vector regression model for function estimation becomes:
where K(x,x_i)=φ(x)^T φ(x_i) (i=1,…,N) is the kernel function, which must satisfy the Mercer condition,^([3]) α_i are the Lagrange multipliers, and b is approximately the mean of y.
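The risk functional, the solution after eliminating w and e, and the resulting model referred to above are the standard least squares support vector regression equations of Suykens et al.; in the passage's notation they can be sketched as:

```latex
% Primal problem: the regularized risk function J
\min_{w,b,e}\; J(w,e) = \tfrac{1}{2} w^{T} w + \tfrac{\gamma}{2} \sum_{i=1}^{N} e_i^{2}
\quad \text{s.t.}\quad y_i = w^{T}\phi(x_i) + b + e_i,\; i=1,\dots,N.

% Dual linear system after eliminating w and e,
% with \mathbf{1} = [1 \dots 1]^{T} and \Omega = K + \gamma^{-1} I:
\begin{bmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & \Omega \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
=
\begin{bmatrix} 0 \\ Y \end{bmatrix},
\qquad\text{so}\qquad
b = \frac{\mathbf{1}^{T}\Omega^{-1}Y}{\mathbf{1}^{T}\Omega^{-1}\mathbf{1}},
\quad
\alpha = \Omega^{-1}(Y - b\,\mathbf{1}).

% Resulting model for function estimation:
f(x) = \sum_{i=1}^{N} \alpha_i \, K(x, x_i) + b.
```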
Consider a 2D image consisting of a matrix of M=N×N pixels. The observed image can be regarded as a function over pixel locations, y=f(i,j): R^2→R^1, where the input (i,j) is a 2D vector equal to the row and column indices of a pixel and the output y is the approximated intensity value.^([4]) The Lagrange multipliers α_(i,j) of the observed image pixels y(i,j) can be easily calculated using Eq.(3).
where A=Ω^(-1), B=(1^T Ω^(-1))/(1^T Ω^(-1) 1) with 1=[1…1]^T the all-ones vector, and O_α is the N×N matrix defined by A(I−1B). Notice that the Lagrange multipliers α_(i,j) of the observed image pixels y(i,j) are determined by multiplying the matrix O_α with the observed image Y. That is, the Lagrange multipliers are influenced by both the clean image S and the random noise N. As in Eq.(4), the observed image can be reconstructed by a linear combination of kernels with weights equal to the values of the Lagrange multipliers, and an appropriate support vector regression can concentrate the signal energy into a number of support vectors (SVs), the pixels for which α_(i,j) is nonzero.
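A minimal numerical sketch of this computation, assuming a Gaussian RBF kernel over pixel coordinates and flattening the image to a length-M vector (the kernel choice and the values of γ and the kernel width are illustrative, not taken from the paper):

```python
import numpy as np

def lssvr_image_alphas(y_img, gamma, sigma):
    """Lagrange multipliers of an image under LSSVR over pixel coordinates.

    Computes alpha = O_a @ y with O_a = A (I - 1 B), where
    A = Omega^{-1}, B = (1^T A) / (1^T A 1), Omega = K + gamma^{-1} I.
    The RBF kernel and parameter values are illustrative assumptions.
    """
    n = y_img.shape[0]
    m = n * n
    y = y_img.reshape(-1)                          # flatten to M = n*n vector
    # inputs are the (row, column) coordinates of each pixel
    coords = np.array([(i, j) for i in range(n) for j in range(n)], float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))           # Mercer (RBF) kernel
    A = np.linalg.inv(K + np.eye(m) / gamma)       # A = Omega^{-1}
    ones = np.ones((m, 1))
    B = (ones.T @ A) / (ones.T @ A @ ones)         # 1 x M row vector
    O_alpha = A @ (np.eye(m) - ones @ B)           # M x M operator
    alpha = O_alpha @ y                            # Lagrange multipliers
    b = (B @ y).item()                             # bias term
    y_hat = K @ alpha + b                          # kernel-expansion reconstruction
    return alpha, b, y_hat

rng = np.random.default_rng(0)
img = rng.random((4, 4))
alpha, b, y_hat = lssvr_image_alphas(img, gamma=100.0, sigma=0.8)
```

Two identities follow from the dual system and serve as sanity checks: the multipliers sum to zero, and the fit residual equals α/γ elementwise.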
The localization of SVs is particularly appropriate for imaging applications, where it is crucial to preserve fine details such as edges and textures. Pixels with positive Lagrange multipliers tend to raise the grey levels of themselves and their neighbors, while those with negative Lagrange multipliers tend to reduce the grey levels, and they appear darker.
Therefore, the Lagrange multipliers effectively weight the kernel functions to estimate the intensity values of the image. Furthermore, random noise can be viewed as a force that makes the Lagrange multipliers oscillate above and below their noise-free values. The noise can be reduced by smoothing the values of the Lagrange multipliers, whereas sharp edges can be preserved within certain ranges, relying on a suitable kernel function with the capability of nonlinear representation.
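The idea in this paragraph can be sketched as a toy experiment (this is not the paper's algorithm: the 3×3 mean filter used to smooth the multipliers, the RBF kernel, the test image, and all parameter values are illustrative assumptions):

```python
import numpy as np

def lssvr_fit(y, K, gamma):
    """Dual LSSVR solution: returns (alpha, b) for observations y."""
    m = y.size
    A = np.linalg.inv(K + np.eye(m) / gamma)
    ones = np.ones((m, 1))
    B = (ones.T @ A) / (ones.T @ A @ ones)
    alpha = A @ (np.eye(m) - ones @ B) @ y
    return alpha, (B @ y).item()

n, gamma, sigma = 8, 50.0, 1.0
coords = np.array([(i, j) for i in range(n) for j in range(n)], float)
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2.0 * sigma ** 2))               # RBF kernel over pixels

# Blocky test image (sharp edges) plus additive Gaussian noise.
clean = np.kron(np.array([[0.2, 0.8], [0.8, 0.2]]), np.ones((4, 4)))
rng = np.random.default_rng(1)
noisy = clean + 0.3 * rng.standard_normal((n, n))

alpha, b = lssvr_fit(noisy.reshape(-1), K, gamma)

# Noise makes the multipliers oscillate; smooth them on the pixel grid
# with a 3x3 mean filter (an illustrative choice of smoother).
a_img = alpha.reshape(n, n)
pad = np.pad(a_img, 1, mode="edge")
a_smooth = sum(pad[di:di + n, dj:dj + n]
               for di in range(3) for dj in range(3)) / 9.0

# Reconstruct from the smoothed multipliers via the kernel expansion.
denoised = (K @ a_smooth.reshape(-1) + b).reshape(n, n)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

On this toy image the reconstruction from smoothed multipliers lowers the mean squared error against the clean image, at the cost of some blur at the block edges; in the paper that trade-off is controlled by the choice of kernel rather than by a fixed mean filter.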
================================================================
That's all.