Re: Deconvolution of Confocal Images? (was: Airy Units)

Posted by Lutz Schaefer on
URL: http://confocal-microscopy-list.275.s1.nabble.com/Deconvolution-of-Confocal-Images-was-Airy-Units-tp6946310p6950207.html

*****
To join, leave or search the confocal microscopy listserv, go to:
http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

Mark,

yes, the original RL is not regularized; because of that it must converge to
the unstable fixed-point solution of the normal equations. The fact that you
stop the iterations early can itself be interpreted as regularization! RL is
usually well behaved with fluorescence data and seldom shows the pseudo
convergence that is common to non-regularized deconvolution methods (such as
Meinel or Jansson-Van Cittert). Pseudo convergence occurs when the residuum
(or merit function) declines over the first few iterations and then
increases, often rendering the result useless. Thankfully, RL incorporates
the Poisson likelihood in its data fit, which keeps it attractive, although
it requires a large number of iterations unless you use a numerical
gradient-descent acceleration such as that described by David Biggs.
Nevertheless, RL can be extended with regularization too; there are many
nonlinear adaptive approaches available these days. I tend to agree with you
that the standard ones (Tikhonov-Miller, Good's roughness, etc.) do not give
spectacular results.
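
To make the early-stopping point concrete, here is a minimal 2-D sketch in
NumPy/SciPy (illustrative only, not taken from any particular package) of an
RL loop whose only "regularization" is terminating once the residual starts
to rise again:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(data, psf, max_iters=50, eps=1e-12):
        # Plain RL; stopping when the merit function climbs is the only
        # safeguard against the pseudo-convergence described above.
        data = np.asarray(data, dtype=float)
        psf = np.asarray(psf, dtype=float)
        psf = psf / psf.sum()                # PSF must sum to one
        psf_mirror = psf[::-1, ::-1]         # flipped (adjoint) PSF, 2-D case
        estimate = np.full(data.shape, data.mean())
        prev_residual = np.inf
        for _ in range(max_iters):
            blurred = fftconvolve(estimate, psf, mode='same')
            residual = np.linalg.norm(blurred - data)
            if residual > prev_residual:     # residuum starts to increase
                break                        # -> stop (implicit regularization)
            prev_residual = residual
            ratio = data / (blurred + eps)   # Poisson-likelihood data-fit term
            estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
        return estimate

In practice one would also watch the estimate itself for oscillations, as
Mark describes below.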

Regards
Lutz
__________________________________
L u t z   S c h a e f e r
Sen. Scientist
Mathematical modeling / Image processing
Advanced Imaging Methodology Consultation
16-715 Doon Village Rd.
Kitchener, ON, N2P 2A2, Canada
Phone/Fax: +1 519 894 8870
Email:     [hidden email]
___________________________________


--------------------------------------------------
From: "Mark Cannell" <[hidden email]>
Sent: Monday, October 31, 2011 19:18
To: <[hidden email]>
Subject: Re: Deconvolution of Confocal Images? (was: Airy Units)

>
> As I understand it, the original RL algorithm has no regularization. If
> the number of iterations is limited, the noise _is_ suppressed while
> contrast _is_ improved; the improvement in resolution at this stage is
> small. However, since resolution in confocal is often limited by S/N,
> this is useful, as our real results have shown. That's not to say that
> some regularization might not improve convergence, but in my limited
> experience it just doesn't help much, and for that reason I haven't
> bothered adding it to our codes; we stop the RL routine as soon as any
> oscillations start to appear. It should be noted that our real data are
> noisier than the data employed in most tests of regularization routines.
>
> my 2c and YMMV
>
> Cheers Mark
>
> On 31/10/2011, at 10:30 PM, Lutz Schaefer wrote:
>
>>
>>> 2) It effectively smooths the data out so that anything smaller than the
>>> PSF is removed from the processed result. This is good for two reasons:
>>>
>> This statement can easily be misunderstood. Again, deconvolution is the
>> inverse of a convolution model. However, because of its ill-posed and
>> numerically unstable nature, practical implementations need to employ
>> what is called regularization. Regularization can be understood as a
>> form of smoothing of the estimated solution. Depending on the actual
>> amount of noise present, improbable solutions are excluded or pushed out
>> of the set of probable solutions within the system of equations.
>> Depending on how the regularization is implemented and on the a-priori
>> information used, this smoothing part may or may not preserve edges,
>> intensities or other features. You can of course manually overweight the
>> regularization term to do what you say, but normally there should be a
>> balanced compromise between data fit and regularization. Deconvolution
>> typically increases uncorrelated statistical noise along with resolution
>> and contrast, because the noise cannot easily be modeled together with
>> the convolution operation. Maximum-likelihood methods tend to amplify
>> the noise less than others, provided they were designed for the noise
>> statistics actually present. I do have a problem accepting the statement
>> that a deconvolution result is equivalent to a convolution filter, as
>> you say. The often better SNR compared to the observed data can only be
>> the result of regularization. Usually derivative-based regularizations
>> of varying orders are used, and they therefore do not use the PSF as the
>> kernel. Say a Laplacian is used: the smoothing will then generally be
>> Gaussian in nature with a variable sigma, and never confined to the PSF.
>> And since commercial systems these days use a plethora of regularization
>> methods (and useful combinations of them), you can never say for sure
>> how the data will be smoothed, except by looking at the mathematical
>> model that was actually used.
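>>
>> As a rough illustration of that balance (a sketch only, not how any
>> particular commercial package does it), a zeroth-order Tikhonov inverse
>> filter exposes the trade-off through a single weight:
>>
>>     import numpy as np
>>
>>     def tikhonov_deconvolve(data, psf_padded, lam=1e-2):
>>         # psf_padded: PSF zero-padded to data.shape, centred at pixel (0, 0)
>>         otf = np.fft.fft2(psf_padded)          # transfer function of the blur
>>         D = np.fft.fft2(np.asarray(data, dtype=float))
>>         # larger lam -> stronger smoothing (regularization), weaker data fit
>>         est_hat = np.conj(otf) * D / (np.abs(otf) ** 2 + lam)
>>         return np.real(np.fft.ifft2(est_hat))
>>
>> Replacing the constant lam with lam times the squared transfer function
>> of a derivative operator (a Laplacian, say) gives the derivative-based
>> regularizations mentioned above.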
>>
>>> The decon process may seem to reduce image contrast (by averaging-down
>>> the bright noise peaks) but one can compensate for
>> Any deconvolution that works effectively (i.e. is not over-regularized)
>> will in fact increase the contrast, as dictated by the 're-assignment'
>> mentioned above. Just imagine simulating the microscope's forward
>> imaging problem with a simple lowpass filter: you will see that the
>> contrast of your object declines after filtering. From a deconvolution
>> you expect the reverse, namely a good approximation to your original
>> object, which had a higher contrast to begin with.
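>>
>> A toy 1-D version of that forward simulation (illustrative only, with a
>> Gaussian blur standing in for the PSF) shows the contrast loss directly:
>>
>>     import numpy as np
>>     from scipy.ndimage import gaussian_filter
>>
>>     def michelson(a):
>>         return (a.max() - a.min()) / (a.max() + a.min())
>>
>>     obj = np.zeros(256)
>>     obj[::16] = 1.0                      # a row of point-like objects
>>     img = gaussian_filter(obj, sigma=5)  # lowpass 'imaging' step
>>     print(michelson(obj), michelson(img))  # contrast drops after blurring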
>>
>>
>> I hope I have not stirred up more questions but there are good tutorials
>> available that describe deconvolution in greater detail.
>>
>> Regards
>> Lutz
>>
>> __________________________________
>> L u t z   S c h a e f e r
>> Sen. Scientist
>> Mathematical modeling / Image processing
>> Advanced Imaging Methodology Consultation
>> 16-715 Doon Village Rd.
>> Kitchener, ON, N2P 2A2, Canada
>> Phone/Fax: +1 519 894 8870
>> Email:     [hidden email]
>> ___________________________________
>>
>> --------------------------------------------------
>> From: "James Pawley" <[hidden email]>
>> Sent: Monday, October 31, 2011 14:22
>> To: <[hidden email]>
>> Subject: Re: Deconvolution of Confocal Images? (was: Airy Units)
>>
>>>
>>>>
>>>> An interesting point was made here by Jim Pawley:
>>>>
>>>>> I agree that sampling a bit higher than Nyquist never hurts,
>>>>> especially if you deconvolve (as you always should), but I think that
>>>>> it is a mistake to think that one can "separate" out the noise by
>>>>> decon. I think that noise is pretty fundamental.
>>>>
>>>> I had always heard that if you're doing confocal microscopy (at least
>>>> point-scanning confocal with a pinhole size of 1 AU or smaller),
>>>> deconvolution was superfluous, because you shouldn't be getting
>>>> out-of-focus light. So what is gained by deconvolution when one is
>>>> sampling voxel by voxel?
>>>>
>>>> Peter G. Werner
>>>> Merritt College Microscopy Program
>>>
>>> Hi Peter,
>>>
>>> This is indeed "widely assumed". However, it is beside the point. The
>>> process of decon is applied to 3D fluorescence microscopy data sets for
>>> (at least) two different major reasons:
>>>
>>> 1) It removes much of the effect of out-of-focus light (if there is any)
>>> and therefore produces a major improvement in the visibility of
>>> structures in widefield data. For confocal data, the resolution
>>> improvement is much less significant.
>>>
>>> 2) It effectively smooths the data out so that anything smaller than the
>>> PSF is removed from the processed result. This is good for two reasons:
>>>
>>> a) The process effectively averages the intensity data of the 64-125
>>> voxels needed to sample the central peak of the PSF, improving the S/N
>>> by a factor of between 8 and 11.
>>>
>>> b) It also meets the Nyquist Reconstruction Condition:
>>>
>>> If you have 2.5 pixels between the central peak and the first zero, you
>>> have 5 pixels across the diameter of the first-zero ring, so (about) 25
>>> pixels will be needed to sample this blob in 2D and (about) 125 voxels
>>> in 3D. That is 125 measurements just to record the brightness and
>>> location of one point. (The "about" has to do with how many counts one
>>> must register in a pixel for it to be considered part of the PSF, a PSF
>>> that will in general not be so conveniently aligned to the pixel grid.
>>> 100 might be a good guess for a 3D PSF.)
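>>>
>>> (To spell the arithmetic out: a 2.5-pixel radius gives a 5-pixel
>>> diameter, so roughly 5 x 5 = 25 pixels in 2D and 5 x 5 x 5 = 125 voxels
>>> in 3D; averaging 64-125 roughly independent measurements improves the
>>> S/N by about sqrt(64) = 8 to sqrt(125) ~ 11, the factor quoted in (a)
>>> above.)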
>>>
>>> But it is possible to "know" (measure?) the PSF independently, and the
>>> PSF imposes constraints on the relative brightness of these 125 points.
>>> Decon is the best process for imposing this constraint onto the
>>> measured data. (For confocal, the 3D Gaussian mentioned earlier is a
>>> good approximation to the PSF. Although it is imprecise (it has a
>>> longer tail), if the Gaussian laser beam doesn't considerably overfill
>>> the objective aperture, the spot will in fact be more like a Gaussian
>>> than an Airy disk. More to the point, the data are usually so
>>> Poisson-noisy that it won't make a great difference, and a Gaussian is
>>> a lot easier to work with.)
>>>
>>> The decon process may seem to reduce image contrast (by averaging down
>>> the bright noise peaks), but one can compensate for this by changing
>>> the display look-up table (contrast control), and when you do this you
>>> will find that the processed confocal data are far less noisy than the
>>> raw data. They are also free from "impossible" features that were
>>> really only Poisson-noise excursions, usually one pixel wide (which is
>>> about 4 times smaller than the width of the PSF, the smallest feature
>>> that the optical system can pass legitimately). Indeed, one should
>>> never make statements regarding the exact shape of small structures
>>> (near the resolution limit) recorded in confocal microscopes until the
>>> data have been deconvolved. Nyquist would not approve.
>>>
>>> Best
>>>
>>> JP
>>> --
>>> ***************************************************************************
>>> Prof. James B. Pawley,                           Ph. 608-238-3953
>>> 21 N. Prospect Ave., Madison, WI 53726 USA       [hidden email]
>>> 3D Microscopy of Living Cells Course, June 10-22, 2012, UBC, Vancouver
>>> Canada
>>> Info: http://www.3dcourse.ubc.ca/ Applications accepted after 11/15/12
>>>       "If it ain't diffraction, it must be statistics." Anon.