Practical aspects - Wide field deconvolution

Mark Adelman (Work)

Practical aspects - Wide field deconvolution

*****
To join, leave or search the confocal microscopy listserv, go to:
http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

Please excuse the cross posting, but I have previously received  
helpful responses from both lists.

I am doing wide field deconvolution of fluorescence images (z-stacks),
primarily using ImageJ, but also some other tools.  I have reached the
point where I can routinely get fairly nice results, but I suspect I
could do better if I knew a bit more of the 'down in the weeds' details.
So I am requesting suggested readings related to the following:

1.  Are there guidelines as to how big (x,y,z) a PSF data set  
(experimental or theoretical) should be - relative to the specimen  
data set - in order to achieve best deconvolution results?  To what  
extent are such guidelines different for different deconvolution  
algorithms?

2.  If a deconvolution looks 'good' in the slices that include the  
specimen, to what extent is it 'OK' to ignore speckle/haze (generated  
during the deconvolution) in slices above/below the specimen slices?

3.  Are there 'objective' methods for deciding - between two  
deconvolutions that look 'good' - which result is 'better'?  I have  
tried using image calculator and other tools and find that two  
deconvolution results that satisfy all criteria (of which I am aware)  
as to being 'good' almost always show slight differences that look  
like 'ghosts' of the deconvolved image sets.  That is, the different  
deconvolutions look quite satisfactory but differ from one another to  
very small degrees.  Is it common to ignore such differences, or have  
people published criteria/methods for 'choosing' between/amongst them?

Please feel free to contact me 'off list' if you feel a general answer  
would be wasting people's time.

Thanks!

Mark R. Adelman
mmodel

Re: Practical aspects - Wide field deconvolution

Hi Mark, from my limited experience, "looking good" can be a slippery thing. Some years ago we were looking for deconvolution software, and I received test images deconvolved by some very respectable software company (I don't remember which); their images looked just fantastic! But after some tinkering and questioning I realized that all the improvement had been achieved by playing with contrast! My take on this issue is that deconvolved images must correctly represent the sample, and that is not necessarily the same as looking good. You may apply deconvolution to some kind of test sample whose structure you know, for example fluorescent bacteria, and see what you get.

Mike Model

Brian Northan

Re: Practical aspects - Wide field deconvolution

...  Disclaimer: I used to work for a commercial deconvolution company
but am not associated with any commercial software at the moment.

Good questions though.

1.  Ideally your PSF should be the same size as your image.
Internally the deconvolution software will make the image and the PSF
the same size anyway...  and will likely pad both of them.  You want
the PSF to be large so that the values near the edges are close to
zero.  This will minimize any artifacts from "padding".   Some
software does extensive preprocessing of a measured PSF.  So depending
on how the software is preprocessing the PSF you may be able to get
away with a smaller PSF (that is the software will pad it in a clever
way so edge effects are minimized).
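To make that concrete, here is a minimal NumPy sketch of the kind of centred zero-padding described above, plus a quick check of how much PSF intensity sits on the borders. The array sizes are hypothetical stand-ins, and real packages do more sophisticated preprocessing (recentring, apodizing, etc.):

import numpy as np

def pad_psf_to_image(psf, image_shape):
    """Zero-pad a (z, y, x) PSF so it matches the image stack size,
    keeping the PSF centred in the padded volume."""
    padded = np.zeros(image_shape, dtype=psf.dtype)
    insert = tuple(slice((n - p) // 2, (n - p) // 2 + p)
                   for n, p in zip(image_shape, psf.shape))
    padded[insert] = psf
    return padded

# hypothetical sizes: a 64 x 512 x 512 stack, PSF measured as 32 x 128 x 128
psf = np.random.rand(32, 128, 128)   # stand-in for a measured PSF
psf /= psf.sum()                     # normalise to unit total intensity
padded = pad_psf_to_image(psf, (64, 512, 512))

# rough check (corners counted more than once): the fraction of PSF intensity
# on the outer faces should be near zero for a real, well-cropped PSF,
# otherwise the padding introduces sharp edges and ringing
faces = psf[[0, -1]].sum() + psf[:, [0, -1]].sum() + psf[:, :, [0, -1]].sum()
print("fraction of PSF intensity on the faces:", faces)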

3.  After deconvolution the "hourglass"-like haze should be removed.
As Mike mentions, the best criterion is simply to image objects of known
shape and test using those.  Judging subjectively shouldn't be done,
as the image that looks best to a human is different from the image
that is most quantitative.

One of the other major complications is spherical aberration.   If
there is SA in your image there will be more haze in one of the axial
directions (the hourglass will not be symmetric).   So if you are
using a theoretical PSF model make sure it can handle SA.   (When
generating the PSF there should be the option of entering sample
refractive index and distance from coverslip.  Even if you do not know
these exactly enter approximations, or else the software may make
assumptions).
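A rough way to see whether SA is an issue is to compare the total intensity above and below focus in a measured bead stack. A small hedged sketch; psf_stack here is a hypothetical background-subtracted (z, y, x) array of an isolated bead:

import numpy as np

def axial_asymmetry(psf_stack):
    """Ratio of total intensity above vs. below the brightest z-plane of a
    background-subtracted bead stack; a ratio far from 1 suggests spherical
    aberration (the hourglass haze is heavier on one side of focus)."""
    profile = psf_stack.sum(axis=(1, 2))   # total intensity in each z-slice
    z_peak = int(np.argmax(profile))
    above = profile[z_peak + 1:].sum()
    below = profile[:z_peak].sum()
    return above / below if below > 0 else float("inf")

# e.g. ratio = axial_asymmetry(bead_stack)   # bead_stack loaded elsewhere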

Brian

Kevin Ryan

Re: Practical aspects - Wide field deconvolution

* Disclaimer - Media Cybernetics is the supplier of AutoQuant deconvolution software.

Excellent questions! Here are my responses based upon the experience of having written such software multiple times over the years:


1. How big a PSF?

Ideally, an experimental PSF should be as large as your data set. Theoretical PSFs tend to be created at that size as well, although there may be a limit, depending on the software package, where extremely low PSF values far from the center are dropped as not significant.

The real issue here is your _data set size_. It needs to be large enough to include the 3D blur of your sample - ideally going far enough above and below the sample that the blur is becoming uniform, lacking in detail and information. Clipping your acquisition too close to the sample makes it extremely difficult to reconstruct objects lying close to the top or bottom of the acquired volume.

Side to side can be rather closer - just keep in mind that objects close to the edges of the acquired volume will be missing some reconstruction data as well. I would recommend sampling sufficiently far out to get a reasonable portion (>80-90%) of the cumulative 'hourglass' PSF intensity within the sample volume, perhaps 3-5 microns at high magnification, and being judicious about results on the _very_ edges.
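One hedged way to put a number on that 80-90% suggestion is to compute, for a measured or theoretical PSF, the fraction of its total intensity that falls within a given lateral distance of the optical axis. A minimal sketch; psf and the pixel size are hypothetical inputs:

import numpy as np

def fraction_within_radius(psf, radius_um, pixel_size_um):
    """Fraction of total PSF intensity lying within a lateral radius of the
    optical axis (taken as the x,y position of the brightest voxel),
    summed over all z-slices of a (z, y, x) PSF."""
    z0, y0, x0 = np.unravel_index(np.argmax(psf), psf.shape)
    yy, xx = np.indices(psf.shape[1:])
    r_um = np.hypot(yy - y0, xx - x0) * pixel_size_um
    mask = r_um <= radius_um          # same lateral mask applied to every slice
    return psf[:, mask].sum() / psf.sum()

# e.g. with 0.1 um pixels, how much of the 'hourglass' lies within 3 and 5 um:
# for r in (3.0, 5.0):
#     print(r, fraction_within_radius(psf, r, 0.1))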


2. Good in plane, what about above/below haze/speckle?

Z is always going to be harder to reconstruct than XY, due to the nature of optical imaging. Properly done, widefield deconvolution takes a point object (which gives an 'hourglass' in widefield data) down to a much smaller 'football', with roughly 3x the Z extent compared to the XY extent.
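That roughly-3x ratio is easy to check on your own data by measuring the FWHM of line profiles through a small, bright object (e.g. a sub-resolution bead) in XY and in Z. A minimal sketch of the FWHM measurement; the profile arrays and spacings in the usage comments are hypothetical:

import numpy as np

def fwhm(profile, spacing):
    """Full width at half maximum of a 1-D intensity profile (in the same
    units as 'spacing'), with linear interpolation of the half-max crossings."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    if above.size < 2:
        return 0.0
    left, right = above[0], above[-1]
    def cross(i, j):   # interpolate the exact half-max crossing between samples i and j
        return i + (half - p[i]) / (p[j] - p[i]) * (j - i)
    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right, right + 1) if right + 1 < p.size else float(right)
    return (x_right - x_left) * spacing

# hypothetical profiles through the centre of a deconvolved bead:
# fwhm_xy = fwhm(stack[z0, y0, :], 0.1)    # 0.1 um XY pixel size
# fwhm_z  = fwhm(stack[:, y0, x0], 0.25)   # 0.25 um z-step
# print(fwhm_z / fwhm_xy)                  # ~3 is typical for widefield decon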

In judging quality of the deconvolution, look for Z symmetry in the results (a lack means insufficient correction for spherical or other aberrations), no false objects, and not oversharpening the data (shown by large scale 'ringing' around objects, Moire patterns, etc.). Any remaining edge effects should be much lower (as percentage of the peak value) than the original haze.

The real question to ask is whether the deconvolution gives you data that is easier to segment, and that more clearly shows structures (particularly structures you a priori know the details of), etc.


3. Objective grounds for judging deconvolutions?

There have been several methods suggested - a quick look via Google Scholar on "deconvolution evaluation" shows a few of interest, such as:

Zuo et al 2011, "A PDE-Based Perceptual Deconvolution Image Edge Ringing Artifact Metric", http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6079464
Crilly 1991, "A quantitative evaluation of various iterative deconvolution algorithms", http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=87019
Harikumar et al. 1996, "Analysis and comparative evaluation of techniques for multichannel blind deconvolution", http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=534884

Essentially the evaluations I have seen look at speed of deconvolution, iterations required to converge, severity of ringing/aliasing/Moire effects, full-width half-max improvements, etc. There are really a number of possible criteria available.

The clearest criteria would come from taking known (artificial) objects, blurring them with the PSF of your system (which can be experimentally or theoretically derived) plus some noise, and evaluating different deconvolutions against the original for RMS errors, peak distortions, and so on. However, I am not aware of any large scale effort that does this with multiple algorithms/vendors.
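For anyone who wants to try that kind of evaluation themselves, here is a hedged 2-D sketch (the same idea extends to 3-D): blur a known phantom with a stand-in PSF, add Poisson noise, deconvolve, and compare with the ground truth. It uses scikit-image's Richardson-Lucy purely as an example algorithm, not as a stand-in for any vendor's implementation, and the Gaussian PSF and object sizes are invented:

import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(0)

# known phantom: three bright discs on a dark background
truth = np.zeros((256, 256))
yy, xx = np.indices(truth.shape)
for cy, cx in [(64, 64), (128, 140), (200, 90)]:
    truth[(yy - cy) ** 2 + (xx - cx) ** 2 <= 5 ** 2] = 100.0

# stand-in Gaussian PSF; in practice use your measured or theoretical PSF
g = np.exp(-((yy - 128) ** 2 + (xx - 128) ** 2) / (2 * 4.0 ** 2))
psf = g / g.sum()

blurred = fftconvolve(truth, psf, mode="same")
noisy = rng.poisson(blurred).astype(float)   # photon (shot) noise

restored = richardson_lucy(noisy, psf, 30, clip=False)

rmse = np.sqrt(np.mean((restored - truth) ** 2))
print("RMS error vs. ground truth:", rmse)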

---

Long story short - does the deconvolution permit a better analysis of the sample than without it? Or do artifacts or insufficient removal of blur either inhibit analysis, or worse yet lead to incorrect conclusions? Those are really the most basic criteria.


Kevin Ryan
Senior Project Manager

Media Cybernetics, Inc.
 

-----Original Message-----
From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Brian Northan
Sent: Wednesday, February 01, 2012 1:26 PM
To: [hidden email]
Subject: Re: Practical aspects - Wide field deconvolution

*****
To join, leave or search the confocal microscopy listserv, go to:
http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

...  Disclaimer: I used to work for a commercial deconvolution company
but am not associated with any commercial software at the moment.

Good questions though.

1.  Ideally your PSF should be the same size as your image.
Internally the deconvolution software will make the image and the PSF
the same size anyway...  and will likely pad both of them.  You want
the PSF to be large so that the values near the edges are close to
zero.  This will minimize any artifacts from "padding".   Some
software does extensive preprocessing of a measured PSF.  So depending
on how the software is preprocessing the PSF you may be able to get
away with a smaller PSF (that is the software will pad it in a clever
way so edge effects are minimized).

3.  After deconvolution the "hourglass" like haze should be removed.
As Mike mentions the best criteria is to simply image objects of known
shape and test using those.   Judging subjectively shouldn't be done
as the image that looks best to a human is different than the image
that is most quantitative.

One of the other major complications is spherical aberration.   If
there is SA in your image there will be more haze in one of the axial
directions (the hourglass will not be symmetric).   So if you are
using a theoretical PSF model make sure it can handle SA.   (When
generating the PSF there should be the option of entering sample
refractive index and distance from coverslip.  Even if you do not know
these exactly enter approximations, or else the software may make
assumptions).

Brian

On Wed, Feb 1, 2012 at 11:42 AM, MODELCHAEL <[hidden email]> wrote:

> *****
> To join, leave or search the confocal microscopy listserv, go to:
> http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
> *****
>
> Hi Mark, from my limited experience, "looking good" can be a slippery thing. Some years ago we were looking for deconvolution software and I received test images deconvolved by I don't remember who, some very respectable software company, their images looked just fantastic! But after some tinkering and questioning I realized that all the improvement was achieved by playing with contrast! My take on this issue is that deconvolved images must correctly represent the sample, and this is not necessarily the same as looking good. You may apply deconvolution to some kind of test sample whose structure you know, for example, fluorescent bacteria, and see what you get.
>
> Mike Model
>
> -----Original Message-----
> From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Mark Adelman (Work)
> Sent: Wednesday, February 01, 2012 11:00 AM
> To: [hidden email]
> Subject: Practical aspects - Wide field deconvolution
>
> *****
> To join, leave or search the confocal microscopy listserv, go to:
> http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
> *****
>
> Please excuse the cross posting, but I have previously received
> helpful responses from both lists.
>
> I am doing wide field deconvolution of fluorescence images (z-stacks),
> primarily using ImageJ, but also some other tools.  Have reached the
> point where I can routinely get fairly nice results, but suspect I
> could do better if I knew a bit more 'down in the weeds' details.  So
> am requesting suggested readings related to the following:
>
> 1.  Are there guidelines as to how big (x,y,z) a PSF data set
> (experimental or theoretical) should be - relative to the specimen
> data set - in order to achieve best deconvolution results?  To what
> extent are such guidelines different for different deconvolution
> algorithms?
>
> 2.  If a deconvolution looks 'good' in the slices that include the
> specimen, to what extent is it 'OK' to ignore speckle/haze (generated
> during the deconvolution) in slices above/below the specimen slices?
>
> 3.  Are there 'objective' methods for deciding - between two
> deconvolutions that look 'good' - which result is 'better'?  I have
> tried using image calculator and other tools and find that two
> deconvolution results that satisfy all criteria (of which I am aware)
> as to being 'good' almost always show slight differences that look
> like 'ghosts' of the deconvolved image sets.  That is, the different
> deconvolutions look quite satisfactory but differ from one another to
> very small degrees.  Is it common to ignore such differences, or have
> people published criteria/methods for 'choosing' between/amongst them?
>
> Please feel free to contact me 'off list' if you feel a general answer
> would be wasting people's time.
>
> Thanks!
>
> Mark R. Adelman
######################################################################################
CONFIDENTIALITY NOTICE:
This email transmission and its attachments contain confidential and proprietary information
of Princeton Instruments, Acton Research, Media Cybernetics and their affiliates and is
intended for the exclusive and confidential use of the intended recipient. Any use, dissemination,
printing, or copying of this transmission and its attachment(s) is strictly prohibited. If you
are not the intended recipient, please do not read, print, copy, distribute or take action in
reliance upon this message.  If you have received this in error, please notify the sender immediately
by telephone or return email and promptly delete all copies of the original transmission and its
attachments from your computer system.
#######################################################################################
Brian Northan

Re: Practical aspects - Wide field deconvolution

Disclaimer: Kevin used to be my manager when I worked on deconvolution
in the commercial world, so my knowledge of deconvolution has been
greatly influenced by him...  Kevin and I spent many hours looking
over images and going over the nuts and bolts of the algorithms to
ensure quantitative results.

Kevin brings up one of the trickiest concepts in deconvolution for
beginners to grasp: even after deconvolution, the apparent size of
objects is still subject to the resolution limits.  Thus, due to the
anisotropic resolution of the microscope, a point becomes an hourglass
in the image, and the hourglass becomes a football after deconvolution
rather than a perfect point.

People often ask why we don't get perfect points back.  In fact it is
easy enough to run the deconvolution in such a way that you get perfect
spheres; there are many simple tweaks and strategies that would
accomplish that.  The reason not to is that other artifacts can occur
and small features can be wiped out.

The resolution limits are just that: no information is passed below the
limit.  So while, by tweaking the deconvolution, you can get "apparent"
features with a FWHM of x where x is below the limit, this is not real
resolution, and two points x apart would not be distinguished.
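A toy 1-D illustration of that last point, with a Gaussian stand-in for the PSF (the ~250 nm FWHM is just an assumed figure): two equal points closer together than roughly the PSF width produce a single-peaked blur, and there is no dip left for any deconvolution to legitimately recover.

import numpy as np

x = np.linspace(-2.0, 2.0, 801)       # microns, 5 nm sampling
sigma = 0.25 / 2.355                  # Gaussian PSF with an assumed ~250 nm FWHM

def blurred_pair(separation_um):
    """Blurred image of two equal point sources a given distance apart."""
    a = np.exp(-((x + separation_um / 2) ** 2) / (2 * sigma ** 2))
    b = np.exp(-((x - separation_um / 2) ** 2) / (2 * sigma ** 2))
    return a + b

for sep in (0.40, 0.25, 0.10):
    profile = blurred_pair(sep)
    has_dip = profile[len(x) // 2] < profile.max()   # any dip between the two peaks?
    print(int(sep * 1000), "nm apart -> dip between peaks:", bool(has_dip))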

Brian


James Pawley

Re: Practical aspects - Wide field deconvolution

In reply to this post by Kevin Ryan
Good list Brian,

I might add that the reason Tim Holmes has chapters in the
handbook is that he was very keen on making just the kind of
comparisons that you note. Because of the difficulty of creating a
3D fluorescent object of known shape and size, Tim "created" such an
object in the computer and then made from it a 3D set of simulated
image data, assuming various optical criteria. In the process he was
careful to include an appropriate level of camera-and-Poisson noise.

He would then feed these data into various algorithms and see which of
them came closest to "extracting" the original 3D structure from the
noisy data. While the processed results always had less noise and a
more "convincing" (appealing?) appearance, I was particularly
impressed that, at meetings, he would point out major discrepancies
between what went in and what was "reconstructed".
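For readers who want to try the same exercise, the kind of simulated test data described above can be mocked up in a few lines; this is only a sketch with an invented object and a Gaussian stand-in PSF (a real test should use a realistic widefield PSF and camera parameters):

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)

# "created" ground-truth object: a hollow spherical shell in a small 3-D volume
zz, yy, xx = np.indices((32, 128, 128))
r = np.sqrt((zz - 16) ** 2 + (yy - 64) ** 2 + (xx - 64) ** 2)
truth = np.where((r > 10) & (r < 13), 200.0, 0.0)   # expected photons per voxel

# stand-in 3-D Gaussian PSF, broader in z than in xy
pz, py, px = np.indices((17, 33, 33))
psf = np.exp(-(((pz - 8) / 3.0) ** 2 + ((py - 16) / 1.5) ** 2 + ((px - 16) / 1.5) ** 2) / 2)
psf /= psf.sum()

blurred = fftconvolve(truth, psf, mode="same")

# camera-and-Poisson noise: photon shot noise, then camera offset and read noise
photons = rng.poisson(blurred)
image = photons + rng.normal(loc=100.0, scale=2.0, size=photons.shape)

# 'image' is what gets fed to the algorithms; 'truth' is what the result is compared against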

I don't have any other specific references, but I am sure they can be
found if you look up Tim Holmes.

Best,

Jim P.

***************************************************************************
Prof. James B. Pawley                                    Ph. 608-238-3953
21 N. Prospect Ave., Madison, WI 53726 USA
[hidden email]
3D Microscopy of Living Cells Course, June 9-21, 2012, UBC, Vancouver, Canada
Info: http://www.3dcourse.ubc.ca/          Application deadline 3/16/2012
               "If it ain't diffraction, it must be statistics." Anon. 11/16/12

Brian Northan

Re: Practical aspects - Wide field deconvolution

In reply to this post by Kevin Ryan
Chapter 24 of your 2006 confocal handbook is a good one.  For my home
library I only have the 2006 edition, but I believe there were good
and different articles by Tim and collaborators in every edition.

The other thing that Tim Holmes (and David Biggs too) were very good
at was reaching out to people in the scientific community and
maintaining many active collaborations.  Deconvolution (and other
inverse imaging problems in bioscience) is really hard... not so much
because of the actual mechanics of solving the equations, but because
of the uncertainty in our knowledge of the physical models (PSF
modelling, spherical aberration and such).  No product is perfect and
this knowledge is always evolving.

Tim and Dave did an amazing job of having the algorithm guys on the
phone every single week with collaborators and of getting us out to
labs and conferences talking to other researchers, establishing
important connections between optics, bioscience and computer science.

Brian

Brian Northan Brian Northan
Reply | Threaded
Open this post in threaded view
|

Re: Practical aspects - Wide field deconvolution

*****
To join, leave or search the confocal microscopy listserv, go to:
http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

Actually, I double-checked, and the 2006 handbook article uses mostly real
examples.   Some of the simulations were done much earlier.  There is a
nice series of papers between 1988 and 1992 that covers all of that.

http://www.opticsinfobase.org/abstract.cfm?uri=josaa-9-7-1052

On Mon, Feb 6, 2012 at 5:32 AM, Brian Northan <[hidden email]> wrote:

> Chapter 24 of your 2006 confocal handbook is a good one.   For my home
> library I only have the 2006 version, but I believe there were good
> and different articles by Tim and collaborators in every version.
>
> The other thing that Tim Holmes (and David Biggs too) was very good
> at was reaching out to people in the scientific community and
> maintaining many active collaborations.  Deconvolution (and other
> inverse imaging problems in bioscience) is really hard... not so much
> because of the actual mechanics of solving the equations but because of
> the uncertainty in our knowledge of the physical models (PSF
> modelling, spherical aberrations and such).  No product is perfect and
> this knowledge is always evolving.
>
> Tim and Dave did an amazing job having the algorithms guys on the
> phone every single week with collaborators and getting us out to labs
> and conferences talking to other researchers, establishing important
> connections between the optics, bioscience and computer-science communities.
>
> Brian
>
>
>
>
>
>
> On Sun, Feb 5, 2012 at 1:35 PM, James Pawley <[hidden email]> wrote:
>> *****
>> To join, leave or search the confocal microscopy listserv, go to:
>> http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
>> *****
>>
>> Good list Brian,
>>
>> I might add that the reason Tim Holmes has chapters in the handbook is
>> that he was very keen on making just the kind of comparisons that you
>> note. Because of the difficulty of creating a 3D fluorescent object of
>> known shape and size, Tim "created" such an object in the computer and then
>> made from it a 3D set of simulated image data, assuming various optical
>> conditions. In the process he was careful to include an appropriate level of
>> camera-and-Poisson noise.
>>
>> He would then feed this data into various algorithms and see which of them
>> came closest to "extracting" the original 3D structure from the noisy data.
>> While the processed results always had less noise and a more "convincing"
>> (appealing?) appearance, I was particularly impressed that, at meetings, he
>> would point out major discrepancies between what went in and what was
>> "reconstructed".
>>
>> I don't have any other specific references, but I am sure that they can be
>> found if you look up Tim Holmes's publications.
>>
>> Best,
>>
>> Jim P.
>>
>> ***************************************************************************
>> Prof. James B. Pawley,                                      Ph. 608-238-3953
>>                           21. N. Prospect Ave. Madison, WI 53726 USA
>> [hidden email]
>> 3D Microscopy of Living Cells Course, June 9-21, 2012, UBC, Vancouver Canada
>> Info: http://www.3dcourse.ubc.ca/                 Application deadline
>> 3/16/2012
>>               "If it ain't diffraction, it must be statistics." Anon.
>> 11/16/12
>>
>>
>>> *****
>>> To join, leave or search the confocal microscopy listserv, go to:
>>> http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
>>> *****
>>>
>>> * Disclaimer - Media Cybernetics is the supplier of AutoQuant
>>> deconvolution software.
>>>
>>> Excellent questions! Here are my responses based upon the experience of
>>> having written such software multiple times over the years:
>>>
>>>
>>> 1. How big a PSF?
>>>
>>>  Ideally, an experimental PSF should be as large as your data set.
>>> Theoretical PSFs tend to be created at that size as well, although there may
>>> be a limit, depending on the software package, where extremely low values of
>>> the PSF distant from the center are dropped as not significant.
>>>
>>>  The real issue here is your _data set size_. It needs to be large
>>> enough to include the 3D blur of your sample - ideally going far enough
>>> above and below the sample that the blur becomes uniform, lacking in
>>> detail and information. Clipping your acquisition too close to the sample
>>> makes it extremely difficult to reconstruct objects lying close to the top
>>> or bottom of the acquired volume.
>>>
>>> Side to side can be rather closer - just keep in mind that objects close
>>> to the edges of the acquired volume will be missing some reconstruction data
>>> as well. I would recommend sampling sufficiently far out to get a reasonable
>>> portion (>80-90%) of the cumulative 'hourglass' PSF intensity within the
>>> sample volume, perhaps 3-5 microns at high magnification, and being
>>> judicious about results on the _very_ edges.
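>>>
>>>  As a rough way to put a number on that last point, here is a sketch
>>> assuming Python with numpy; psf is a hypothetical 3D PSF array (a measured
>>> bead stack or a computed PSF), and the radius is in pixels:
>>>
>>> import numpy as np
>>>
>>> def lateral_energy_fraction(psf, radius_px):
>>>     # Fraction of total PSF intensity within radius_px of the axis (XY)
>>>     nz, ny, nx = psf.shape
>>>     cy, cx = np.argwhere(psf == psf.max())[0][1:]   # lateral centre
>>>     y, x = np.ogrid[:ny, :nx]
>>>     mask = (y - cy) ** 2 + (x - cx) ** 2 <= radius_px ** 2
>>>     return psf[:, mask].sum() / psf.sum()
>>>
>>> # e.g. with 0.1 um pixels, a 40-pixel radius asks whether ~4 um of
>>> # lateral margin captures the suggested 80-90% of the PSF intensity:
>>> # print(lateral_energy_fraction(psf, radius_px=40))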
>>>
>>>
>>> 2. Good in plane, what about above/below haze/speckle?
>>>
>>>  Z resolution is always going to be harder to recover than XY resolution,
>>> due to the nature of optical imaging. Properly done, widefield deconvolution
>>> takes a point object (which gives an 'hourglass' in widefield data) down to a
>>> much smaller 'football', with roughly 3x the Z extent compared to the XY extent.
>>>
>>>  In judging the quality of the deconvolution, look for Z symmetry in the
>>> results (a lack of it means insufficient correction for spherical or other
>>> aberrations), an absence of false objects, and no oversharpening of the data
>>> (shown by large-scale 'ringing' around objects, Moire patterns, etc.). Any
>>> remaining edge effects should be much lower (as a percentage of the peak
>>> value) than the original haze.
>>>
>>>  The real question to ask is whether the deconvolution gives you data that
>>> is easier to segment, and that more clearly shows structures (particularly
>>> structures whose details you know a priori).
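>>>
>>>  One way to make those checks concrete, assuming Python with numpy; decon
>>> is a hypothetical deconvolved stack containing an isolated sub-resolution
>>> bead:
>>>
>>> import numpy as np
>>>
>>> def fwhm(profile):
>>>     # Crude full width at half maximum of a 1-D profile, in samples
>>>     profile = profile - profile.min()
>>>     above = np.where(profile >= profile.max() / 2.0)[0]
>>>     return above[-1] - above[0] + 1
>>>
>>> def bead_metrics(decon):
>>>     z, y, x = np.unravel_index(np.argmax(decon), decon.shape)
>>>     axial = decon[:, y, x].astype(float)
>>>     return {
>>>         'fwhm_z_px': fwhm(axial),                          # multiply by dz for microns
>>>         'fwhm_x_px': fwhm(decon[z, y, :].astype(float)),   # multiply by dxy
>>>         # far from 1.0 suggests residual axial asymmetry (e.g. uncorrected SA)
>>>         'above_below_ratio': axial[z + 1:].sum() / max(axial[:z].sum(), 1e-12),
>>>     }
>>>
>>> # print(bead_metrics(decon))
>>>
>>>  After converting to microns, the Z/XY FWHM ratio should land near the ~3x
>>> figure above, and the above/below ratio flags the symmetry problem.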
>>>
>>>
>>> 3. Objective grounds for judging deconvolutions?
>>>
>>> There have been several methods suggested - a quick look via Google
>>> Scholar on "deconvolution evaluation" shows a few of interest, such as:
>>>
>>> Zuo et al 2011, "A PDE-Based Perceptual Deconvolution Image Edge Ringing
>>> Artifact Metric",
>>> http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6079464
>>> Crilly 1991, "A quantitative evaluation of various iterative deconvolution
>>> algorithms", http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=87019
>>> Harikumar et al. 1996, "Analysis and comparative evaluation of techniques
>>> for multichannel blind deconvolution",
>>> http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=534884
>>>
>>> Essentially the evaluations I have seen look at speed of deconvolution,
>>> iterations required to converge, severity of ringing/aliasing/Moire effects,
>>> full-width half-max improvements, etc. There are really a number of possible
>>> criteria available.
>>>
>>> The clearest criteria would come from taking known (artificial) objects,
>>> blurring them with the PSF of your system (which can be experimentally or
>>> theoretically derived) plus some noise, and evaluating different
>>> deconvolutions against the original for RMS errors, peak distortions, and so
>>> on. However, I am not aware of any large scale effort that does this with
>>> multiple algorithms/vendors.
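>>>
>>>  A sketch of that last idea, assuming Python with numpy and three
>>> hypothetical arrays of identical shape - truth (the known object), plus
>>> decon_a and decon_b from two different algorithms or settings:
>>>
>>> import numpy as np
>>>
>>> def score(decon, truth):
>>>     # Simple per-volume error measures against the known object
>>>     err = decon - truth
>>>     return {
>>>         'rms_error': float(np.sqrt(np.mean(err ** 2))),
>>>         'peak_distortion': float(decon.max() - truth.max()),
>>>         'total_intensity_error': float(decon.sum() - truth.sum()),
>>>     }
>>>
>>> # for name, d in [('A', decon_a), ('B', decon_b)]:
>>> #     print(name, score(d, truth))
>>>
>>>  Whichever result scores consistently better against the known object is the
>>> more defensible choice, independent of which one "looks" better.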
>>>
>>> ---
>>>
>>> Long story short - does the deconvolution permit a better analysis of the
>>> sample than without it? Or do artifacts or insufficient removal of blur
>>> either inhibit analysis, or worse yet lead to incorrect conclusions? Those
>>> are really the most basic criteria.
>>>
>>>
>>> Kevin Ryan
>>> Senior Project Manager
>>>
>>> Media Cybernetics, Inc.
>>>
>>>
>>> -----Original Message-----
>>> From: Confocal Microscopy List [mailto:[hidden email]]
>>> On Behalf Of Brian Northan
>>> Sent: Wednesday, February 01, 2012 1:26 PM
>>> To: [hidden email]
>>> Subject: Re: Practical aspects - Wide field deconvolution
>>>
>>> *****
>>> To join, leave or search the confocal microscopy listserv, go to:
>>> http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
>>> *****
>>>
>>> ...  Disclaimer: I used to work for a commercial deconvolution company
>>> but am not associated with any commercial software at the moment.
>>>
>>> Good questions though.
>>>
>>> 1.  Ideally your PSF should be the same size as your image.
>>> Internally the deconvolution software will make the image and the PSF
>>> the same size anyway...  and will likely pad both of them.  You want
>>> the PSF to be large so that the values near its edges are close to
>>> zero; this will minimize any artifacts from "padding".   Some
>>> software does extensive preprocessing of a measured PSF, so depending
>>> on how the software preprocesses the PSF you may be able to get
>>> away with a smaller one (that is, the software will pad it in a clever
>>> way so edge effects are minimized).
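>>>
>>>  For what it's worth, a minimal sketch of one way such centre-padding can
>>> be done, assuming Python with numpy; psf and img are hypothetical arrays
>>> for the measured PSF and the image stack, with the PSF smaller than the
>>> image in every dimension:
>>>
>>> import numpy as np
>>>
>>> def pad_psf_to_image(psf, img_shape):
>>>     # Centre the PSF in a zero array the size of the image stack
>>>     out = np.zeros(img_shape, dtype=float)
>>>     start = [(s - p) // 2 for s, p in zip(img_shape, psf.shape)]
>>>     sl = tuple(slice(st, st + p) for st, p in zip(start, psf.shape))
>>>     out[sl] = psf
>>>     return out / out.sum()      # keep the PSF normalised to unit sum
>>>
>>> # psf_big = pad_psf_to_image(psf, img.shape)
>>>
>>>  If the measured PSF is not already close to zero at its edges, this kind
>>> of hard zero-padding is exactly where the padding artifacts come from.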
>>>
>>> 3.  After deconvolution the "hourglass"-like haze should be removed.
>>> As Mike mentions, the best criterion is to simply image objects of known
>>> shape and test using those.   Judging subjectively shouldn't be done,
>>> as the image that looks best to a human is not necessarily the image
>>> that is most quantitatively accurate.
>>>
>>> One of the other major complications is spherical aberration.   If
>>> there is SA in your image there will be more haze in one of the axial
>>> directions (the hourglass will not be symmetric).   So if you are
>>> using a theoretical PSF model, make sure it can handle SA.   (When
>>> generating the PSF there should be an option to enter the sample
>>> refractive index and the distance from the coverslip.  Even if you do not
>>> know these exactly, enter approximations; otherwise the software may make
>>> its own assumptions.)
>>>
>>> Brian
>>>