Peter Werner
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

An interesting point was made here by Jim Pawley:

> I agree that sampling a bit higher than Nyquist never hurts, especially if you deconvolve (as you always should), but I think that it is a mistake to think that one can "separate" out the noise by decon. I think that noise is pretty fundamental.

I had always heard that if you're doing confocal microscopy, at least point-scanning confocal with a pinhole size of 1 AU or smaller, deconvolution was superfluous, because you shouldn't be getting out-of-focus light. So what is gained by deconvolution when one is sampling voxel by voxel?

Peter G. Werner
Merritt College Microscopy Program
John Oreopoulos
Peter, there are some advantages to deconvolving even confocal imaging data. Until I took Pawley's course, I too had always seen these two imaging techniques as mutually exclusive, but that's not the case. Here is a passage from Chapter 2 of Pawley's Handbook:

"Although deconvolution and confocal imaging are often seen as competing methods aimed at the same goal, namely producing images of 3D stain distributions, in fact, they are not exclusive, and there is much to be said for combining them. Not only does deconvolution suppress the "single-pixel" features created by Poisson noise, it also effectively averages the signal in the Nyquist-sampled image of a point object. In other words, it has the same effect of reducing the uncertainty in the estimate of the brightness in individual voxels as Kalman averaging for 64 to 125 frames. This point is so important that the present edition of this volume devotes an entire chapter to it: Chapter 25."

Now, having repeated that passage, can I say that I deconvolve my confocal data all the time? No, mainly due to lack of access to deconvolution software, and partly due to time constraints. But I can assure you that your images will look a bit better after deconvolving them. Apparently all the best confocalists do this, but I seldom see it practiced in the literature. Have a look at Chapter 25 of his book. It's quite interesting.
John Oreopoulos
Research Assistant
Spectral Applied Research
Richmond Hill, Ontario, Canada
www.spectral.ca
Straatman, Kees (Dr.)
In reply to this post by Peter Werner
Dear Peter,

John Oreopoulos has already made it clear that deconvolution and CLSM are not mutually exclusive. I would like to add that every microscope system has a PSF, including the CLSM. As deconvolution tries to undo the blurring introduced by this PSF, you can see why you can deconvolve a confocal data set.

Best wishes

Kees
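Kees's point can be made concrete with a toy forward model (the numbers below are illustrative, not from the thread): any system with a PSF, confocal included, records the object convolved with that PSF, so the same inverse problem applies.

```python
import numpy as np

# Toy 1D forward model: image = object convolved with the PSF (noise omitted).
# The PSF here is made up for illustration; a real confocal PSF is narrower
# than the widefield one but still finite, which is why deconvolution applies.
obj = np.zeros(32)
obj[10] = 1.0                          # a single point source
psf = np.array([0.25, 0.5, 0.25])      # toy PSF, normalized to sum to 1
img = np.convolve(obj, psf, mode="same")
# The point spreads over its neighbours: img[9:12] becomes [0.25, 0.5, 0.25],
# and total intensity is conserved because the PSF sums to 1.
```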
James Pawley
In reply to this post by Peter Werner
Hi Peter,

This is indeed "widely assumed". However, it is beside the point. The process of decon is applied to 3D fluorescence microscopy data sets for (at least) two major reasons:

1) It removes much of the effect of out-of-focus light (if there is any) and therefore produces a major improvement in the visibility of structures in widefield data. For confocal data, the resolution improvement is much less significant.

2) It effectively smooths the data so that anything smaller than the PSF is removed from the processed result. This is good for two reasons:

a) The process effectively averages the intensity data of the 64-125 voxels needed to sample the central peak of the PSF, improving the S/N by a factor of between 8 and 11.

b) It also meets the Nyquist Reconstruction Condition: if you have 2.5 pixels between the central peak and the first zero, you have 5 pixels across the diameter of the first-zero ring, and (about) 25 pixels will be needed to sample this blob in 2D and (about) 125 voxels in 3D. That is 125 measurements just to record the brightness and location of one point. (The "about" has to do with how many counts one must register in a pixel for it to be considered part of the PSF, a PSF that will in general not be so conveniently aligned to the pixel grid. 100 might be a good guess for a 3D PSF.)

But it is possible to "know" (measure?) the PSF independently, and the PSF imposes constraints on the relative brightness of these 125 points. Decon is the best process for imposing this constraint onto the measured data. (For confocal, the 3D Gaussian mentioned earlier is a good approximation to the PSF. Although it is imprecise (it has a longer tail), if the Gaussian laser beam doesn't considerably overfill the objective aperture, the spot will in fact be more like a Gaussian than an Airy disk. More to the point, the data are usually so Poisson-noisy that it won't make a great difference, and it is a lot easier to do.)

The decon process may seem to reduce image contrast (by averaging down the bright noise peaks), but one can compensate for this by changing the display look-up table (contrast control), and when you do this, you will find that the processed confocal data are far less noisy than the raw data. The result is also free of "impossible" features that were really only Poisson noise excursions, usually one pixel wide (which is about four times smaller than the width of the PSF, the smallest feature that the optical system can pass legitimately). Indeed, one should never make statements regarding the exact shape of small structures (near the resolution limit) recorded in confocal microscopes until the data have been deconvolved. Nyquist would not approve.

Best

JP
--
***************************************************************************
Prof. James B. Pawley                            Ph. 608-238-3953
21 N. Prospect Ave., Madison, WI 53726 USA       [hidden email]
3D Microscopy of Living Cells Course, June 10-22, 2012, UBC, Vancouver, Canada
Info: http://www.3dcourse.ubc.ca/   Applications accepted after 11/15/12
"If it ain't diffraction, it must be statistics." Anon.
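Pawley's sampling arithmetic can be checked in a few lines. This sketch simply restates the numbers quoted in the post (2.5 pixels from peak to first zero) and the square-root law behind the quoted 8-11x S/N gain:

```python
import math

# Nyquist sampling of a point image, following the arithmetic in the post:
# 2.5 pixels between the central peak and the first zero of the Airy pattern.
pixels_to_first_zero = 2.5
diameter = 2 * pixels_to_first_zero           # 5 pixels across the first-zero ring

# Approximate number of samples covering the central blob.
samples_2d = math.ceil(diameter ** 2)         # ~25 pixels in 2D
samples_3d = math.ceil(diameter ** 3)         # ~125 voxels in 3D

# Averaging N independent Poisson-noisy measurements improves S/N by sqrt(N),
# which is where the quoted factor of ~8-11 for 64-125 voxels comes from.
snr_gain_low = math.sqrt(64)                  # 8.0
snr_gain_high = math.sqrt(125)                # ~11.2
```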
Lutz Schaefer
Dear Jim,

I do have a few problems with your recent statements:

> 1) It removes much of the effect of out-of-focus light (if there is any) and therefore produces a major improvement in the visibility of structures in widefield data. For confocal data, the resolution improvement is much less significant.

As deconvolution uses some inverse model of the forward problem (convolution), it will not remove intensities but re-assign them back to their originating source. I am not sure if you remember the discussion with Walter Carrington some time ago addressing this issue (or so he told me). It is in fact even possible to use only out-of-focus information to reconstruct its re-assigned in-focus contribution. Although this is not a preferable way to do the processing, Walter demonstrated this case in his paper.

> 2) It effectively smooths the data out so that anything smaller than the PSF is removed from the processed result. This is good for two reasons:

This statement can easily be misunderstood. Again, deconvolution is the inverse of a convolution model. However, due to its ill-posed and numerically unstable nature, practical implementations need to employ what is called regularization. Regularization can be understood as a form of smoothing of the estimated solution. Depending on the actual amount of noise present, improbable solutions will be excluded, or made to no longer be part of the set of probable solutions within the system of equations. Depending on how the regularization is implemented, and on what a-priori information is used, this smoothing may or may not preserve edges, intensities or other features. You can of course manually overweight the regularization term to do what you say, but normally there should be a balanced compromise between data fit and regularization.

Deconvolution typically increases uncorrelated statistical noise along with resolution and contrast, because the noise cannot easily be modeled together with the convolution operation. Maximum likelihood methods amplify the noise less than other methods do, for the noise statistics they were designed for. I have a problem accepting the statement that a deconvolution result is equivalent to a convolution filter, as you say. The often better SNR compared to the observed data can only be the result of regularization. Usually derivative-based regularizations of varying orders are used, and they therefore do not use the PSF as kernel. Let's say a Laplacian is used: then the smoothing will generally be of Gaussian nature with variable sigma, but never confined to the PSF. And then again, as commercial systems these days use a plethora of regularization methods (and useful combinations of them), you can never say for sure how the data will be smoothed, except by looking at the mathematical model that was actually used.

> The decon process may seem to reduce image contrast (by averaging-down the bright noise peaks) but one can compensate for

Any deconvolution that works effectively (i.e., is not over-regularized) will in fact increase the contrast, as dictated by the 're-assignment' mentioned above. Just imagine simulating a microscope imaging forward problem with a simple lowpass filter: you will see that the contrast of your object declines after filtering. From a deconvolution you expect the reverse, a good approximation to your original object, which had a higher contrast to begin with.

I hope I have not stirred up more questions, but there are good tutorials available that describe deconvolution in greater detail.

Regards
Lutz

__________________________________
L u t z  S c h a e f e r
Sen. Scientist
Mathematical modeling / Image processing
Advanced Imaging Methodology Consultation
16-715 Doon Village Rd.
Kitchener, ON, N2P 2A2, Canada
Phone/Fax: +1 519 894 8870
Email: [hidden email]
___________________________________
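The "balanced compromise between data fit and regularization" can be sketched with the simplest regularized inverse. This is a hypothetical 1D Tikhonov (ridge) example, not code from the thread; the setup (two point sources, Gaussian PSF, noise level) is invented for illustration. The estimate minimizes ||h*x - y||^2 + lam*||x||^2, which in the Fourier domain gives X = conj(H)·Y / (|H|^2 + lam).

```python
import numpy as np

def tikhonov_deconvolve(y, kernel, lam):
    """Deconvolve y by a zero-centered circular kernel with ridge weight lam."""
    H = np.fft.fft(kernel)
    X = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

n = 64
rng = np.random.default_rng(0)

x_true = np.zeros(n)                   # object: two point sources
x_true[20], x_true[40] = 1.0, 0.5

g = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)   # Gaussian "PSF", sigma = 2 px
g /= g.sum()
kernel = np.zeros(n); kernel[:17] = g
kernel = np.roll(kernel, -8)           # wrap so the PSF peak sits at index 0

# Forward problem: circular blur plus a little Gaussian noise.
y = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(kernel)))
y += 0.01 * rng.standard_normal(n)

x_hat = tikhonov_deconvolve(y, kernel, lam=1e-2)
# lam -> 0 approaches the noise-amplifying inverse filter; a large lam
# over-smooths; in between, the point sources sharpen and contrast rises,
# consistent with the 're-assignment' argument above.
```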
Mark Cannell
As I understand it, the original RL algorithm has no regularization; if the number of iterations is limited, the noise _is_ suppressed while contrast _is_ improved, and the improvement in resolution at this stage is small. However, since resolution is often limited by S/N in confocal, this is useful, as our real results have shown. That's not to say that regularization can't improve convergence, but in my limited experience regularizers just don't help much, and for that reason I haven't bothered adding them to our codes; we stop the RL routine as soon as any oscillations start to appear. It should be noted that our real data are noisier than the data employed in most tests of regularization routines.

my 2c and YMMV

Cheers Mark
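A minimal version of the procedure Mark describes — unregularized Richardson-Lucy stopped after a fixed small number of iterations — might look like the following. The helper names, test object and iteration count are illustrative, not Mark's actual code:

```python
import numpy as np

def richardson_lucy(y, kernel, n_iter=20):
    """Unregularized RL deconvolution of y by a zero-centered, normalized
    circular kernel; the limited iteration count acts as implicit regularization."""
    H = np.fft.fft(kernel)

    def blur(v):        # circular convolution with the PSF
        return np.real(np.fft.ifft(np.fft.fft(v) * H))

    def blur_adj(v):    # correlation with the PSF (adjoint of blur)
        return np.real(np.fft.ifft(np.fft.fft(v) * np.conj(H)))

    x = np.full_like(y, y.mean())          # flat, positive starting estimate
    for _ in range(n_iter):
        ratio = y / np.maximum(blur(x), 1e-12)
        x = x * blur_adj(ratio)            # multiplicative RL update
    return x

n = 64
x_true = np.zeros(n)
x_true[20], x_true[40] = 100.0, 50.0       # two sources, in photon counts

g = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
g /= g.sum()
kernel = np.zeros(n); kernel[:17] = g
kernel = np.roll(kernel, -8)

rng = np.random.default_rng(1)
blurred = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(kernel)))
y = rng.poisson(np.maximum(blurred, 0)).astype(float)   # Poisson noise

x_hat = richardson_lucy(y, kernel, n_iter=20)
# RL keeps the estimate non-negative and conserves total flux, while the
# early stop keeps the noise from being amplified.
```

The multiplicative update is what makes RL well matched to Poisson data: it maximizes the Poisson likelihood while preserving non-negativity and total counts at every iteration.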
Lutz Schaefer
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy ***** Mark, yes, the original RL is not regularized. Due to that it must converge to the instable fixed-point solution of the normal equations. The fact, that you stop the iterations can be interpreted as regularization! LR is usually good behaving with fluorescence data, it will seldom create a pseudo convergence, which is common to non-regularized deconvolution methods (such as Meinel or Jansson VanCittert). Pseudo convergence occurs when the residuum (or merit function) declines with the first few iterations and then increases, often rendering the result useless. Thankfully, RL incorporates Poisson likelihood in its data fit which makes it still attractive, although it requires a high number of iterations unless you use a numerical gradient descent as seen by David Biggs. Nevertheless, RL can be extended with regularization too. There are many nonlinear-adaptive approaches available these days. I tend to agree with you, the standard ones (Tikhonov Miller, Goods Roughness etc.) do not give spectacular results. Regards Lutz __________________________________ L u t z S c h a e f e r Sen. Scientist Mathematical modeling / Image processing Advanced Imaging Methodology Consultation 16-715 Doon Village Rd. Kitchener, ON, N2P 2A2, Canada Phone/Fax: +1 519 894 8870 Email: [hidden email] ___________________________________ -------------------------------------------------- From: "Mark Cannell" <[hidden email]> Sent: Monday, October 31, 2011 19:18 To: <[hidden email]> Subject: Re: Deconvolution of Confocal Images? 
(was: Airy Units) > ***** > To join, leave or search the confocal microscopy listserv, go to: > http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy > ***** > > As I understand it the original RL algorithm has no regularization, if > the number of iterations is limited the noise _is_ suppressed while > contrast _is_ improved, the improvement in resolution at this stage is > small. However, since resolution is often limited by S/N in confocal this > is useful -as our real results have shown. That's not to say that some > regularization may improve convergence, but in my limited experience they > just don't help much and for that reason I haven't bothered adding them to > our codes -we stop the RL routine as soon as any oscillations start to > appear. It should be noted that our real data is more noisy than employed > in most tests of regularization routines. > > my 2c and YMMV > > Cheers Mark > > On 31/10/2011, at 10:30 PM, Lutz Schaefer wrote: > >> >>> 2) It effectively smooths the data out so that anything smaller than the >>> PSF is removed from the processed result. This is good for two reasons: >>> >> This statement can be potentially misunderstood. Again, deconvolution is >> the inverse of a convolution model. However, due to its ill-posed and >> numerically instable nature, practical implementations need to employ >> what is called regularization. Regularization can be understood as a form >> of smoothing of the estimative solution. Depending on the actual amount >> of noise present, improbable solutions will be excluded or made to no >> longer be part of the set of probable solutions within the system of >> equations. Depending on how the regularization is implemented and using >> various a-priori information, this smoothing-part may or may not preserve >> edges and/or intensities or other features. 
You can of course manually >> overweigh the regularization term to do what you say, but normally there >> should be a balance compromise between data fit and regularization. >> Deconvolution typically increases uncorrelated statistical noise along >> with resolution and contrast. This is, because the noise can not be >> easily modeled together with a convolution operation. Maximum likelihood >> methods tend to increase the noise lesser for which statistics they were >> designed for than others. I do have a problem, accepting the statement >> that a deconvolution result is equivalent to a convolution filter as you >> say. The often better SNR compared to the observed data can only be the >> result of regularization. Usually derivative based regularizations of >> varying orders are used, and they therefore do not use the PSF as kernel. >> Lets say a Laplacian is used, then the smoothing will generally be of >> Gaussian nature with variable sigma but never confined to the PSF. And >> then again, as commercial systems these days use a plethora of methods >> for regularizations (and useful combinations of them), you can never say >> for sure, how the data will be smoothed, except when looking at the >> mathematical model that was actually used. >> >>> The decon process may seem to reduce image contrast (by averaging-down >>> the bright noise peaks) but one can compensate for >> Any deconvolution that works effectively (e.g is not over-regularized), >> will in fact increase the contrast, as dictated by the 're-assignment' >> mentioned above. Just imagine, simulating a microscope imaging forward >> problem with just a simple lowpass filter. You will see, that the >> contrast from your object declines after filtering. From a deconvolution >> you expect the reverse, to get a good approximation to your original >> object, which had a higher contrast to begin with. 
>> >> >> I hope I have not stirred up more questions but there are good tutorials >> available that describe deconvolution in greater detail. >> >> Regards >> Lutz >> >> __________________________________ >> L u t z S c h a e f e r >> Sen. Scientist >> Mathematical modeling / Image processing >> Advanced Imaging Methodology Consultation >> 16-715 Doon Village Rd. >> Kitchener, ON, N2P 2A2, Canada >> Phone/Fax: +1 519 894 8870 >> Email: [hidden email] >> ___________________________________ >> >> -------------------------------------------------- >> From: "James Pawley" <[hidden email]> >> Sent: Monday, October 31, 2011 14:22 >> To: <[hidden email]> >> Subject: Re: Deconvolution of Confocal Images? (was: Airy Units) >> >>> ***** >>> To join, leave or search the confocal microscopy listserv, go to: >>> http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy >>> ***** >>> >>>> ***** >>>> To join, leave or search the confocal microscopy listserv, go to: >>>> http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy >>>> ***** >>>> >>>> An interesting point was made here by Jim Pawley: >>>> >>>>> I agree that sampling a bit higher than Nyquist never hurts, >>>>> especially if you deconvolve (as you always should), but I think that >>>>> it is a mistake to think that one can "separate" out the noise by >>>>> decon. I think that noise is pretty fundamental. >>>> >>>> I had always heard that if you're doing confocal microscopy, at least >>>> point-scanning confocal with a pinhole size of 1AU or smaller, that >>>> deconvolution was superfluous, because you shouldn't be getting out of >>>> focus light. So what is gained by deconvolution when one is sampling >>>> voxel by voxel? >>>> >>>> Peter G. Werner >>>> Merritt College Microscopy Program >>> >>> Hi Peter, >>> >>> This is indeed "widely assumed". However, it is beside the point. 
>>> The process of decon is applied to 3D fluorescence microscopy data sets for (at least) two major reasons:
>>>
>>> 1) It removes much of the effect of out-of-focus light (if there is any) and therefore produces a major improvement in the visibility of structures in widefield data. For confocal data, the resolution improvement is much less significant.
>>>
>>> 2) It effectively smooths the data so that anything smaller than the PSF is removed from the processed result. This is good for two reasons:
>>>
>>> a) The process effectively averages the intensity data of the 64-125 voxels needed to sample the central peak of the PSF, improving the S/N by a factor of between 8 and 11.
>>>
>>> b) It also meets the Nyquist reconstruction condition: if you have 2.5 pixels between the central peak and the first zero, you have 5 pixels across the diameter of the first-zero ring, so (about) 25 pixels will be needed to sample this blob in 2D and (about) 125 voxels in 3D. That is 125 measurements just to record the brightness and location of one point. (The "about" has to do with how many counts one must register in a pixel for it to be considered part of the PSF - a PSF that will in general not be so conveniently aligned to the pixel grid. 100 might be a good guess for a 3D PSF.)
>>>
>>> But it is possible to "know" (measure?) the PSF independently, and the PSF imposes constraints on the relative brightness of these 125 points. Decon is the best process for imposing this constraint onto the measured data. (For confocal, the 3D Gaussian mentioned earlier is a good approximation to the PSF. Although it is imprecise - it has a longer tail - if the Gaussian laser beam doesn't considerably overfill the objective aperture, the spot will in fact be more like a Gaussian than an Airy disk. More to the point, the data are usually so Poisson-noisy that it won't make a great difference, and it is a lot easier to do.)
>>>
>>> The decon process may seem to reduce image contrast (by averaging down the bright noise peaks), but one can compensate for this by changing the display look-up table (contrast control), and when you do this you will find that the processed confocal data are far less noisy than the raw data. The result is also free from "impossible" features that were really only Poisson noise excursions, usually one pixel wide (about 4x smaller than the width of the PSF, the smallest feature that the optical system can legitimately pass). Indeed, one should never make statements regarding the exact shape of small structures (near the resolution limit) recorded in confocal microscopes until the data have been deconvolved. Nyquist would not approve.
>>>
>>> Best
>>>
>>> JP
>>> --
>>> Prof. James B. Pawley, Ph. 608-238-3953, 21 N. Prospect Ave., Madison, WI 53726 USA, [hidden email]
>>> 3D Microscopy of Living Cells Course, June 10-22, 2012, UBC, Vancouver, Canada
>>> Info: http://www.3dcourse.ubc.ca/ Applications accepted after 11/15/12
>>> "If it ain't diffraction, it must be statistics." Anon. |
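Pawley's voxel-counting and S/N arithmetic above can be checked in a few lines (a sketch that just restates his figures):

```python
import math

# Nyquist sampling of an Airy-disk-sized spot, following Pawley's figures
pixels_to_first_zero = 2.5            # samples from the central peak to the first zero
diameter = 2 * pixels_to_first_zero   # ~5 pixels across the first-zero ring
samples_2d = diameter ** 2            # ~25 pixels to sample the blob in 2D
samples_3d = diameter ** 3            # ~125 voxels in 3D

# Averaging N roughly independent voxel measurements improves S/N by sqrt(N)
snr_gain_low = math.sqrt(64)          # = 8, for ~64 voxels in the PSF core
snr_gain_high = math.sqrt(125)        # ~11.2, for the full ~125 voxels
```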
Keith Morris |
In reply to this post by John Oreopoulos
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

On a practical note, regarding whether to bother with deconvolving a confocal z-stack using our Zeiss 510 Metahead confocal and our Volocity 3D deconvolution [Restoration] software: the advice from one of Perkin Elmer's Volocity software engineers was that in practice you don't see much improvement, if any, using Volocity's Restoration module to deconvolve a standard confocal z-stack.

Traditionally, using our Zeiss 510 we double over-sample a z-stack, i.e. for a 0.9um optical slice [63x objective, Airy=1] we take a z-slice every 0.45um in the z direction. To get a significant improvement over that with Volocity's Restoration module, Perkin Elmer support recommended oversampling by at least 10 [perhaps even up to 100] times rather than just 2. In practice the confocal is limited to a minimum focus z-step of around 0.05um, giving about 20 z scans when oversampling through each '0.9um optical slice' [although you can push up the numbers, and the scan time, by applying image averaging as well]. This increases the scan time by about 10x for each z-stack, to typically over twenty minutes at 1024x1024 image sizes. Plus you have to make time to apply Volocity's Fast Restoration, or the slower Iterative Restoration, to the image z-stack.

Although time-consuming, z-stack oversampling with deconvolution does get a little more detail from our 3D z-stacks - it's just a question of whether you want to take another half hour to go through the acquisition and deconvolution process for every z-stack, as the extra detail may not provide anything more informative. Volocity support added that Volocity's Restoration deconvolution module predictably looks its best when dealing with a brighter and fuzzier z-stack from a normal wide-field microscope - where the out-of-focus light can be put to use rather than simply be excluded by the confocal iris.
There is talk on the listserver of confocal deconvolution also producing superior 3D spatial information for quantification of sub-micron structures, as well as improving resolution. Without something like serial TEM section corroboration of the structure you can't really allay the fear that the extra detail might include processing artefacts, although most of this extra detail can be seen in the original fuzzier confocal z-stack once the deconvolution has highlighted it. These days, though, I expect you'd also look towards a colleague's super-resolution STED/PALM system.

For those in the UK with an interest in using Volocity 3D software, next week Perkin Elmer are running a two-day Volocity user training course in London on the 8th and 9th November 2011 [price £300 a day, £550 both days], see: http://now.eloqua.com/e/es.aspx?s=643&e=123818&elq=9c93259d524640798ee7b233ebf0246c Day 1: Volocity Essentials; Day 2: Volocity Quantitation.

With our Volocity 3D software it tends to be the Quantitation module we use the most, with standard Zeiss non-deconvolved 2x over-sampled confocal z-stacks, as it can measure intracellular structure volume and track objects. We have the old Volocity v4.2 [latest is v6.0], although it's used less frequently here than our 2D image analysis options, MetaMorph v7.7 and ImageJ.

Regards

Keith

I have placed a deconvolved confocal ~20x oversampled 45-slice z-stack 3D image at http://www.well.ox.ac.uk/cytogenetics/deconvolved.jpg It's a slice from the top of a couple of Invitrogen FocalCheck fluorescent microspheres [Volocity Fast Restoration, see right image], where the diameter of the slice shown is about 10um and the depth of the ring about 2um.

---------------------------------------------------------------------------
Dr Keith J. Morris, Molecular Cytogenetics and Microscopy Core, Laboratory 00/069 and 00/070, The Wellcome Trust Centre for Human Genetics, Roosevelt Drive, Oxford OX3 7BN, United Kingdom.
Telephone: +44 (0)1865 287568 Email: [hidden email] Web-pages: http://www.well.ox.ac.uk/molecular-cytogenetics-and-microscopy |
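Keith's sampling figures above reduce to simple arithmetic (a sketch; the variable names are invented for illustration):

```python
# Keith's figures: a 0.9 um optical slice, normally sampled every 0.45 um
# (2x oversampling), versus the stage's minimum z step of 0.05 um
optical_slice_um = 0.9
normal_step_um = 0.45
min_step_um = 0.05

z_positions_per_slice = optical_slice_um / min_step_um  # ~18, i.e. "about 20 z scans"
scan_time_factor = normal_step_um / min_step_um         # ~9, i.e. "about 10x" longer
```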
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

I'm a bit puzzled by this "0.9µm optical slice". If this is an NA 1.4 lens, Z resolution should be 400-500nm - I've measured this FWHM on 100nm beads, so this is practical, not theoretical. So you should be sampling at 200nm or less in Z to meet Nyquist. This is before thinking about oversampling for decon.

Guy

Optical Imaging Techniques in Cell Biology, by Guy Cox, CRC Press / Taylor & Francis http://www.guycox.com/optical.htm
______________________________________________
Associate Professor Guy Cox, MA, DPhil(Oxon)
Australian Centre for Microscopy & Microanalysis,
Madsen Building F09, University of Sydney, NSW 2006
Phone +61 2 9351 3176 Fax +61 2 9351 7682
Mobile 0413 281 861
______________________________________________
http://www.guycox.net |
Cameron Nowell |
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

Hi All,

There is an "optical slice thickness" calculation in each confocal system, isn't there? Zeiss use this: www.fileden.com/files/2011/10/11/3207952/Zeiss%20Optical%20Thickness.png and Olympus use this: www.fileden.com/files/2011/10/11/3207952/Olympus%20Optical%20Thickness.png

I assume the other companies use some variation of them as well. But of course, as Guy points out, you should sample based on the measured (or theoretical) resolution of your system.

Cheers

Cam

Cameron J. Nowell
Microscopy Manager
Centre for Advanced Microscopy
Ludwig Institute for Cancer Research
Melbourne - Parkville Branch
PO Box 2008, Royal Melbourne Hospital
Victoria, 3050, AUSTRALIA
Office: +61 3 9341 3158 Mobile: +61 422882700 Fax: +61 3 9341 3104
Facility Website |
Well, I can't make much sense out of these. The Zeiss one has a pinhole diameter in µm but no magnification figure, so I can't see how that computes. The Olympus one doesn't say what the pinhole size is measured in, and it also contains m and M, which are not explained - I assume one of these is the magnification at the pinhole.
The formula I have (from Colin Sheppard) dz = 0.4 lambda / n.sin^2 (alpha/2) where dz is FWHM in Z, lambda is the wavelength, n is the refractive index and alpha is the half-angle of the objective. It assumes the pinhole is open no wider than the Airy disk, and works out to 390nm for lambda 500, alpha 72 and n 1.5. This seems to be achievable in practice. Opening the pinhole wider is also opening a can of worms - you are getting some mix of confocal and widefield which will be a bit of a dog's breakfast. (How's that for a glorious mixed metaphor!) Guy Optical Imaging Techniques in Cell Biology by Guy Cox CRC Press / Taylor & Francis http://www.guycox.com/optical.htm ______________________________________________ Associate Professor Guy Cox, MA, DPhil(Oxon) Australian Centre for Microscopy & Microanalysis, Madsen Building F09, University of Sydney, NSW 2006 Phone +61 2 9351 3176 Fax +61 2 9351 7682 Mobile 0413 281 861 ______________________________________________ http://www.guycox.net -----Original Message----- From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Cameron Nowell Sent: Thursday, 3 November 2011 11:10 AM To: [hidden email] Subject: Re: Deconvolution of Confocal Images? Volocity Course London ***** To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy ***** Hi All, There is an "optical slice thickness" calculation in each confocal system isn't there? Ziess use this www.fileden.com/files/2011/10/11/3207952/Zeiss%20Optical%20Thickness.png and Olympus use this www.fileden.com/files/2011/10/11/3207952/Olympus%20Optical%20Thickness.png I assume the other companies use some variation of them as well. But of course as Guy points out you should sample based on the measured (or theoretical) resolution of your system Cheers Cam Cameron J. 
Nowell Microscopy Manager Centre for Advanced Microscopy Ludwig Institute for Cancer Research Melbourne - Parkville Branch PO Box 2008 Royal Melbourne Hospital Victoria, 3050 AUSTRALIA Office: +61 3 9341 3158 Mobile: +61 422882700 Fax: +61 3 9341 3104 Facility Website -----Original Message----- From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Guy Cox Sent: Thursday, 3 November 2011 9:27 AM To: [hidden email] Subject: Re: Deconvolution of Confocal Images? Volocity Course London ***** To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy ***** I'm a bit puzzled by this "0.9µm optical slice". If this is an NA 1.4 lens Z resolution should be 400-500nm, I've measured this FWHM on 100nm beads, so this is practical not theoretical. So you should be sampling at 200nm or less in Z to meet Nyquist. This is before thinking about oversampling for decon. Guy Optical Imaging Techniques in Cell Biology by Guy Cox CRC Press / Taylor & Francis http://www.guycox.com/optical.htm ______________________________________________ Associate Professor Guy Cox, MA, DPhil(Oxon) Australian Centre for Microscopy & Microanalysis, Madsen Building F09, University of Sydney, NSW 2006 Phone +61 2 9351 3176 Fax +61 2 9351 7682 Mobile 0413 281 861 ______________________________________________ http://www.guycox.net -----Original Message----- From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Keith Morris Sent: Thursday, 3 November 2011 12:59 AM To: [hidden email] Subject: Re: Deconvolution of Confocal Images? 
Volocity Course London

***** To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy *****

On a practical note, regarding whether to bother deconvolving a confocal z-stack using our Zeiss 510 MetaHead confocal and our Volocity 3D deconvolution [Restoration] software: the advice from one of Perkin Elmer's Volocity software engineers was that in practice you don't see much improvement, if any, using Volocity's Restoration module to deconvolve a standard confocal z-stack.

Traditionally, using our Zeiss 510 we double over-sample a z-stack, i.e. for a 0.9um optical slice [63x objective, Airy=1] we take a z-slice every 0.45um in the z direction. In order to get a significant improvement over that with Volocity's Restoration module, Perkin Elmer support recommended oversampling by at least 10 [perhaps even up to 100] times rather than just 2. In practice the confocal is limited to a minimum focus z-step of around 0.05um, giving about 20 Z scans when oversampling through each '0.9um optical slice' [although you can push up the numbers, and the scan time, by applying image averaging as well]. This increases the scan time by about 10x for each z-stack, to typically over twenty minutes at 1024x1024 image sizes. Plus you have to make time to apply Volocity's Fast Restoration, or the slower Iterative Restoration, to the image z-stack. Although time consuming, z-stack oversampling with deconvolution does get a little more detail from our 3D z-stacks - it's just a question of whether you want to take another half hour to go through the acquisition and deconvolution process for every z-stack, as the extra detail may not provide anything more informative.

Volocity Support added that Volocity's Restoration deconvolution module predictably looks its best when dealing with a brighter and fuzzier z-stack from a normal wide-field microscope - where the out-of-focus light can be put to use rather than simply excluded by the confocal iris. There is talk on the listserver of confocal deconvolution also producing superior 3D spatial information for quantification of sub-micron structures, as well as improving resolution. Without something like serial TEM section corroboration of the structure you can't really allay the fear that the extra detail might include processing artefacts, although most of this extra detail can be seen in the original, fuzzier confocal z-stack once the deconvolution has highlighted it. These days, though, I expect you'd also look towards a colleague's super-resolution STED/PALM system.

For those in the UK with an interest in using Volocity 3D software, next week Perkin Elmer are running a two-day Volocity user training course in London on the 8th and 9th November 2011 [price £300 a day, £550 both days], see: http://now.eloqua.com/e/es.aspx?s=643&e=123818&elq=9c93259d524640798ee7b233ebf0246c Day 1: Volocity Essentials, Day 2: Volocity Quantitation.

With our Volocity 3D software it tends to be the Quantitation module we use the most, with standard Zeiss [non-deconvolved] 2x over-sampled confocal z-stacks, as it can measure intracellular structure volume and track objects. We have the old Volocity v4.2 [latest is v6.0], although it's used less frequently here than our 2D image analysis options, MetaMorph v7.7 and ImageJ.

I have placed a deconvolved confocal ~20x oversampled 45-slice z-stack 3D image at http://www.well.ox.ac.uk/cytogenetics/deconvolved.jpg It's a slice from the top of a couple of Invitrogen FocalCheck fluorescent microspheres [Volocity Fast Restoration, see right image], where the diameter of the slice shown is about 10um and the depth of the ring about 2um.

Regards, Keith
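[To make the sampling arithmetic in Keith's figures concrete, here is a minimal Python sketch; the 0.9um optical slice, 0.05um minimum focus step and 2x oversampling baseline are the numbers from the message above, and the function name is just for illustration:]

```python
# Rough z-stack sampling arithmetic for the figures quoted above:
# a 0.9 um optical slice, a 0.05 um minimum focus z-step, and a
# conventional 2x oversampling baseline.

def z_steps_per_slice(optical_slice_um, z_step_um):
    """Number of z-positions needed to step through one optical slice."""
    return round(optical_slice_um / z_step_um)

optical_slice = 0.9   # um, 63x/NA 1.4 objective at 1 Airy unit (TRITC)
min_step = 0.05       # um, practical z-motor limit on this stand

baseline = z_steps_per_slice(optical_slice, optical_slice / 2)  # 2x oversampling -> 2
maximal = z_steps_per_slice(optical_slice, min_step)            # -> 18, i.e. "about 20"

# Acquisition time scales roughly with the number of z-positions,
# hence the "about 10x" longer scan quoted above.
print(maximal, maximal / baseline)  # 18 positions per slice, 9x the 2x stack
```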
--------------------------------------------------------------------------- Dr Keith J. Morris, Molecular Cytogenetics and Microscopy Core, Laboratory 00/069 and 00/070, The Wellcome Trust Centre for Human Genetics, Roosevelt Drive, Oxford OX3 7BN, United Kingdom. Telephone: +44 (0)1865 287568 Email: [hidden email] Web-pages: http://www.well.ox.ac.uk/molecular-cytogenetics-and-microscopy

This communication is intended only for the named recipient and may contain information that is confidential, legally privileged or subject to copyright; the Ludwig Institute for Cancer Research Ltd does not waive any rights if you have received this communication in error. The views expressed in this communication are those of the sender and do not necessarily reflect the views of the Ludwig Institute for Cancer Research Ltd. |
Keith Morris |
In reply to this post by Guy Cox-2
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy *****

Hi Guy,

The Zeiss 510, unlike the new Zeiss 710 confocal, has three pinholes, so you can adjust the confocal pinhole for each fluorochrome/channel - unless you are using the MetaHead for a couple of channels, in which case you are forced to share detector gain and pinhole. The LSM 510 software reports the Airy diameter and the optical slice thickness for the objective in use, i.e. with Airy=1 the optical slice is reported as 0.9um for TRITC, 0.7um for FITC and 0.6um for DAPI when using our 63x objective [Zeiss Plan Apochromat NA 1.4 oil]. As you say, the optical slice varies with wavelength, so as we have adjustable pinholes on the Zeiss we can adjust the DAPI and FITC channels' confocal pinholes to a slightly higher Airy number [Airy=1.25 and 1.5] to give the same optical slice thickness for each fluorochrome [so all fluorochrome configs are set to a 0.9um optical slice]. We don't have to think about it too deeply, as the LSM 510 software tells us all the numbers, since it knows the objective, fluorochrome, etc. The fluorescent beads mentioned in my posting were scanned in the TRITC red channel.

As I say, the lowest limit of our Axiovert 200M z-motor is a 0.05um step, around 50nm. In practice, for 'quality' z-stack scans most Zeiss 510 users generally select the standard Zeiss LSM 510 2x oversampling option for z-stacks of fixed cells [as it's an easy software button], which sets the steps to half the LSM 510-calculated optical slice while keeping the same first/last z-slice locations.

Regards, Keith

--------------------------------------------------------------------------- Dr Keith J. Morris, Molecular Cytogenetics and Microscopy Core, Laboratory 00/069 and 00/070, The Wellcome Trust Centre for Human Genetics, Roosevelt Drive, Oxford OX3 7BN, United Kingdom.
Telephone: +44 (0)1865 287568 Email: [hidden email] Web-pages: http://www.well.ox.ac.uk/molecular-cytogenetics-and-microscopy |
Cameron Nowell |
In reply to this post by Guy Cox-2
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy *****

Hi Guy, Sorry, I forgot to include the other bits of the Olympus formula - here is the full thing: www.fileden.com/files/2011/10/11/3207952/Olympus%20Optical%20Thickness%20v2.png Olympus measure their pinhole in um.

Cheers, Cam

-----Original Message----- From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Guy Cox Sent: Thursday, 3 November 2011 6:04 PM To: [hidden email] Subject: Re: Deconvolution of Confocal Images? Volocity Course London

Well, I can't make much sense of these. The Zeiss one has a pinhole diameter in µm but no magnification figure, so I can't see how that computes. The Olympus one doesn't say what the pinhole size is measured in, and it also contains m and M, which are not explained; I assume one of these is the magnification at the pinhole. The formula I have (from Colin Sheppard) is dz = 0.4*lambda / (n*sin^2(alpha/2)), where dz is the FWHM in Z, lambda is the wavelength, n is the refractive index and alpha is the half-angle of the objective. It assumes the pinhole is open no wider than the Airy disk, and works out to 390nm for lambda = 500nm, alpha = 72° and n = 1.5. This seems to be achievable in practice. Opening the pinhole wider is also opening a can of worms - you are getting some mix of confocal and widefield, which will be a bit of a dog's breakfast. (How's that for a glorious mixed metaphor!)
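[As a quick numerical check of the Sheppard expression quoted above, a minimal Python sketch, using the same example values; the function name is illustrative only:]

```python
import math

def axial_fwhm_nm(wavelength_nm, half_angle_deg, n):
    """Sheppard's confocal axial resolution, dz = 0.4*lambda / (n*sin^2(alpha/2)),
    valid for a pinhole opened no wider than the Airy disk."""
    alpha = math.radians(half_angle_deg)
    return 0.4 * wavelength_nm / (n * math.sin(alpha / 2) ** 2)

dz = axial_fwhm_nm(500, 72, 1.5)   # ~386 nm, i.e. the ~390 nm quoted above
nyquist_z_step = dz / 2            # ~193 nm: sample at ~200 nm or finer in Z
print(round(dz), round(nyquist_z_step))
```

This also agrees with the earlier point in the thread that an NA 1.4 lens with a measured Z FWHM of 400-500nm should be sampled at 200nm or less in Z to meet Nyquist.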
Guy

-----Original Message----- From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Cameron Nowell Sent: Thursday, 3 November 2011 11:10 AM To: [hidden email] Subject: Re: Deconvolution of Confocal Images? Volocity Course London

***** To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy *****

Hi All, There is an "optical slice thickness" calculation in each confocal system, isn't there? Zeiss use this: www.fileden.com/files/2011/10/11/3207952/Zeiss%20Optical%20Thickness.png and Olympus use this: www.fileden.com/files/2011/10/11/3207952/Olympus%20Optical%20Thickness.png I assume the other companies use some variation of them as well. But of course, as Guy points out, you should sample based on the measured (or theoretical) resolution of your system.

Cheers, Cam |
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy *****

The real resolution probably depends much more strongly on the particular objective than on some of the terms of this long formula.

Mike |