Andy Molnar
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

**COMMERCIAL RESPONSE, MEDIA CYBERNETICS**

Hi Andrew,

As an early disclosure, I work on AutoQuant X by Media Cybernetics, which is a competing product to Huygens, so I'm not in the best position to answer your first question about how their software works. But I'll try to cover what I can.

To answer your second question, the short answer is that there really isn't one be-all-and-end-all approach. Some methods work more quickly but may not produce as good a result; some work especially well in certain situations but not in others. A significant stumbling block tends to be how algorithms deal with noise. Without emphatically declaring it the "clear winner", one popular approach among constrained iterative methods is maximum-likelihood estimation (MLE), in part because of its relative robustness against noise. It also has the benefit of working well in a wide variety of situations. Aside from its merits, I also single MLE out because AutoQuant X and, from what information I could find*, Huygens both use this method as the basis of their primary deconvolution algorithms.

Another aspect to consider is blind vs. non-blind techniques. A blind deconvolution refines the point-spread function (PSF) iteratively along with the image estimate; a non-blind deconvolution refines only the image estimate. As preferences between them tend to be subjective, I encourage you to try both to determine which produces the best results for you.

In all cases, it's important to start with the best PSF possible. If you're measuring your PSF (as opposed to using a theoretically calculated one), you want your sub-resolution bead to be imaged under conditions as close to identical to your specimen's as possible.
If you're working with a blind deconvolution, the PSF refinement allows some latitude there, but the best starting points will still produce the best results.

As for the third question, if you are interested in taking a look at AutoQuant X, there's a bibliography on our site of papers that describe the algorithms we use: http://www.mediacy.com/index.aspx?page=AutoQuant_Bibliography

A couple of particular note there are:

Holmes, T. J. (1992). "Blind Deconvolution of Quantum-Limited Incoherent Imagery." Journal of the Optical Society of America A 9(7): 1052-1061.

Holmes, T. J. and Y. H. Liu (1991). "Acceleration of Maximum-Likelihood Image-Restoration for Fluorescence Microscopy and Other Noncoherent Imagery." Journal of the Optical Society of America A 8(6): 893-907.

Best Regards,
Andy

* - I'm basing my assertion about Huygens' approach (and the extent of my ability to answer the first question) on this guide: http://nic.ucsf.edu/dokuwiki/lib/exe/fetch.php?media=huygens:essentialworkshopguide_3.7-1.pdf

-------------------
Media Cybernetics
401 North Washington Street, Suite 350
Rockville, MD 20850
tel +1 301.495.3305
fax +1 301.656.2387
www.mediacy.com
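[Editor's note: a minimal sketch of the kind of constrained iterative MLE Andy describes. This is not AutoQuant's or Huygens's actual implementation (both are proprietary); it is the textbook Richardson-Lucy iteration, the classic non-blind MLE for Poisson noise. The toy data, sizes, and parameters are all illustrative.]

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=200):
    """Richardson-Lucy iteration: the classic MLE for Poisson noise.

    Positivity is preserved automatically, because every update is a
    product of non-negative factors. `psf` should have odd length so
    that convolving with its mirror image is the exact adjoint of the
    forward blur in 'same' mode.
    """
    psf = psf / psf.sum()                         # unit-energy PSF
    psf_mirror = psf[::-1]                        # adjoint kernel
    estimate = np.full_like(image, image.mean())  # flat, positive start
    for _ in range(n_iter):
        blur = np.convolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blur, 1e-12)   # avoid division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: two point sources blurred by a Gaussian PSF.
truth = np.zeros(64)
truth[20], truth[28] = 100.0, 60.0
k = np.arange(31)
psf = np.exp(-0.5 * ((k - 15) / 3.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=200)
```

On noiseless data like this, the iterations sharpen the blurred profile back toward the two point sources while conserving total flux; on real (noisy) images the iteration count trades resolution against noise amplification.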
David Baddeley
I find MLE to be a singularly uninformative description of a deconvolution algorithm. The majority of useful deconvolution algorithms (a notable exception being the Van Cittert algorithm) are maximum-likelihood estimators for some set of assumptions and constraints. E.g.:

- the Wiener filter is an MLE in the case of a known amount of Gaussian-distributed noise and no constraints on object positivity or smoothness
- Tikhonov-Miller is an MLE under Gaussian noise and a smoothness constraint
- ICTM (iterative constrained Tikhonov-Miller) is an MLE for Gaussian noise, a smoothness constraint, and positivity
- Richardson-Lucy is an MLE for Poisson noise and no smoothness constraint (Poisson noise implies a positivity constraint)
- Huygens and AutoQuant are, as far as I can infer, MLEs for Poisson noise and some unknown smoothness, total-variation, or similar constraint.

In general for light microscopy you want an MLE for Poisson noise and positivity. Further constraints are often also useful, particularly if the input data is noisy. This is, however, not necessarily the case for SIM images, in which the noise model will no longer be Poissonian.

The MLE designation also says nothing about the method of solving the MLE problem. There are a number of different methods for finding a numerical solution to the maximum-likelihood system, and each has different properties/benefits. ICTM, for example, is commonly solved using a conjugate-gradient solver, which has a high per-step cost but converges in a relatively small number of steps. The R-L algorithm uses its own simpler iteration scheme with a lower per-step cost, but requires many more steps to converge. I have no idea what iteration scheme the commercial efforts use, as this tends to be fairly opaque (I know some offerings use a modified R-L solver, but I don't think this is the case for e.g. Huygens).
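[Editor's note: to make the taxonomy above concrete, here is a minimal sketch of its simplest entry, the Wiener filter — the closed-form MLE for Gaussian noise with no positivity or smoothness constraint. The toy data are illustrative, and `nsr` stands in for the assumed constant noise-to-signal power ratio.]

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-4):
    """Wiener filter: closed-form MLE under Gaussian noise.

    No positivity constraint, so the result may ring negative.
    `nsr` damps frequencies that the PSF transmits weakly.
    """
    H = np.fft.fft(np.fft.ifftshift(psf))      # zero-phase transfer function
    G = np.fft.fft(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener transfer function
    return np.real(np.fft.ifft(W * G))

# Toy example: blur two point sources circularly, then restore.
truth = np.zeros(64)
truth[20], truth[28] = 100.0, 60.0
x = np.arange(64)
psf = np.exp(-0.5 * ((x - 32) / 3.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * H))
restored = wiener_deconvolve(blurred, psf, nsr=1e-4)
```

Unlike the iterative Poisson-noise methods, this is a single FFT-domain division, which is why it is fast but fragile: all the modelling is packed into the one `nsr` parameter.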
For your particular case (SIM) you'd probably need a custom algorithm if you wanted to do blind deconvolution, as the PSF models in current packages are unlikely to describe the effective PSF you get in a reconstruction.

cheers,
David

________________________________
From: Andy Molnar <[hidden email]>
To: [hidden email]
Sent: Tuesday, 2 October 2012 8:19 AM
Subject: Re: Deconvolution advice (commercial response)