Search the CONFOCAL archive at
http://listserv.acsu.buffalo.edu/cgi-bin/wa?S1=confocal

Hi everyone,

After reading many articles and books, and after lots of playing around with algorithms, the unknowns about 2D deconvolution seem to accumulate rather than disappear. Since I'm not sure whether I'm still on the right track, I thought I'd drop a few lines in this excellent forum; hopefully someone can help.

Leaving aside other fancy features, and for the sake of simplicity: I set up a widefield scope for live imaging with pulsed dual-laser excitation by AOMs, an EM-CCD camera, no piezo-driven stage, and a 60x water objective. Before seriously starting to investigate biological matters, I remembered the statement, which I also came across in James Pawley's recent book: "Deconvolve everything!!" I'm not an expert in deconvolution and have never run it myself before, so would you experienced "deconvolutionists" agree that it's worthwhile for 2D images?

2D deconvolution would clearly be limited compared to "ideal" 3D deconvolution, since one does not have the information from adjacent planes. Thus, it's quite inaccurate, if not impossible, to remove out-of-focus light, isn't it? On the other hand, one could still recover high spatial frequencies which were attenuated by the OTF (actually by the NA), and not to forget removing Poisson noise. Is the noise removal due to the fact that features smaller than the PSF size will be neglected during deconvolution, or because the spatial frequency of noise is most likely to be beyond the OTF limit (2NA/lambda laterally and NA²/(2*n*lambda) axially)? Is there a reference for the range of frequencies at which noise would be expected, or is it rather evenly distributed (i.e., when looking at the Fourier transform of an image)?

Due to the missing automated stage I can't record the PSF of my system. Normally one records the PSF/OTF on the actual system, basically to include eventual misalignment, underfilling of the back aperture, etc.
If I remember correctly, the DeltaVision guys measure the objectives separately and use this measured PSF for deconvolution (leaving blind decon aside for now). Is this because of the high precision of their scope, hence eliminating eventual misalignment effects on the OTF? What are general experiences in terms of measuring an objective's PSF on a different system compared to the "real" one in the actual scope (spherical aberrations, asymmetric shapes)?

As to deconvolution results: how can one judge whether an algorithm produced a correct result? Obviously, an experienced eye will see most of the artifacts, but what are objective, reliable and measurable characteristics in order to say that this image has been improved or that one is clearly ruined? Signal-to-noise ratio, Fourier analysis, image properties (i.e., speckles, ringing effects)?

What is an appropriate way of defining the S/N ratio for images? I ran across many different methods, such as: mean(I)/std(I), or max(I)/std(I), or maybe applying a morphological operation, distinguishing signal and background, and then mean(signal)/sqrt(mean(noise)+mean(signal)). Which one is a commonly accepted method for S/N calculation in image processing?

With respect to appropriate deconvolution algorithms, essentially all references say that iterative algorithms are superior to linear filters (due to applying constraints (i.e., non-negativity), not being a simple high-pass filter, etc.). Because of the missing PSF in my particular case, blind decon seems to be the right choice. Or does someone disagree on that one? Applying the blind deconvolution in Matlab to various images of fluorescently stained cells, also having different noise levels, revealed rather disappointing results. The more iterations I ran (more than 10), the more speckled the images became.
Strangely, the magnitude of the Fourier-transformed image always tends to form a weird symmetrical pattern (I'm happy to send the images to those interested). Varying the initial PSF size doesn't help either, and the reconstructed image becomes increasingly noisy after more than 15 iterations. Does someone know about this problem? Has someone already successfully deconvolved 2D data with Matlab? The only reason I could think of is that my system is far from perfectly aligned, thus leading to "non-reconstructable" images. Due to the missing stage, I can't check that issue. However, from my understanding the blind deconvolution should be able to deal with asymmetric PSFs (if it's not too far off). Any further ideas?

Another simple question that's still in my head: when referring to the lateral size of the PSF, is it the distance between the first two minima around the Airy disc, or the size including further minima, the FWHM, or simply the practical approach - the diameter in pixels of a subresolution bead in focus?

Well, that post became quite long. Sorry for asking so many questions at once. Many, many thanks in advance for any input, it'll be very much appreciated!

Best regards,
Steffen Steinert, Dipl.-Ing.
---------------------------
Universität Stuttgart
3. Physikalisches Institut
Pfaffenwaldring 57
70550 Stuttgart
Tel.: 49/711/68565230
---------------------------- |
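The lateral PSF size measures Steffen lists (Rayleigh radius, FWHM) and the lateral OTF cutoff 2NA/lambda are easy to put into numbers. A minimal sketch, assuming example values (a 60x/1.2 water objective and 520 nm emission, not necessarily his actual setup):

```python
def lateral_metrics(na, wavelength_nm):
    """Common lateral PSF/OTF size measures for a widefield scope.
    Returns (Rayleigh radius in nm, Airy-core FWHM in nm,
    lateral OTF cutoff in cycles/um)."""
    rayleigh_nm = 0.61 * wavelength_nm / na             # radius to first Airy minimum
    fwhm_nm = 0.51 * wavelength_nm / na                 # FWHM of the central Airy peak
    cutoff_per_um = 2 * na / (wavelength_nm / 1000.0)   # 2NA/lambda
    return rayleigh_nm, fwhm_nm, cutoff_per_um

r, f, c = lateral_metrics(1.2, 520.0)   # assumed example: 60x/1.2 water, green emission
print(round(r), round(f), round(c, 2))  # 264 221 4.62
```

Nyquist sampling then asks for a pixel size no larger than 1/(2 * cutoff), i.e. about 108 nm at the sample for these assumed values.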
Steffen Steinert wrote:
> Before seriously starting to investigate biological matters, I
> remembered the statement, which I also came across in James
> Pawley's recent book: "Deconvolve everything!!".
Yes!
> I'm not an expert in deconvolution and never ran it myself before, so
> would you experienced "deconvolutionists" agree that it's worthwhile for
> 2D images?
Yes!
> 2D deconvolution would clearly be limited compared to "ideal"
> 3D deconvolution, since one does not have the information from
> adjacent planes. Thus, it's quite inaccurate if not even impossible to
> remove out-of-focus light, isn't it?
Yes!
> On the other hand, one could still recover high spatial frequencies
> which were attenuated by the OTF (actually by the NA) and not to
> forget removing Poisson noise. Is the noise removal due to the fact
> that features smaller than the PSF size will be neglected during
> deconvolution or because the spatial frequency of noise is most likely
> to be beyond the OTF limit (2NA/lambda and NA²/(2*n*lambda))? Is there
> a reference at what range of frequencies noise would be expected or is
> it rather evenly distributed (i.e. when looking at the
> Fourier transform of an image)?
Yes, decon still provides the optimal filter for the data. The out-of-focus light cannot be removed (unless the sample is so thin there is none, or you are using TIRF) without 3D data. The noise occupies the entire sampling frequency range.
> Due to the missing automated stage I can't record the PSF of my
> system. Normally one records the PSF/OTF on the actual system,
> basically to include eventual misalignment, underfilling of the back
> aperture etc. If I remember correctly, I think the DeltaVision guys
> measure the objectives separately and use this measured PSF for
> deconvolution (leaving blind decon aside for now).
> Is this because of
> the high precision of their scope, hence eliminating eventual
> misalignment effects on the OTF? What are general experiences in terms
> of measuring an objective's PSF on a different system compared to the
> "real" one in the actual scope (spherical aberrations, asymmetric
> shapes)?
Provided you have a good RI match and proper rear aperture illumination, the PSF for a high quality lens (say plan apo) is very close to the theoretical wide field prediction. If you don't pay attention to these factors then an aberrated PSF always results. It should be noted that pure spherical aberration may not be as bad as you might think, as it may be offset by defocus in thin samples.
> As to deconvolution results. How can one judge whether an algorithm
> produced a correct result? Obviously, an experienced eye will see most
> of the artifacts, but what are objective, reliable and measurable
> characteristics in order to say that this image has been improved or
> that one is clearly ruined? Signal-to-noise ratio, Fourier analysis,
> image properties (i.e. speckles, ringing effects)?
A very big question, and one that has no simple answer. There is no 'correct' result in the presence of noise. The question is which result is 'closest', and your definition of 'closest' needs to be spelled out. Is 'close' the least noisy result, or the one with the highest spatial frequencies, etc.?
> What is an appropriate way of defining the S/N ratio for images? I
> ran across many different methods, such as: mean(I)/std(I) or
> max(I)/std(I) or maybe applying a morphological operation,
> distinguishing signal and background and then
> mean(signal)/sqrt(mean(noise)+mean(signal)). Which one is a commonly
> accepted method for S/N calculation in image processing?
S/N is just that: signal over sqrt(variance(signal)). Just make sure you know what the signal is (i.e. not background)... You can calculate the variance from repeated images.
>
> With respect to appropriate deconvolution algorithms, essentially all
> references say that iterative algorithms are superior to linear
> filters (due to applying constraints (i.e. non-negativity), not being a
> simple high-pass filter, etc.). Because of the missing PSF in my
> particular case, blind decon seems to be the right choice. Or does
> someone disagree on that one?
Why don't you actually measure the PSF? Failing that, use the calculated wide field PSF.
> Applying the blind deconvolution in Matlab to various images of
> fluorescently stained cells, also having different noise levels,
> revealed rather disappointing results.
I am not a fan of blind deconvolution. It is unclear to me how, in a complex sample with no 3D data, it could possibly work...

Hope this helps.

Cheers |
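Mark's recipe - signal over the square root of its variance, with the variance estimated from repeated images - can be sketched in a few lines of numpy. A minimal illustration (the Poisson test data at the end are an assumption for the sanity check, not from the thread):

```python
import numpy as np

def snr_from_repeats(frames):
    """Per-pixel S/N from N repeated images of the same static field:
    mean signal divided by the standard deviation across the repeats."""
    stack = np.asarray(frames, dtype=float)   # shape (N, H, W)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0, ddof=1)           # unbiased estimate across repeats
    return mean / np.maximum(std, 1e-12)      # guard against zero variance

# Sanity check: for pure Poisson counts around 100, S/N should be ~ sqrt(100) = 10.
rng = np.random.default_rng(0)
frames = rng.poisson(100.0, size=(200, 32, 32))
print(float(snr_from_repeats(frames).mean()))  # close to 10
```

Note that this measures the shot-noise-limited S/N of the static signal; background pixels get their own (much lower) S/N, which connects to Steffen's later question about separating signal from background first.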
In reply to this post by Steffen Steinert
Another poster already answered various questions. See the reviews by Conchello and Lichtman in the December 2005 issue of Nature Methods (Lichtman and Conchello, in the same issue, had a nice review on fluorescence microscopy). See especially Figure 5 of C&L. Conchello's XCOSM is (apparently) now available; see http://www.omrfcosm.omrf.org/joomla/philaform/Download_XCOSM/

Some thoughts:
* Acquire multiple images (of fluorescent beads), deconvolve each, and see how they compare.
* Acquire multiple images at different exposure times (say, 2x changes from oversaturation, near saturation, ... several down to dim) and quantify (classic SNR calculations).
* Compare to standard unsharp masking (i.e. in Photoshop), and to unsharp masking followed by median filtering.

Alberto Diaspro has a web site, Power Up Your Microscope, with free deconvolution. You might try that: www.powermicroscope.com (it looks like you have to register before the other links become active).

Even if your microscope does not have a focus motor, you can use its tick marks to focus to 1.0 um (or better?), which is enough for a 40x high-NA dry objective lens. Be sure to focus upwards, so that the moving part is being moved against gravity (so the gear teeth are always engaged, which minimizes slippage).

best wishes,

George

George McNamara, Ph.D.
University of Miami, Miller School of Medicine
Image Core
Miami, FL 33010
[hidden email] [hidden email]
305-243-8436 office |
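George's last comparison point - unsharp masking, optionally followed by median filtering - is easy to prototype. A minimal numpy sketch (a box blur stands in for the Gaussian blur a Photoshop-style unsharp mask would use, and the parameter values are only illustrative):

```python
import numpy as np

def box_blur(img, k=3):
    """k x k mean filter; a crude stand-in for a Gaussian blur."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0, k=3):
    """Classic unsharp mask: original + amount * (original - blurred)."""
    img = np.asarray(img, dtype=float)
    return img + amount * (img - box_blur(img, k))
```

A median filter afterwards (e.g. scipy.ndimage.median_filter) knocks down the noise that the sharpening step amplifies, which is the second baseline George suggests comparing against.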
In reply to this post by Steffen Steinert
Steffen,

first, the image formation in the microscope is inherently a 3D problem. Therefore, by limiting yourself to lateral 2D you will have to expect a less 'good' approximation to the real dye intensity distribution of your sample. In my experience, lateral resolution can be improved only moderately at best, provided you use a nonlinear constrained iterative method and proper background correction (so the positivity constraint can work properly). The lateral 2D PSF is usually very small, and therefore not much visual improvement can be expected.

> On the other hand, one could still recover high
> spatial frequencies which were attenuated by the OTF (actually by the NA)

When those frequencies have a significantly higher magnitude than your noise, then you have a good chance of reconstructing them with deconvolution. Don't expect wonders...

> and not to forget removing Poisson noise. Is the noise removal due to the
> fact that features smaller than the PSF size will be neglected during
> deconvolution or because the spatial frequency of noise is most likely to
> be beyond the OTF limit (2NA/lambda and NA²/(2*n*lambda))?

None of the above: the noise removal is due to the model assumption that the data must be smooth. This is a necessary constraint for maintaining stability when attempting to solve inverse ill-posed problems, and it is usually called regularization, relaxation or even a penalty. This penalty is forcefully imposed to keep the solution more or less smooth, which competes with fitting the data to the PSF. Therefore many decon packages give the user the choice of a desired balance (which is subjective to human perception).

> Is there a
> reference at what range of frequencies noise would be expected or is it
> rather evenly distributed (i.e. when looking at the Fourier transform of an
> image)?

I would assume it's spatially even; however, Poissonian noise is dependent on intensity.
Therefore you get the typical 'salt and pepper' appearance that is absent with Gaussian noise.

> What are general
> experiences in terms of measuring an objective's PSF on a different system
> compared to the "real" one in the actual scope (spherical aberrations,
> asymmetric shapes)?

This does not really make sense, because you want to know your PSF, not that of another system. You might as well use the theoretical PSF instead, perhaps modifying it by changing the NA parameter slightly whilst observing the decon result for a small object.

> As to deconvolution results. How can one judge whether an algorithm
> produced a correct result? Obviously, an experienced eye will see most of
> the artifacts, but what are objective, reliable and measurable

As Mark Cannell said, there is no straightforward answer. Deconvolution is always an ill-posed problem, with or without noise. Therefore, the results are always approximations! You have to see decon as a tool that can restore your data to the best possible result the underlying (programmed) model can offer you. This is usually done by some strict means of numerical optimization. Decon packages normally have you make the decision of how smooth you want your result. Artefacts are a different issue - one that would be too lengthy to get into in this reply. Perhaps you might want to get a copy of our paper, which gets into that issue maybe more from your perspective: Wallace, W., Schaefer, L.H. & Swedlow, J. A Workingperson's Guide to Deconvolution in Light Microscopy, Biotechniques, 31 (2001), 1076-1097.

> What is an appropriate way of defining the S/N ratio for images? I ran
> across many different methods, such as: mean(I)/std(I) or max(I)/std(I) or
> maybe applying a morphological operation, distinguishing signal and
> background and then mean(signal)/sqrt(mean(noise)+mean(signal)). Which one
> is a commonly accepted method for S/N calculation in image processing?

Again I agree with Mark.
Determining the SNR from just a data set is not trivial. A statistical method called 'Generalized Cross Validation' can be used in conjunction with the imaging model to determine a good estimate of it. We use it to determine how much regularization is needed.

> Blind decon seems to be the right choice. Or does someone disagree on that one?

The non-blind decon is already an ill-posed problem. In blind decon, you additionally have a highly underdetermined numerical problem to solve - whose results at best converge only under various, very strict constraints and a-priori assumptions. The choice of what to use is yours...

> Another simple question that's still in my head. When referring to the
> lateral size of the PSF, is it the distance between the first two minima around
> the Airy disc, or the size including further minima, the FWHM, or simply the
> practical approach - the diameter in pixels of a subresolution bead in focus?

There are several standards. The most common for lateral resolution is the Rayleigh criterion (radius to the first zero). Axial resolution is often given as FWHM. I believe in Jim Pawley's Handbook you will find a whole chapter on resolution and sampling as well.

Hope it was of some help....

Regards
Lutz

____________________________________
Lutz Schaefer
Advanced Imaging Methodology Consultation
Kitchener, Ontario, Canada
Phone/Fax: (519) 894 8870
Website: http://home.golden.net/~lschafer/
____________________________________ |
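Both Mark and Lutz suggest falling back on a theoretical PSF when none can be measured. A common stand-in is a 2D Gaussian with sigma of roughly 0.21*lambda/NA (a widefield approximation; the NA, wavelength and pixel size below are assumed example values, not anyone's actual setup):

```python
import numpy as np

def gaussian_psf(size, pixel_nm, na, wavelength_nm):
    """Gaussian approximation of the in-focus widefield PSF,
    normalized to unit sum so deconvolution preserves total intensity."""
    sigma_px = 0.21 * wavelength_nm / na / pixel_nm   # approx. Airy-core sigma, in pixels
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_px**2))
    return psf / psf.sum()

psf = gaussian_psf(size=33, pixel_nm=100.0, na=1.2, wavelength_nm=520.0)
```

Tweaking the NA slightly while watching the decon result for a small bead, as Lutz describes, amounts to re-running this with different `na` values.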
In reply to this post by Mark Cannell
Hi,

first of all, thanks for all the sophisticated answers. They help a lot, and I'll definitely try all of George's practical advice. Thanks!

As expected, kind of, I still have some questions remaining...

First, noise removal by decon. The explanation that noise is removed as part of a constraint within the algorithm, based on smoothness, makes sense to me. But is that indeed the only way decon removes noise? I don't want to be picky on that issue, but I'd like to cite from Jim's recent book, p. 468: "Data recorded using Nyquist sampling..., Decon suppresses data corresponding to features smaller than the point spread function...". Would that be due to the constraint, or rather due to the fact that these "smaller" features will most probably have a very high spatial frequency which is not covered by the OTF, and thus be removed almost automatically via decon?

This brings me to the next issue, background removal. Obviously, it's best to have as little noise in the image as possible. However, in live imaging this will hardly be the case. I somehow have the feeling that a prior background removal wouldn't do much good, since I may lose important information for subsequent decon. Any comments on that issue?

Assuming I have obtained a representative PSF for my system, would the Lucy-Richardson algorithm be appropriate for 2D decon?

A final thought on the SNR. It's probably somewhere in J. Pawley's book, but I couldn't find it yet. The basic and perhaps naive question being: how does one define the SNR of a single image? Is this noise related to the noise (variation) of the signal itself, thus the standard deviation of the signal as stated by Mark? Assuming so, one still has to distinguish signal and background (i.e. thresholding, or applying a morphological filter in order to find the cells with the signal).
But isn't the intrinsic variation in the signal's intensity simply related to the amount of fluorophores located at a certain place? Therefore, I wouldn't take this variation into account for an SNR calculation. Isn't the main information one is interested in how well the signal is separated from the background? Put in other words, how well the background is being suppressed? I understand noise as the unwanted background containing all the different types of noise, like imaging a cell without any fluorescence as a negative "noise" control. The idea is still in my head to separate the signal, i.e. by histogram thresholding or by choosing characteristic ROIs, and to divide mean(signal) by mean(background). What's wrong with that approach? Or am I completely on the wrong train of thought here?

Due to the current lack of money for proper commercial software packages: are there people who have successfully done 2D deconvolution with Matlab?

Once again, many thanks for any input!

Cheers,
Steffen

At 00:39 23.08.2007, Mark Cannell wrote: [quoted text snipped]

Steffen Steinert, Dipl.-Ing.
---------------------------
Universität Stuttgart
3. Physikalisches Institut
Pfaffenwaldring 57
70550 Stuttgart
Tel.: 49/711/68565230
---------------------------- |
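Steffen's Lucy-Richardson question can be answered by experiment. Here is a minimal 2D Richardson-Lucy loop in numpy - a sketch in the spirit of MATLAB's deconvlucy, not a drop-in replacement; it assumes the PSF is centered and the same shape as the image:

```python
import numpy as np

def richardson_lucy(data, psf, iters=10):
    """Minimal 2D Richardson-Lucy deconvolution. The multiplicative
    update keeps the estimate non-negative, which is the constraint
    iterative methods are praised for in this thread."""
    data = np.asarray(data, dtype=float)
    otf = np.fft.fft2(np.fft.ifftshift(psf))   # PSF assumed centered
    est = np.full(data.shape, data.mean())     # flat initial estimate
    for _ in range(iters):
        blurred = np.real(np.fft.ifft2(np.fft.fft2(est) * otf))
        ratio = data / np.maximum(blurred, 1e-12)
        est *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
        est = np.maximum(est, 0.0)
    return est

# demo: blur a synthetic point source with a Gaussian PSF, then deconvolve
n = 32
ax = np.arange(n) - n // 2
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))
psf /= psf.sum()
truth = np.zeros((n, n)); truth[n // 2, n // 2] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = richardson_lucy(blurred, psf, iters=20)
```

On this noiseless synthetic point the peak visibly re-sharpens within ten or twenty iterations; with real noisy data, too many iterations bring back exactly the speckle Steffen describes, which is why regularized variants exist.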
In reply to this post by George McNamara
A thought just occurred to me while reading a CNN news article about the planet Mars: Why not send an automated microscope with a robotic lander to look for life there in the soil? Surely more sophisticated instruments have been sent to Mars. Has NASA or any other space agency ever considered this kind of approach? Just curious.
John Oreopoulos, BSc, PhD Candidate University of Toronto Institute For Biomaterials and Biomedical Engineering Centre For Studies in Molecular Imaging Tel: W:416-946-5022 |
> A thought just occurred to me while reading a CNN news article
> about the planet Mars: Why not send an automated microscope with a
> robotic lander to look for life there in the soil?

The Mars rover included a Microscopic Imager (MI) system, specifically for the purpose of soil analysis.

http://www.space.com/missionlaunches/opportunity_threads_040220.html
http://en.wikipedia.org/wiki/Opportunity_rover

Peter |
Yes, and this is about the only way "life" would be detected on the surface: fossil organisms such as those used to describe prokaryotic evolution on early Earth (ca. 4 billion years ago). If "living" life as we know it occurs on the surface of Mars, it would be possible only in regions where liquid water or high humidity are present. This is why most of the excitement in Mars exploration revolves around whether visible geologic formations represent stream beds and whether the polar ice caps (which DO consist of water ice) melt (this has been shown, by the way). Mars may have had surface life at some point, but if so, and if it evolved in a manner similar to that on Earth, conditions on the surface would have become too hostile too rapidly for evolution much beyond simple bacteria (including phototrophic bacteria) or perhaps cyanobacteria. This does not preclude the possibility that subterranean liquid water exists and that simple life might be found there.

>> A thought just occurred to me while reading a CNN news article
>> about the planet Mars: Why not send an automated microscope with a
>> robotic lander to look for life there in the soil?
>
>The Mars rover included a Microscopic Imager (MI) system, specifically
>for the purpose of soil analysis.
>
>Peter

--
Robert J. Palmer Jr., Ph.D.
Natl Inst Dental Craniofacial Res - Natl Insts Health
Oral Infection and Immunity Branch
Bldg 30, Room 310
30 Convent Drive
Bethesda MD 20892
ph 301-594-0025
fax 301-402-0396 |
In reply to this post by John Oreopoulos
My prior job was with a NASA contractor, our project was to put a
full-featured microscope (brightfield, darkfield, phase contrast, DIC,
confocal, laser tweezers, spectrophotometry, interferometry, oil
immersion) onto the space station for (initially) 4 condensed-matter
experiments, and later for biology. Because of safety regulations,
the microscope could not be operated by astronauts- they were to simply
load a sample, close the door, and the machine would do its thing. You
cannot imagine how difficult it is to automate a microscope- the oil
immersion alone was a nightmare. In the end, we were unable to pull
it off.
There are some compact space-based microscopes (I think ESA has a phase contrast scope), but AFAIK landers typically have chemical-based detectors, as they are more sensitive and can be made more robust to the rigors of launch and landing.

Instructor
Department of Physiology and Biophysics
Case Western Reserve University
216-368-6899 (V) 216-368-4223 (F) |
Well, automating oil immersion in zero gravity doesn't really bear thinking about! But automated microscopes (non-immersion) in which you simply 'post' in a slide and look at the images on a screen are available from several manufacturers - wouldn't one of those have been a good starting point? So far as I know none are confocal, but implementing confocal and spectrophotometry would not be hard - laser tweezers might be more tricky. DIC, phase, darkfield etc. would be trivial.
Guy

From: Confocal Microscopy List on behalf of Andrew Resnick
Sent: Fri 24/08/2007 11:29 PM
To: [hidden email]
Subject: Re: A microscope on Mars? |
I don't want to bore everyone on this list, so I'll take this
offline.
Andy

Instructor
Department of Physiology and Biophysics
Case Western Reserve University
216-368-6899 (V) 216-368-4223 (F) |
Please don't. It may not be what we think about every day but IMHO it's a very interesting thread.
Evelyn Ralston, Ph.D.
Head, Light Imaging Section, Office of Science and Technology, NIAMS, NIH
Rm 1535, Bldg 50
Bethesda MD 20892-8023
tel 301-496-6164; FAX 301-402-2724

On Aug 24, 2007, at 11:54 AM, Andrew Resnick wrote:
> I don't want to bore everyone on this list, so I'll take this offline. |
In reply to this post by Steffen Steinert
> From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Steffen Steinert
> ...
> First, noise removal by Decon. The explanation that noise is being
> removed as part of a constraint within the algorithm based on smoothness
> makes sense to me. But is that indeed the only way used in Decon to
> remove noise? I don't want to be picky on that issue, but I'd like to
> cite from Jim's recent book, p. 468: "Data recorded using Nyquist
> sampling..., Decon suppresses data corresponding to features smaller
> than the point spread function...". Would that be due to the constraint,
> or rather due to the fact that these "smaller" features will most
> probably have a very high spatial frequency which is not covered by the
> OTF, and thus be removed almost automatically via Decon?

This is indeed a major way in which deconvolution reduces high-frequency noise. Some background: the PSF of an optical microscope expresses how the scope converts an imaged object into a signal. In 2D, the corresponding transfer function falls from 1 (at spatial frequency 0, the average value of the image) to 0 at the highest frequency the aperture will accept, and it has no zeros until that cutoff. 3D is rather more complex, and has lots of zeros in the transfer function.

The simplest deconvolution is a single-pass Wiener filter, intended to handle both zeros in the transfer function _and_ corrupting noise. With object O, signal D, PSF P, and Wiener constant w (usually set to 1/(S/N)):

  D = O * P, where '*' is a convolution
  O = D * P^-1

but the direct inverse doesn't work in an ill-conditioned problem; 1/0.0 goes to infinity, and noise blows up.
Since the inverse of the PSF can have zeros, and there is noise present even where there is no signal, the Wiener inverse is:

  O ~= D * [ P / (P^2 + w) ]

This approximates P^-1 where P is reasonable, and is limited to 1/w at maximum, which caps the overamplification of noise. It is a theoretically optimal single-pass filter for a low-value transfer function in the presence of noise. In addition, in every implementation I am aware of, anything in the data D beyond the highest frequency the microscope will transmit is simply set to 0, removing extremely high-frequency noise entirely. Most implementations use a roll-off filter to avoid sharp frequency cuts and the associated ringing in the result.

> This brings me to the next issue, background removal. Obviously, it's
> best having as little noise within the image as possible. However, in
> live-imaging this will hardly be the case. I somehow have the feeling
> that a prior bg-removal wouldn't do much good, since I may lose
> important information for subsequent Decon. Any comments on that issue?

Absolutely right. The data represent everything on the slide as blurred by the PSF; if you change the raw data, it no longer really represents "O * P", and deconvolution will converge to something other than your object and associated backgrounds. I would suggest deconvolving fluorescent background images as well, and using those to find background levels after deconvolving your sample. So: Decon(D) - Decon(Background image)

> Assuming I have obtained a representative PSF for my system, would the
> Lucy-Richardson algorithm be appropriate for 2D-Decon?

I believe the Lucy-Richardson algorithm makes an initial assumption of Poisson noise and (depending on the implementation) point sources. This works really well for astronomical images, but seems to give poorer results for complex fluorescent structures.

Kevin Ryan
Media Cybernetics, Inc.
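To make the single-pass Wiener inverse above concrete, here is a minimal numpy sketch (the function name, the Gaussian test PSF, and the value of w are illustrative assumptions on my part, not any vendor's implementation; it assumes the PSF array is the same size as the image, with its peak at the centre):

```python
import numpy as np

def wiener_deconvolve(data, psf, w=1e-3):
    """Single-pass Wiener inverse, O ~= D * [P / (P^2 + w)], applied in
    frequency space, where the convolution D = O * P becomes a product.
    `w` plays the role of the Wiener constant (roughly 1/(S/N))."""
    # Move the PSF peak to the array origin, then transform it: the OTF.
    P = np.fft.fft2(np.fft.ifftshift(psf))
    D = np.fft.fft2(data)
    # The filter approximates 1/P where |P| is large, but stays bounded
    # where |P| -> 0, so noise at weak frequencies is not blown up.
    filt = np.conj(P) / (np.abs(P) ** 2 + w)
    return np.real(np.fft.ifft2(D * filt))
```

With noisy data, w should be raised toward 1/(S/N), and, as noted above, real implementations also zero everything beyond the OTF cutoff with a roll-off filter to avoid ringing.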
In reply to this post by Evelyn Ralston
I agree, this is an excellent thread…

Alice Rodriguez, Ph.D.
Duke University
Rm 4331 French Family Science Center, Science Drive
Dept of Biology/DCMB, Box 90338
Durham NC 27708
Mobile: 919 451-4682

From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Evelyn Ralston
> Please don't. It may not be what we think about every day but IMHO it's
> a very interesting thread. [...]
Ok, here goes. I apologize in advance for the length; it's just a
sign of how committed I was to the project.
Let me first say that this project (the Light Microscopy Module, LMM) was an incredible experience for me, and I learned an amazing amount of stuff from some truly world-class engineers and technicians. In my opinion, the failure of this project was due to the fear of NASA and contractor managers to make timely and unpopular decisions. At its peak, the LMM contractor team numbered about 50; there were another 10 NASA technical folks, and the Principal Investigator (PI) teams added another 5 or 10 post-docs and grad students.

Here's some background: around 1998, NASA approved a group of experiments involving colloids and fluids for spaceflight on the International Space Station. The first step in the process is for NASA to write a set of "requirements" for each experiment. A "requirement" is some capability that the flight instrument must have in order to produce valid and useful scientific results. For example, a requirement may be "obtain 30 images per second". How these requirements come about is an interesting story in itself, but in the end, NASA provides a list of requirements as part of a contract bid- contractors look at the requirements list and come up with a proposal to build *something that meets the requirements* at a certain cost. NASA then awards the contract to whomever it chooses.

I was hired by the contractor in late 1999, when a contract was being generated to fly a microscope. That is to say, the contractor management convinced NASA management that a microscope was *something that met the requirements* for 4 flight experiments. Three of the experiments involved imaging and manipulating colloidal emulsions, while the fourth was a fluid heat transfer experiment, and the need was to image the wetting boundary. There were approximately 300 pages (single spaced) of requirements. At the time, none of us knew anything about microscopes specifically. To summarize the early planning stage, what was needed was a completely automated, tricked-out microscope.
Not motorized, but *automated*. Once we invented this particular instrument, quite a bit of excitement was generated throughout NASA, for obvious reasons. It's cool, for one thing. Really sci-fi. The project rapidly blew up, attracting the NASA bio community as well. Lots of color drawings were made and distributed around. Websites were created.

At the time, we knew of no imaging system that would *automatically* focus on an unknown target within any sort of reasonable timeframe and maintain focus as the sample is moved around. We knew of no microscope system that would optimize (and automatically adjust) DIC shear for contrast. We knew of no microscope system that would automatically align the phase rings in a phase contrast system. We knew of no imaging system that would automatically establish Köhler illumination. I still am not aware of any microscope that will do any of this. If I am wrong, I would certainly like to know about it.

When we did the project, we had to motorize the condenser turret and components, the epi-DIC prism, polarizer, and waveplate, the tube lens turret and Bertrand lens, the viewing prisms, all kinds of stuff. And any existing motors that did not meet NASA specs had to be removed and replaced, the gears de-greased and re-greased with NASA-approved grease, the chassis de-painted, bolts changed, wiring re-done; integrated circuits had to be replaced with radiation-hardened components. I've disassembled and reassembled irises, stage slider bearings, and entire microscopes. Irises are the worst, by far.

We obtained from the manufacturer complete blueprints for the microscope and lenses. Complete lens specifications and drawings. Let me tell you- those high-NA immersion apochromats are absolute marvels.

Here's how I would characterize the design phase: Imagine you work for GM, and GM wants a spiffy new car. You head the team that is in charge of the engine.
LMM is the engine- the whole car is a larger structure that supplies the LMM with electricity and coolant, saves data, etc. So you are to design an engine to certain specifications- torque, horsepower, compression ratio, whatever. Problem: you don't know how much room you have under the hood. You don't know where the transmission or exhaust will connect. You don't know where the electrical connections are. You don't know anything except the engine requirements. Your first step may be to decide on the piston arrangement- how many? inline? V? rotary? In any case, off you go. And as problems arise, you work to solve them. There will be meetings with the electrical team to decide where the alternator belt will go. Meetings with the fuel supply group to determine how much gas is needed. This goes along reasonably well until someone claims that the car will not meet "car specifications"- maybe not enough miles per gallon. Whose fault is that- the transmission team or the engine team? Maybe it's the body design- too much drag or weight? Design teams now compete to show the other team is at fault. Money and time are wasted trying to meet unrealistic and arbitrary requirements.

Now we come to the safety concerns- astronauts are rare and delicate flowers, after all- touch temperatures (no convection to draw off heat) have to be below a certain value, the equipment must withstand "kick loads"- astronauts may kick off the microscope to propel themselves (!), anything breakable (including the samples and arc bulbs) has to be enclosed 3x... chemicals, etc. There were "leak paths" that had to be plugged- the objective turret head was re-designed, we thought about enclosing the immersion objectives in latex at some point; it's insane what has to be done in the name of 'safety'.
There was a 200+ page document with a list of safety concerns: for example, we calculated at most 2000 ml of immersion oil was required over the life of the 3 colloid experiments- so one question was "what happens if all 2000 ml is dispensed at once?" Each concern required at least 3 pages of reply- a plan, a backup plan, a backup to the backup, and then a test, validation and verification procedure to ensure that one of the three plans would work.

Then the microscope is launched (up to 9 g's) in pieces and assembled on-orbit, without gravity loading things or keeping them in place. This, after sitting in Florida in a non-environmentally controlled crate for several months. And the space station is incredibly (mechanically) noisy. And there are huge thermal swings, some from the microscope and light sources- and the fans we needed to move the heat off add to the vibrational environment. This is why the microscope needed to be able to align the optical elements.

Remember, there is no user interface- it's all done by software. We had no access to "diagnostic" images to determine if there is a problem. There was no astronaut interface- no eyepiece, no computer screen to view what was being acquired. Again, imagine writing down instructions on how to set up a microscope from the box, giving the instructions to an undergraduate who has never done that before, and having the expectation that everything will work perfectly the first time (because you only get one chance to run the experiment- once the experiment is over, all you get is a stack of data tapes with all your images when the shuttle is able to bring them down). In reality, it's even worse than that- you don't get to write the instructions; someone unfamiliar with how microscopes work writes down the instructions, occasionally asking for your input.

Confocal was (surprisingly) the easy one- we used a Yokogawa spinning disk- incredibly rugged, and the issue of "finding focus" is eliminated.
Confocal imaging would have met the overwhelming majority of imaging requirements for the colloid experiments.

Oil immersion is do-able, actually- the oil preferentially clings to the glass and objective, and we found some non-wetting coatings we could apply to control where the oil went. The trick is dispensing the oil without bubbles. We proved this out by flying a microscope on the KC-135 "vomit comet". That was awesome.

And the data- we estimated 1 TB of information was going to be generated *per day* (the requirements were 30 frames/sec, 1k x 1k 12-bit images) for 12 months. Where does the data go? Downlinking provided about 5 MB at best per day, IIRC.

Here's an example of what is right about NASA technical folks- one of the PIs wanted to fly about 1500 samples. How on earth could we accommodate this? The solution was wafer-scale integration of sample cells- a glass disk 1 mm thick, just like a huge microscope slide, was bonded to a silicon wafer, which was then ground down to 100 microns thickness, so we could image the whole thickness. The silicon was etched in the pattern we needed: rows and columns of holes shaped like '=0=', each hole/cell holding about 50 microliters of colloid. Each hole got a small magnetic stir bar, a fraction of a mm long, so each sample could be freshly mixed prior to imaging. Then, a glass plate was bonded to the top and ground down to 150 microns thick: a #1 1/2 coverslip. The silicon provided an excellent reflective surface- i.e. something to focus on- and we could pack in about 300 cells per sample tray. We knew the exact position of everything, and the exact thickness of everything, so we solved quite a few problems in one fell swoop. Of course, there is then the problem of filling and sealing 1500 50-microliter cells (each with a different composition- and the composition had to be verified somehow) that have to maintain their integrity for months, but we put that back onto the PI.
Anything that goes up in space is using 10-year-old technology at best. For gears and fans, that's not a big deal. But it is for cameras. And data interlinks. And software. Again, going back to the car analogy, once the piston layout is designed, a lot of things are constrained. And as the project moves along, it's harder and harder to change the piston layout. Once a particular microscope was selected, we could not go back and even update to a newer model- we bought 5 sets of everything to ensure we had spare parts.

In the end, the NASA expectations (as sold to the PIs) were totally unrealistic, and the budget was likewise ridiculous. NASA management honestly believed that we could simply buy what we needed, that there were no problems to be solved. There was no money for testing and evaluation, among other things. I really thought, until I left 5 years later, that we could deliver something that would meet about 80% of the requirements. Unfortunately, management's only input was to periodically yell "failure is not an option!" and so we spent 80% of our time on the least important 20%.

At 10:42 AM 8/24/2007, you wrote:
> [...]

Instructor
Department of Physiology and Biophysics
Case Western Reserve University
216-368-6899 (V) 216-368-4223 (F)
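As a quick sanity check on the 1 TB/day estimate above (assuming uncompressed 12-bit pixels stored as 1.5 bytes each and continuous acquisition, both assumptions on my part):

```python
# Raw data rate implied by the stated requirements:
# 30 frames/s of 1k x 1k pixels at 12 bits (1.5 bytes) per pixel.
frames_per_s = 30
pixels = 1024 * 1024
bytes_per_pixel = 1.5  # 12-bit, assuming packed storage

rate_bytes_per_s = frames_per_s * pixels * bytes_per_pixel
tb_per_day = rate_bytes_per_s * 86_400 / 1e12  # decimal TB

print(f"{rate_bytes_per_s / 1e6:.1f} MB/s, {tb_per_day:.1f} TB/day")
# -> 47.2 MB/s, 4.1 TB/day at 100% duty cycle
```

So the ~1 TB/day figure presumably assumed a duty cycle well below 100%; either way it dwarfs a ~5 MB/day downlink.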
This is a great story. Thanks, Andrew!

Andrew Resnick wrote:
> Ok, here goes. I apologize in advance for the length; it's just a sign
> of how committed I was to the project. [...]
In reply to this post by Andrew Resnick
This is all quite interesting from the standpoint of what NASA
(and DOE and DOD) can dream up as tasks, and how they and the
associated contractors approach them. This particular example
looks like a materials science track (?). The thread originally
dealt with a biological application. I heard a talk from a
person at Carnegie on his experience with designing an automated
epifluorescence microscope intended for rover deployment. This
thing was used to photograph lichens in the Atacama desert.
There are no lichens (or fungi or algae) on the surface of Mars -
MAYBE some fossilized bacteria or, if one is really really lucky, some
bacterial spores (viability undetermined) near the poles. So,
the only way to hunt for these things is to do geological thin
sections that can be examined using high resolution light microscopy
(EM doesn't work, but Raman has!), then have an AI system (or
real-time interpretation by a living scientist) to recognize things as
bacteria - things that are not terribly different than abundant
abiological geological "artifacts".
I would not be surprised to find out that a system like this has
at least been proposed to NASA, if not gone to feasibility tests.
Anyway, that's the view from a professional pessimist......
PS - for those interested, a long-standing research route at NASA
and ESA has been to look for bacteria in space (collection of
particles and subsequent analysis/growth experiments).
At 10:42 AM 8/24/2007, you wrote:
> Ok, here goes. I apologize in advance for the length; it's just a sign
> of how committed I was to the project. [...]
> Andrew Resnick, Ph.D.

--
Robert J. Palmer Jr., Ph.D.
Natl Inst Dental Craniofacial Res - Natl Insts Health Oral Infection and Immunity Branch Bldg 30, Room 310 30 Convent Drive Bethesda MD 20892 ph 301-594-0025 fax 301-402-0396 |
This is really fascinating,
and it's a crying shame that the project didn't get off the
ground. It's also a classic example
of how NASA is brilliant in spending huge sums
of money just to stuff things up. (At
the time of writing this I don't know whether the
latest Shuttle crew has safely returned to
Earth or not, knowing that their heat shield
tiles have been holed by foam from the fuel
tanks) In terms of the design features, widefield autofocus has been
available for years but
I can't vouch for the situation back in 1999. I would have
thought that for such a
tightly specified sample, Koehler illumination could have been
preset, and likewise
I'd have thought that a suitably rugged phase contrast condenser
would not require
re-aligning in orbit. But 2 litres of immersion oil
still worries me!
Andrew, this is a most extraordinarily fascinating story, and I do
hope that even
if the microscope didn't fly some useful land-based improvements to
automated
microscopy flowed on from it.
Guy
From: Confocal Microscopy List on behalf of Robert J. Palmer Jr. Sent: Sat 25/08/2007 4:54 AM To: [hidden email] Subject: Re: A microscope on Mars? Search the CONFOCAL archive at
http://listserv.acsu.buffalo.edu/cgi-bin/wa?S1=confocal
This is all quite interesting from the standpoint of what NASA (and DOE and
DOD) can dream up as tasks, and how they and the associated contractors approach
them. This particular example looks like a materials science track
(?). The thread originally dealt with a biological application. I
heard a talk from a person at Carnegie on his experience with designing an
automated epifluorescence microscope intended for rover deployment. This
thing was used to photograph lichens in the Atacama desert. There are no
lichens (or fungi or algae) on the surface of Mars - MAYBE some fossilized
bacteria or, if one is really really lucky, some bacterial spores (viability
undetermined) near the poles. So, the only way to hunt for these things is
to do geological thin sections that can be examined using high resolution light
microscopy (EM doesn't work, but Raman has!), then have an AI system (or
real-time interpretation by a living scientist) recognize things as bacteria
- things that are not terribly different than abundant abiological geological
"artifacts".
I would not be surprised to find out that a system like this has at least
been proposed to NASA, if not taken to feasibility tests. Anyway, that's
the view from a professional pessimist......
PS - for those interested, a long-standing research route at NASA and ESA
has been to look for bacteria in space (collection of particles and subsequent
analysis/growth experiments).
Ok, here goes. I apologize in advance for the length; it's just a sign of how committed I was to the project.
At 10:42 AM 8/24/2007, you wrote:

Andrew Resnick, Ph.D.
--
Robert J. Palmer Jr., Ph.D.
Natl Inst Dental Craniofacial Res - Natl Insts Health
Oral Infection and Immunity Branch
Bldg 30, Room 310
30 Convent Drive
Bethesda MD 20892
ph 301-594-0025
fax 301-402-0396
In reply to this post by Steffen Steinert
In case some other people fancy trying out the free decon on powermicroscope.com, the site will probably be back online at the end of September. For further details please refer to Alberto's email below...

Cheers,
Steffen

Dear George, dear Steffen,

Unfortunately my e-mail is rejected from the list when I send it from home. I would be grateful if you could pass this message on to the microscopy community. Due to hackers' attacks, our university asked us to stop the service. As well, when we tried to restore everything, the hard disk containing all the most recent code was stolen from our department. Now we are rebuilding everything, and I hope that by the end of September www.powermicroscope.com will be running again. We had approx. 250 users who used our deconvolution system for free, and we hope to be back soon.

Thank you,
Alby

On 28 Aug 2007, at 12:22, George McNamara wrote:

>Hi Steffen,
>
>I get the same thing when clicking the www.powermicroscope.com
>Registration page - just the powermicroscope logo at top, a row of
>buttons of which only Registration works (and brings back the same
>screen), a small empty area, and then a copyright notice.
>
>I am CC:ing Alberto Diaspro, the originator of the site. Maybe he
>can tell us how to register for power up your microscope.
>
>George

Steffen Steinert, Dipl.-Ing.
---------------------------
Universität Stuttgart
3. Physikalisches Institut
Pfaffenwaldring 57
70550 Stuttgart
Tel.: 49/711/68565230
----------------------------
In reply to this post by Guy Cox
As far as autofocus capabilities go, there are some manufacturers that claim
to have it- Nikon's setup for TIRF also comes to mind. I've been on
a personal mission at trade shows to (gently) berate the manufacturers to
come up with a solution for focus drift due to thermal fluctuations- the
hardware exists, the missing link is a decent software feedback
loop. None have expressed interest, the typical excuse being that
thermal fluctuations have nothing to do with the optical performance of
the microscope and thus are not their problem.
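For what it's worth, the software side of such a feedback loop need not be complicated: compute a sharpness metric on each frame and nudge the focus to climb it. Here is a minimal hill-climbing sketch in Python (NumPy assumed; `get_frame` and `move_z` are hypothetical stand-ins for whatever camera and stage drivers one actually has):

```python
import numpy as np

def sharpness(img):
    # Normalized variance: a common, illumination-robust focus metric.
    m = img.mean()
    return img.var() / m if m > 0 else 0.0

def autofocus_step(get_frame, move_z, z, dz=0.1):
    """One iteration of a hill-climbing focus loop.

    get_frame() returns a 2D numpy array; move_z(z) sets the focus
    position (e.g. in microns). Returns the new focus position.
    Call this periodically in the background to track thermal drift.
    """
    scores = {}
    for cand in (z - dz, z, z + dz):
        move_z(cand)
        scores[cand] = sharpness(get_frame())
    best = max(scores, key=scores.get)  # keep the sharpest of the three
    move_z(best)
    return best
```

In practice one would bound the search range and add some hysteresis so that a transient bright object or a dim frame doesn't drag the focus away, but the point stands: the control logic is a few lines once the hardware hooks exist.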
Another capability I lobby for is the ability to acquire fast z-stacks - the camera should be the rate-limiting component. Again, it's primarily a software issue, and it's seen as a niche application.

As for pre-setting the alignment of the optics, my point wasn't that it is/was not possible; my point was that we didn't have the time to solve that particular issue in addition to the thousands of other issues. Like any large project, what was needed was to recognize that not every problem could be solved, and to prioritize the issues. That wasn't done, so no problem got solved.

Andy

Instructor
Department of Physiology and Biophysics
Case Western Reserve University
216-368-6899 (V) 216-368-4223 (F)