George McNamara |
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

Dear confocal listserv,

While training a new user on our Zeiss LSM710 confocal microscope, I tried explaining the differences between Kalman filtering, arithmetic averaging, and my suggestion (see the archive for posts) of using the median for each pixel (assuming no photobleaching). The main point is that when most of the noise comes from the PMT, the probability of the PMT producing a high value in 3 of 5 time points for a given pixel would be low (unless the PMT high voltage was cranked really high).

It occurred to me that the Zeiss ZEN software includes median filtering. I acquired a 5-timepoint series of a single plane, went to ZEN (2010B SP1), selected the median filter with X=1, Y=1, Z=5, selected the image (since Zeiss is too stupid to figure out that the current image is probably what the user wants to filter - especially if it is the only open image), clicked apply, then after the result image series was created, changed to plane 3. The result was: nice (planes 1, 2, 4 and 5 were not - this also shows that Zeiss does not know Z from time).

This particular user's specimen autofluorescence and/or non-specific antibodies were a more significant factor in their experiment than the PMT gain, but I suggest median filtering will be useful in some situations. I encourage confocal vendors to make this an option (EM-CCD data might also benefit from something like it: a median over an odd number of consecutive frames). Certainly more sensible in 2013 than Kalman filtering.

Enjoy,

George |
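George's per-pixel temporal median is easy to reproduce outside the vendor software; a minimal NumPy sketch, with a synthetic image stack standing in for the five repeated scans (not anything from ZEN):

```python
import numpy as np

# Five repeated scans of one plane: shape (time, y, x).
# A PMT noise event hits one time point of one pixel; the per-pixel
# median over time ignores it, where a mean would be dragged upward.
rng = np.random.default_rng(0)
scans = rng.poisson(lam=40.0, size=(5, 64, 64)).astype(float)
scans[2, 10, 10] = 3800.0            # inject a single-pixel spike

denoised = np.median(scans, axis=0)  # median along the time axis only

# The spiked pixel stays near the true level of ~40 counts.
print(denoised[10, 10] < 100)        # True
```

The same one-liner applies along an EM-CCD frame axis; the only requirement is an odd number of repeats, so the median is an actual sample value.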
Jeremy Adler-4 |
I dislike median filtering since, by taking only the central value, most of the data is thrown away. Normally Poisson noise is the problem, and the sum of the time series will have a lower uncertainty than the median.

In the instance George cites, of occasional substantial noise from the PMT, a better option is to decide whether the distribution of the five values is likely to include aberrant high value(s) and, if so, take the arithmetic mean of the remaining values. The problem is to decide whether the distribution of the 5 values is consistent with Poisson noise, which is tricky. However, over the whole image there is a population of pixels, and for each mean the likely population distribution could be generated - aberrant values could then be detected with higher probability.

Jeremy Adler
IGP
Rudbeckslaboratoriet
Daghammersköljdsväg 20
751 85 Uppsala
Sweden
0046 (0)18 471 4607 |
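Jeremy's scheme - test the values for consistency with Poisson noise, then average the survivors - might look like the following sketch. The 5-sigma cutoff and the median anchor are illustrative assumptions, not details from the post:

```python
import numpy as np

def robust_mean(values, nsigma=5.0):
    """Mean of the samples after discarding values implausibly far above
    the Poisson expectation (variance ~ mean). The median only anchors
    the test; all surviving samples contribute to the estimate."""
    values = np.asarray(values, dtype=float)
    center = np.median(values)                # robust guess of the mean
    sigma = np.sqrt(max(center, 1.0))         # Poisson: sd = sqrt(mean)
    keep = values <= center + nsigma * sigma  # flag aberrant high values
    return float(values[keep].mean())

print(robust_mean([40, 38, 42, 3800, 37]))  # 39.25 -- spike rejected
print(robust_mean([40, 38, 42, 41, 37]))    # 39.6  -- nothing rejected
```

Unlike a plain median, this estimate remains the ordinary (linear) mean whenever no outlier is present, which is Jeremy's point about keeping all the data.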
Guy Cox |
I agree with the general tenor of this post. But I disagree with the statement that a median filter 'throws away' most of the data. All the pixels within the kernel contribute to the final result - and then the next set contributes to the next pixel, and so on. It is in fact quite a good way of doing just what Jeremy wants - getting rid of outliers. This is not to deny that more complex algorithms might do even better. But it does not throw away data.

Guy

-----Original Message-----
From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Jeremy Adler
Sent: Monday, 25 February 2013 4:09 AM
To: [hidden email]
Subject: Re: median filtering confocal microscope data at the instrument |
Johannes Schindelin |
Dear Guy,

On Mon, 25 Feb 2013, Guy Cox wrote:

> I agree with the general tenor of this post. But I disagree with the
> statement that a median filter 'throws away' most of the data.

While I am not an enemy of the median filter, it has to be stated that it throws away most of the data, even to the point of introducing artifacts.

> All the pixels within the kernel contribute to the final result - and
> then the next set contributes to the next pixel, and so on.

The problem is not so much whether the pixels contribute; the problem with the median filter in particular is that it operates on an ordinal scale, not an interval scale. In other words, the neighboring pixels' contribution is *qualitative*, not *quantitative*.

> It is in fact quite a good way of doing just what Jeremy wants - getting
> rid of outliers.

The median filter is indeed quite a good way to filter out outliers in a quick and robust manner. The problem is that what you *should* want to do after acquiring images is to analyze them *quantitatively*. And since the median filter is a non-linear filter, it is inappropriate to apply it before any quantitative analysis.

> This is not to deny that more complex algorithms might do even
> better. But it does not throw away data.

Sorry to re-iterate: it does throw away data. As with almost every image-processing filter, the information (as can be measured by the entropy) is reduced. The hope, of course, is that the information lost is of no interest (in this particular case, values from outliers), but there is almost always also information lost that one wanted (in this particular case, the linear relation between the original and the processed data).

Having said that, the median filter might still be the most appropriate filter for Jeremy's application - unfortunately.

Ciao,
Johannes |
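Johannes's "ordinal, not interval" point is easy to make concrete: a linear filter L satisfies L(a + b) = L(a) + L(b); the mean does, the median does not (toy numbers, not microscope data):

```python
import numpy as np

a = np.array([1.0, 2.0, 100.0])
b = np.array([100.0, 2.0, 1.0])

# The mean is linear: the mean of the sum equals the sum of the means.
assert np.isclose(np.mean(a + b), np.mean(a) + np.mean(b))

# The median is not: summing first gives a completely different answer.
print(np.median(a) + np.median(b))  # 4.0
print(np.median(a + b))             # 101.0
```

This is exactly why background subtraction, intensity ratios, and other arithmetic done after a median filter no longer map linearly back to photon counts.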
George McNamara |
Hi Johannes,

So if a single pixel scanned five times has values of 40, 38, 42, 3800, 37 (not necessarily acquired in that order), you would prefer the arithmetic mean of 791.4 (not that any of the vendors can give you the 0.4 - and for anyone who is a fan of Kalman, the Kalman value would depend on the order in which the five were acquired) rather than the median of 40, which happens to be the "digital offset" I usually use on the Zeiss LSM710 I manage when operating in 12-bit mode?

Personally, I would like to see the point-scanning confocal microscope (and EM-CCD software) vendors implement the median, and even more methods appropriate to PMT and similarly noisy data, to provide the best possible data to my users and me (and, as of April, my colleagues and me at MDACC, Houston).

Sincerely,

George

On 2/25/2013 2:41 PM, Johannes Schindelin wrote: |
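George's aside about Kalman deserves a demonstration. Vendor "Kalman" averaging behaves like an exponentially weighted running average; the fixed weight of 0.5 below is an assumption for illustration (implementations vary), but the order dependence it exposes is generic:

```python
from statistics import median

def kalman_running(values, weight=0.5):
    """Exponentially weighted running average, seeded with the first
    sample. A stand-in for confocal 'Kalman' averaging; the fixed
    weight is an assumption, not a vendor specification."""
    est = float(values[0])
    for v in values[1:]:
        est = (1.0 - weight) * est + weight * v
    return est

early = [3800, 40, 38, 42, 37]       # spike arrives first
late = [40, 38, 42, 37, 3800]        # same samples, spike arrives last

print(kalman_running(early))         # 273.75   -- spike mostly forgotten
print(kalman_running(late))          # 1919.375 -- spike dominates
print(median(early), median(late))   # 40 40    -- order never matters
```

The median gives the same 40 for any acquisition order; the running average gives a different answer for every ordering of the same five samples.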
Johannes Schindelin |
Hi George,

On Mon, 25 Feb 2013, George McNamara wrote:

> So if a single pixel scanned five times has values of 40, 38, 42, 3800,
> 37 (not necessarily acquired in that order), you would prefer the
> arithmetic mean of 791.4 [...]

Please do not pretend that I said that. When I talked about "linear" filters, I did not imply a simple averaging.

> Personally, I would like to see the point-scanning confocal microscope
> (and EM-CCD software) vendors implement the median, and even more
> methods appropriate to PMT and similarly noisy data [...]

What data would best be recorded depends highly on the application. In the general case, recording the values 40, 38, 42, 3800 and 37 in your above example would be better than recording just "40". But recording more than one value per pixel is often not practical.

To reiterate: the median filter *can* be the optimal filter. You should just not go around and tell everybody that it *is* the optimal filter, because it certainly is not. And in particular when you want to quantify your data after acquisition, it is inappropriate to use the median filter (remember: a filter should not be applied just because the processed image "looks good"; it should only be applied if it helps the analysis).

That is all I said. (I certainly did not claim that you should always take a simple arithmetic mean. I am not that stupid.)

Ciao,
Johannes |
James Pawley |
Hi all,

It seems that we are discussing the best ways of eliminating the effects of what are sometimes called "single-pixel" noise events. Although it is fair to ask "What other kind of noise is there?", the term is often used to refer to pixels with recorded intensity values that are "unreasonably large" and seem to have nothing to do with the presence of dye molecules at a certain location in the specimen. Such values can come from a number of possible sources: cosmic rays pass through the photocathode every few seconds; alpha particles from radioactive elements in the PMT somewhat more often. If the PMT is used at high gain, as is often the case when looking at living specimens and signal levels must be kept low, single-photoelectron dark counts may produce fast pulses from the PMT or EM-CCD that approach the size of those representing signal in a "stained" pixel. As these signals seem obvious artifacts when viewed by eye, it would be convenient if they could be removed automatically.

By definition, filters take things out. That is both their aim and their curse. One can argue for hours about whether or not the resulting data are better. In addition, filters are very fast, although as computers get ever faster and cheaper one would expect this advantage to become less important.

In contrast to filters, deconvolution puts things in. Traditionally, it reimposes the limits known to have been placed on the data by the optics used to obtain it. Single-pixel events are "impossible" because, assuming that Nyquist has been satisfied, the smallest "real" feature in the data should be at least 4 pixels wide (or 12-16 pixels in area, 50-100 voxels in volume), not one pixel.
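Jim's Nyquist argument yields a simple detector: any pixel far above the median of its own 3x3 neighbourhood cannot be a real, PSF-limited feature. A NumPy-only sketch, where the 3x threshold is an illustrative choice rather than anything from the post:

```python
import numpy as np

def tag_single_pixel_events(img, factor=3.0):
    """Mask (for the interior of the image) of pixels implausibly far
    above the median of their 3x3 neighbourhood. With Nyquist-sampled
    data the PSF spans several pixels, so no real feature can rise and
    fall within a single pixel; such spikes must be noise events."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # 3x3 median computed by stacking the nine shifted copies.
    shifted = np.stack([img[dy:h - 2 + dy, dx:w - 2 + dx]
                        for dy in range(3) for dx in range(3)])
    local = np.median(shifted, axis=0)
    return img[1:-1, 1:-1] > factor * np.maximum(local, 1.0)

img = np.full((32, 32), 50.0)
img[7, 9] = 5000.0                     # cosmic-ray-like spike
mask = tag_single_pixel_events(img)
print(int(mask.sum()))                 # 1: only the spike is tagged
```

Once tagged, the offending value can be replaced or averaged down, exactly as described below.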
Because the spatial frequency of a noise-pulse singularity is at least 4x higher than the highest spatial frequency that the optical system is capable of transmitting, the offending value can be tagged and then either replaced or averaged down. Indeed, some EM-CCDs can now be set to detect and remove single-pixel noise based on this recognition.

More generally, however, the most reliable and robust method of obtaining the most accurate pixel-intensity information from a series of sequentially obtained data sets is to deconvolve them in time as well as space. This just means that we put into the process not just the PSF (which sets the limits on possible spatial frequencies) but also our knowledge that real changes in specimen brightness can only occur so fast and no faster. George has postulated a series of intensity measurements from a single pixel. Depending on the time delay between the measurements in this series, we may (or may not) be justified in assuming that no real change in the specimen could justify a sudden, 100x intensity change that is only one scan-time in duration. Again, this allows us to tag outliers and then dispose of them either by replacement or averaging.

So much for what can now be conveniently done using computers. Let us not forget that every effort should also be taken to reduce the number of single-pixel anomalies present in the raw data to begin with. With the PMT, this means keeping the photocathode cool and small and monitoring its no-real-signal output over time (i.e., dark count rate). Store a reference image of a single scan with all the lasers and room lights turned off, and look for changes in its general appearance as the weeks pass. More quantitative measures are also wise. (And while you are at it, compare this zero-light result with one obtained when the level of room illumination is similar to that present when you actually collect data. Stray light is often a more serious problem than we expect.)

When employing an EM-CCD, a similar no-signal image can be used to assess changes in dark count and coupling-induced charge over time. These may slowly drift up over time (months) and are always very sensitive to chip temperature.

I am less familiar with the anomalous single-pixel behavior of sCMOS cameras, but I would guess that, with the exception of hot pixels, they are less common simply because charge amplification is not involved and events associated with the emergence of a single, errant photoelectron cannot be seen above the general read-noise level. As hot pixels tend to recur at the same exact location in the image, they can be automatically identified and averaged out using data from their 4 or 8 nearest neighbours.

In all these cases the most efficient way of removing single-pixel anomalies from your final data is to take all the precautions needed to prevent them from occurring.

Sorry for being so long-winded.

Jim Pawley

--
James and Christine Pawley, 5446 Burley Place (PO Box 2348), Sechelt, BC, Canada, V0N3A0, Phone 604-885-0840, email <[hidden email]> NEW! NEW! AND DIFFERENT Cell (when I remember to turn it on!) 1-604-989-6146 |
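Jim's hot-pixel remedy - identify recurring hot pixels from a no-light reference, then replace each with the median of its 8 nearest neighbours - can be sketched as follows. The mask is supplied directly here as an assumption; in practice it would come from the stored dark reference image he describes:

```python
import numpy as np

def fix_hot_pixels(frame, hot_mask):
    """Replace each flagged pixel with the median of its (up to 8)
    neighbours. hot_mask marks locations that are bright in a
    no-light reference image."""
    out = frame.astype(float).copy()
    h, w = frame.shape
    for y, x in zip(*np.nonzero(hot_mask)):
        neighbours = [frame[yy, xx]
                      for yy in range(max(y - 1, 0), min(y + 2, h))
                      for xx in range(max(x - 1, 0), min(x + 2, w))
                      if (yy, xx) != (y, x)]       # exclude the hot pixel
        out[y, x] = float(np.median(neighbours))
    return out

frame = np.full((5, 5), 40.0)
frame[2, 2] = 4000.0                      # recurring hot pixel
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(fix_hot_pixels(frame, mask)[2, 2])  # 40.0
```

Because hot pixels recur at fixed locations, the mask only needs to be rebuilt occasionally, when the dark reference visibly changes.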
George McNamara |
Hi Jim et al.,

I like your point about being able to operate in time as well as in XY and Z. For that matter, if FLIM detectors become widespread: they operate at relatively high gain, so their noise should have a very different signature than real fluorescence lifetimes. This could - and should - be cleaned up (dead time and other issues could also be modelled and dealt with).

Except for some EM-CCD "hot pixel" noise removal - which sounds to me like a marketing feature - none of these denoising concepts have been implemented by any of the big four laser scanning confocal microscope vendors. I am pretty confident that there are over 1500 Leica SP5's in the field (and a few SP8's, and some unfortunate souls still nursing SP1's and SP2's), and a similar number (or more) of Zeiss LSM 510+700+710+780's. My guess is Nikon + Olympus together add up to around 1500, maybe more. So, a market size of around 5,000 systems.

I know Leica does not have online median (or any other) pixel cleanup in the LAS AF software I have on my SP5's (and the "batch deconvolution" is so awful even Leica's applications scientists have never been able to show me it working). I discovered that Zeiss ZEN could be tricked into doing a median filter on time series data (post acquisition, and with a tedious number of steps). I doubt Nikon or Olympus software has it. Upshot: none of the major vendors has made pixel cleanup a consideration. Come to think of it, I don't know of any of the minor vendors offering cleanup either.

I do want to remind readers that the PMT noise I am referring to is most relevant when specimen autofluorescence is very low - or when the lasers are blocked so you can evaluate the detection side without pesky photons messing up your head.

Enjoy,

George

On 2/26/2013 11:45 AM, James Pawley wrote: |
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

When it comes to software to deal with confocal datasets, there is one that we rely on for filtering in time and space, quantification, and much of image display in a highly flexible, programmable environment: ImageJ.

_________________________________________
Michael Cammer, Assistant Research Scientist
Skirball Institute of Biomolecular Medicine
Lab: (212) 263-3208  Cell: (914) 309-3270

________________________________________
From: Confocal Microscopy List [[hidden email]] on behalf of George McNamara [[hidden email]]
Sent: Tuesday, February 26, 2013 9:06 PM
To: [hidden email]
Subject: Re: median filtering confocal microscope data at the instrument

Hi Jim et al,

I like your point of being able to operate in time as well as in XY and Z. For that matter, if FLIM detectors become widespread, they operate at relatively high gain, so should have a very different noise signature than fluorescence lifetimes. This could - and should - be cleaned up (dead time and other issues could also be modelled and dealt with).

Except for some EMCCD "hot pixel" noise removal - which sounds to me like a marketing feature - none of these denoising concepts have been implemented by any of the big four laser scanning confocal microscope vendors. I am pretty confident that there are over 1500 Leica SP5's in the field (and a few SP8's, and some unfortunate souls still nursing SP1's and SP2's), and a similar number (or more) of Zeiss LSM 510+700+710+780's. My guess is Nikon + Olympus together add up to around 1500, maybe more. So, a market size of around 5,000 systems.

I know Leica does not have online median (or any other) pixel cleanup in the LAS AF software I have on my SP5's (and the "batch deconvolution" is so awful that even Leica's applications scientists have never been able to show me it working). I discovered that Zeiss ZEN could be tricked into doing a median filter on time series data (post acquisition, and a tedious number of steps to do it). I doubt Nikon or Olympus software has it.

Upshot: none of the major vendors has made pixel cleanup a consideration. Come to think of it, I don't know of any of the minor vendors having cleanup either. |
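George's per-pixel median suggestion is easy to illustrate. This is a hedged numpy sketch (not ZEN or LAS AF code), using the 40/38/42/3800/37 single-pixel example from earlier in the thread:

```python
import numpy as np

def temporal_median(frames):
    """Per-pixel median across an odd number of frames (no photobleaching
    assumed). A single PMT spike in one frame cannot win the vote."""
    return np.median(np.stack(frames, axis=0), axis=0)   # shape (t, y, x) -> (y, x)

def temporal_mean(frames):
    """Per-pixel arithmetic mean across frames - what 'averaging' does."""
    return np.stack(frames, axis=0).mean(axis=0)

# One pixel scanned five times, with one PMT noise spike (George's example):
frames = [np.full((2, 2), v, dtype=float) for v in (40, 38, 42, 3800, 37)]
print(temporal_median(frames)[0, 0])   # 40.0  - spike rejected
print(temporal_mean(frames)[0, 0])     # 791.4 - spike dominates
```

The spike has to appear in 3 of 5 time points before it can shift the median, which is exactly the low-probability event George describes.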
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

Dear List,

Could someone tell me what's the difference between opening the pinhole and median filtering?

Thanks.

2013/2/27 Cammer, Michael <[hidden email]>

> When it comes to software to deal with confocal datasets there is one that
> we rely on for filtering in time and space and doing quantification and
> much of image display in a highly flexible programmable environment:
> ImageJ.

--
Best, 赵学林 |
Chris Tully |
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

Xuelin,

A pinhole physically restricts the light coming from the sample at acquisition time, allowing essentially only the central diffraction peak from the focal plane to reach the detector. A median filter is a post-acquisition process that looks at some neighborhood of pixels and assigns the median value of that neighborhood to the center pixel.

Chris Tully, M.S., Image Analysis Expert
t 240.475.9753  f 419.831.0527 | [hidden email]

Sent from my iPhone; please excuse typos.

On Feb 27, 2013, at 4:41 AM, xuelin zhao <[hidden email]> wrote:

> Dear List
>
> Could someone tell me what's the difference between opening the pinhole
> and median filtering?
> thanks

--
Best, 赵学林 |
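Chris's description of a spatial median filter - take some neighborhood of pixels and assign its median to the center pixel - can be sketched directly. A minimal illustration only; in practice one would use scipy.ndimage.median_filter or ImageJ's Median filter:

```python
import numpy as np

def median_filter_2d(img, size=3):
    """Assign each pixel the median of its size x size neighborhood.
    Edges are handled by reflection padding."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out

img = np.array([[10., 10., 10.],
                [10., 250., 10.],    # one "single-pixel" noise event
                [10., 10., 10.]])
print(median_filter_2d(img))         # the 250 is replaced by 10.0
```

Note this operates in XY within one frame, unlike the temporal median discussed earlier in the thread, which operates across repeated scans of the same plane.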
In reply to this post by James Pawley
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

Jim,

OK, we are probably going to come to blows over this. I just trust the buffer of the Pacific Ocean between us. The term 'filter' applied to digital operations is a bit unfortunate. An optical filter removes light according to its specification. A digital, so-called, filter does nothing of the sort. It processes pixels according to the values of other pixels. Deconvolution does EXACTLY the same thing - just with a more sophisticated algorithm. Fundamentally there is no difference. I really wish the term 'filter' had never been used in the digital world.

Guy

-----Original Message-----
From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of James Pawley
Sent: Wednesday, 27 February 2013 3:45 AM
To: [hidden email]
Subject: Re: median filtering confocal microscope data at the instrument

Hi all,

It seems that we are discussing the best ways of eliminating the effects of what are sometimes called "single-pixel" noise events. Although it is fair to ask "What other kind of noise is there?", the term is often used to refer to pixels with recorded intensity values that are "unreasonably large" and seem to have nothing to do with the presence of dye molecules at a certain location in the specimen. Such values can come from a number of possible sources: cosmic rays pass through the photocathode every few seconds; alpha particles from radioactive elements in the PMT somewhat more often. If the PMT is used at high gain, as is often the case when looking at living specimens and signal levels must be kept low, single-photoelectron dark counts may produce fast pulses from the PMT or EM-CCD that approach the size of those representing signal in a "stained" pixel. As these signals seem obvious artifacts when viewed by eye, it would be convenient if they could be removed automatically.

By definition, filters take things out. That is both their aim and their curse. One can argue for hours about whether or not the resulting data are better. In addition, filters are very fast. However, as computers get ever faster and cheaper, one would expect this advantage to become less important.

In contrast to filters, deconvolution puts things in. Traditionally, it reimposes the limits known to have been placed on the data by the optics used to obtain it. Single-pixel events are "impossible" because, assuming that Nyquist has been satisfied, the smallest "real" feature in the data should be at least 4 pixels wide (or 12-16 pixels in area, 50-100 voxels in volume), not one pixel. Because the spatial frequency of a single-pixel noise spike is at least 4x higher than the highest spatial frequency that the optical system is capable of having transmitted, the offending value can be tagged and then either replaced or averaged down. Indeed, some EM-CCDs can now be set to detect and remove single-pixel noise based on this recognition.

More generally, however, the most reliable and robust method of obtaining the most accurate pixel-intensity information from a series of sequentially-obtained data sets is to deconvolve them in time as well as space. This just means that we put into the process not just the PSF (which sets the limits on possible spatial frequencies) but also our knowledge that real changes in specimen brightness can only occur so fast and no faster.

George has postulated a series of intensity measurements from a single pixel. Depending on the time delay between the measurements in this series, we may (or may not) be justified in assuming that no real change in the specimen could justify a sudden, 100x intensity change that is only one scan-time in duration. Again, this allows us to tag outliers and then dispose of them either by replacement or averaging.

So much for what can now conveniently be done using computers.

Let us not forget that every effort should also be taken to reduce the number of single-pixel anomalies present in the raw data to begin with. With the PMT, this means keeping the photocathode cool and small and monitoring its no-real-signal output over time (i.e., dark count rate). Store a reference image of a single scan with all the lasers and room lights turned off, and look for changes in its general appearance as the weeks pass. More quantitative measures are also wise. (And while you are at it, compare this zero-light result with one obtained when the level of room illumination is similar to that present when you actually collect data. Stray light is often a more serious problem than we expect.)

When employing an EM-CCD, a similar no-signal image can be used to assess changes in dark count and clock-induced charge over time. These may slowly drift up over time (months) and are always very sensitive to chip temperature.

I am less familiar with the anomalous single-pixel behavior of sCMOS cameras, but I would guess that, with the exception of hot pixels, such events are less common, simply because charge amplification is not involved and events associated with the emergence of a single, errant photoelectron cannot be seen above the general read-noise level. As hot pixels tend to recur at the same exact location in the image, they can be automatically identified and averaged out using data from their 4 or 8 nearest neighbours.

In all these cases, the most efficient way of removing single-pixel anomalies from your final data is to take all the precautions needed to prevent them from occurring.

Sorry for being so long-winded.
Jim Pawley

--
James and Christine Pawley, 5446 Burley Place (PO Box 2348), Sechelt, BC, Canada, V0N3A0, Phone 604-885-0840, email <[hidden email]> NEW! NEW! AND DIFFERENT Cell (when I remember to turn it on!) 1-604-989-6146 |
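The tag-and-dispose approach Jim describes (and Jeremy's earlier suggestion of taking the mean of the remaining values) might look like the sketch below. The k-sigma Poisson cutoff is my own hypothetical choice for illustration, not anyone's published or vendor algorithm:

```python
import numpy as np

def robust_pixel_mean(samples, k=4.0):
    """Mean of repeated scans of one pixel after rejecting values more
    than k Poisson standard deviations above the median (a hypothetical
    cutoff). For Poisson data, variance == mean, so the median gives a
    cheap, outlier-resistant estimate of both."""
    s = np.asarray(samples, dtype=float)
    center = np.median(s)
    sigma = np.sqrt(max(center, 1.0))     # Poisson: sd = sqrt(mean)
    keep = s <= center + k * sigma        # tag the outliers...
    return s[keep].mean()                 # ...and average the rest

print(robust_pixel_mean([40, 38, 42, 3800, 37]))   # 39.25: spike rejected
```

Unlike a plain median, this keeps 4 of the 5 measurements, so the Poisson uncertainty of the result is nearly as low as a full average - which is the point Jeremy made about the median throwing data away.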
Jeremy Adler-4 |
In reply to this post by George McNamara
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

A detailed look at median filtering is quite surprising. Assume 5 images with an average photon count of 24 (mean intensity 23.5) and Poisson noise:

  standard deviation, average projection: 2.21
  standard deviation, median projection: 2.64

So the median projection has a wider distribution than the average projection, but not too bad, and the mean is almost identical.

The unexpected effect comes when very high values are included (Guy's PMT noise) - simulated by adding 100 to one of the 5 images. The median standard deviation increases to 2.95, a trend you might expect. But more surprisingly, the mean intensity of the median-filtered image goes up from 23.5 to 24.96, because all the outliers are above the mean. This effect does not depend on the magnitude of the simulated PMT noise - an offset of 200 has the same effect.

In this simulation the projected mean is very badly affected by the PMT noise, and the median filter Guy proposes is effective. However, the best result comes from rejecting the values that include PMT noise, which is easy to detect, and taking the mean of the remaining values: the mean is fine, and the SD then widens only from 2.21 to 2.47.

On a wider note, a very good way of showing the levels of image noise is to take pairs of images and display their similarity in a scattergram (J. Microscopy 230(1), 121-133). If anyone can get this very simple and effective way of showing noise into commercial acquisition software I will be very impressed - implementation would not be difficult, and the scattergram is very easy to understand. |
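Jeremy's simulation is easy to reproduce. Exact values depend on the random seed, but the trends he reports - wider median distribution, upward bias of the median when one frame carries added noise, robustness of the median versus the mean - all hold. A sketch assuming 5 frames of Poisson noise with mean 24:

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.poisson(24, size=(5, 512, 512)).astype(float)

mean_proj = frames.mean(axis=0)
med_proj = np.median(frames, axis=0)
# Median projection is noisier than the average projection (~2.7 vs ~2.2):
print(mean_proj.std(), med_proj.std())

# Simulate occasional PMT noise by adding 100 to one whole frame:
noisy = frames.copy()
noisy[0] += 100
print(noisy.mean(axis=0).mean())         # mean projection badly shifted (~44)
print(np.median(noisy, axis=0).mean())   # median biased only slightly upward
```

The slight upward bias of the median arises, as Jeremy says, because the contaminated value is always on the high side: the median of 5 values with one guaranteed-largest outlier is the 3rd-smallest of the remaining 4, which sits above their mean.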
James Pawley |
In reply to this post by Guy Cox-2
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy
*****

>Jim,
>
> OK, we are probably going to come to blows over this. I just trust the
> buffer of the Pacific Ocean between us. The term 'filter' applied to
> digital operations is a bit unfortunate. An optical filter removes light
> according to its specification. A digital, so-called, filter does nothing
> of the sort. It processes pixels according to the values of other pixels.
> Deconvolution does EXACTLY the same thing - just with a more sophisticated
> algorithm. Fundamentally there is no difference. I really wish the term
> 'filter' had never been used in the digital world.
>
> Guy

Well, not quite blows. And I agree that "filtering" and "deconvolution" do have some similarities. But I would like to point out the following:

The rationale for deconvolution is that, to the extent that one can mathematically model the blurring effect of an imaging system as a convolution, one should be able to reduce its blurring effect by deconvolving the raw data. The one assumption is that the array of point emitters in the specimen is blurred by the same PSF to produce the blurred data that we detect. In the case of deconvolving 3D microscope data, the main limitations on this process are image noise (Poisson, as well as others) and the possibility that the PSF is not perfectly known and may not remain constant over the sampled volume.

Therefore, the assumptions that are put into any acceptable spatial deconvolution system should be traceable to verifiable measurements of, for instance, the optical and sampling parameters being used. Deconvolving in time should be based on knowledge of how the fluorescent signal is expected to change with time: the simplest version being that it doesn't change during the acquisition period.
By contrast, the strongest support that I have heard for using, for instance, a particular median filter is that it makes the image somehow look better by suppressing occasional bright pixels. As far as I know, one doesn't even have to input any sampling/PSF data, although the effects of such filters obviously vary with spatial frequency. Therefore, we don't even know the size of the bright pixel in real space. Though the images that result from some filters may resemble those produced by some deconvolution procedures, I feel that the former inspire less confidence than the latter.

As a compromise, I have suggested in the past that, as the process by which the microscope "convolves" structural data is a 3D process, we restrict the use of the term deconvolution to 3D data sets, while procedures that are applied to only a single plane of data (at any one time) be called filters.

Cheers,

Jim Pawley

--
James and Christine Pawley, 5446 Burley Place (PO Box 2348), Sechelt, BC, Canada, V0N3A0, Phone 604-885-0840, email <[hidden email]> NEW! NEW! AND DIFFERENT Cell (when I remember to turn it on!) 1-604-989-6146 |
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy ***** Most digital filters (not median filters) are convolutions. Many deconvolution systems are filters (for example Wiener filters). Confocal and 4-pi images can be effectively deconvolved by an inverse filter. There is no basis for drawing a line here, it is a continuum. Nor can I support your idea that filters should only be applied on a 2D basis - a long time ago I showed that filters should be applied in as many dimensions as the dataset possessed. G.C. Cox and Colin Sheppard, 1999 Appropriate Image Processing for Confocal Microscopy. In: P.C. Cheng, P P Hwang, J L. Wu, G Wang & H Kim (eds) Focus on Multidimensional Microscopy. World Scientific Publishing, Singapore, New Jersey, London & Hong Kong. Volume 2, pp 42-54 ISBN 981-02-3992-0. Applying filters plane by plane gave hugely worse results. Also, this paper showed that sampling slightly above Nyquist (3 pixels per resel) and then median filtering with a minimal (circular, or face contact only) kernel reduced noise very effectively while not impacting at all on resolution. So it cannot be throwing away information. Isn't that what we need? Guy >Jim, > > OK, we are probably going to come to blows over this. I just >trust the buffer of the Pacific Ocean between us. The term 'filter' >applied to digital operations is a bit unfortunate. An optical filter >removes light according to its specification. A digital, so called, >filter does nothing of the sort. It processes pixels according to the >values of other pixels. Deconvolution does EXACTLY the same thing - >just with a more sophisticated algorithm. >Fundamentally there is no difference. I really wish the term 'filter' >had never been used in the digital world. > > Guy Well, not quite blows. And I agree that "filtering" and "deconvolution" do have some similarities. 
But I would like to point out the following: The rationale for deconvolution is that, to the extent that one can mathematically model the blurring effect of an imaging system as a convolution, one should be able to reduce its blurring effect by deconvolving the raw data. The one assumption is that the array of point emitters in the specimen are blurred the the same PSF to produce the blurred data that we detect. In the case of deconvolving 3D microscope data, the main limitations on this process are image noise (Poisson, as well as others) and the possibility that the PSF is not perfectly known and may not remain constant over the sampled volume. Therefore, the assumptions that are put into any acceptable spatial deconvolution system should be traceable to verifiable measurements of, for instance, the optical and sampling parameters being used. Deconvolving in time should be based on knowledge of how the fluorescent signal is expected to change with time: the simplest version being that it doesn't change during the acquisition period. By contrast, the strongest support that I have heard for using, for instance, a particular median filter is that it makes the image somehow look better by suppressing occasional bright pixels. As far as I know, one doesn't even have to input any sampling/PSF data although the effects of such filters obviously vary with spatial frequency. Therefore, we don't even know the size of the bright pixel in real space. Though the images that result from some filters may resemble those produced by some deconvolution procedures, I feel that the former inspires less confidence than the latter. As a compromise I have suggested in the past that, as the process by which the microscope "convolves" structural data is a 3D process, we restrict the use of the term deconvolution to 3D data sets while procedures that are applied to only a single plane of data (at any one time) be called filters. 
Cheers, Jim Pawley >-----Original Message----- >From: Confocal Microscopy List >[mailto:[hidden email]] On Behalf Of James Pawley >Sent: Wednesday, 27 February 2013 3:45 AM >To: [hidden email] >Subject: Re: median filtering confocal microscope data at the >instrument > >***** >To join, leave or search the confocal microscopy listserv, go to: >http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy >***** > >Hi all, > >It seems that we are discussing the best ways of eliminating the >effects of what are sometimes called "single-pixel" noise events. >Although it is fair to ask "What other kind of noise is there?", the >term is often used to refer to pixels with recorded intensity values >that are "unreasonably large" and seem to have nothing to do with the >presence of dye molecules at a certain location in the specimen. Such >values can come from a number of possible sources: cosmic rays pass >through the photocathode every few seconds; alpha particles from >radioactive elements in the PMT somewhat more often. If the PMT is used >at high gain, as is often the case when looking at living specimens and >signal levels must be kept low, single-photoelectron dark counts may >produce fast pulses from the PMT or EM-CCD that approach the size of >those representing signal in a "stained" pixel. >As these signals seem obvious artifacts when viewed by eye, it would be >convenient if they could be removed automatically. > >By definition, filters take things out. That is both their aim and >their curse. One can argue for hours on whether or not the resulting >data is better or not. In addition, filters are very fast. However, as >computers get ever faster and cheaper one would expect that this >advantage would become less important. > >In contrast to filters, deconvolution puts things in. Traditionally, it >reimposes the limits known to have been placed on the data by the >optics used to obtain it. 
Single-pixel events are "impossible" >because, assuming that Nyquist has been satisfied, the smallest "real" >feature in the data should be at least 4 pixels wide (or 12-16 pixels >in area, 50-100 voxels in volume), not one pixel. > >Because the spatial frequency of a noise pulse singularity is at least >4x higher than that of the highest spatial frequency that the optical >system is capable of having transmitted, the offending value can be >tagged and then either replaced or averaged down. Indeed, some EM-CCDs >can now be set to detect and remove single-pixel noise based on this >recognition. > >More generally however, the most reliable and robust method of >obtaining the most accurate pixel-intensity information from a series >of sequentially-obtained data sets is to deconvolve them in time as >well as space. This just means that we put into the process not just >the PSF (which sets the limits on possible spatial frequencies) but >also our knowledge that real changes in specimen brightness can only >occur so fast and not any faster. > >George has postulated a series of intensity measurements from a single >pixel. Depending on the time delay between the measurements in this >series, we may (or may not) be justified in assuming that no real >change in the specimen could justify a sudden, 100x intensity change >that is only one scan-time in duration. Again, this allows us to tag >outliers and then dispose of them either by replacement or averaging. > >So much for what can now be conveniently done using computers. > >Let us not forget that every effort should also be taken to reduce the >number of single-pixel anomalies present in the raw data to begin with. >With the PMT, this means keeping the photocathode cool and small and >monitoring its no-real-signal output over time (i.e., dark count rate). >Store a reference image of a single scan with all the lasers and room >lights turned off, and look for changes in its general appearance as >the weeks pass. 
More quantitative measures are also wise. (And while >you are at it, compare this zero-light result with one obtained when >the level of room illumination present is similar to that which you use >when actually collecting data. Stray light is often a more serious >problem than we expect.) > >When employing an EM-CCD, a similar no-signal image can be used to >assess changes in dark-count and coupling-induced charge over time. >These may slowly drift up over time (months) and are always very >sensitive to chip temperature. > >I am less familiar with the anomalous, single-pixel behavior of sCMOS >cameras, but I would guess that, with the exception of hot pixels, they >are less common simply because charge amplification is not involved and >events associated with the emergence of a single, errant photoelectron >cannot be seen above the general read-noise level. As hot pixels tend >to recur at exactly the same location in the image, they can be >automatically identified and averaged out using data from their 4 or 8 >nearest neighbours. > >In all these cases the most efficient way of removing single-pixel >anomalies from your final data is to take all the precautions needed to >prevent them from occurring. > >Sorry for being so long-winded. > >Jim Pawley > > >>***** >>To join, leave or search the confocal microscopy listserv, go to: >>http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy >>***** >> >>Hi George, >> >>On Mon, 25 Feb 2013, George McNamara wrote: >> >>> So if a single pixel scanned five times has values of 40, 38, 42, 3800, >>> 37 (not necessarily acquired in that order), you would prefer the >>> arithmetic mean 791.4 (not that any of the vendors can give you the 0.4, >>> and for anyone who is a fan of Kalman, the Kalman value would depend on >>> which order the five were acquired in), rather than 40 (which is the >>> "digital offset" I usually use on the Zeiss LSM710 I manage when >>> operating in 12-bit mode).
>> >>Please do not pretend that I said that. When I talked about "linear" >>filters, I did not imply a simple averaging. >> >>> Personally, I would like to see the point scanning confocal >>> microscope (and EMCCD software) vendors implement median and other >>> methods appropriate for PMT and similarly noisy data, to provide the >>> best possible data to my users and me (and as of April: my colleagues >>> and me at MDACC, Houston). >> >>What data would be best recorded depends highly on the application. In >>the general case, recording the values 40, 38, 42, 3800 and 37 in your >>above example would be better than recording just "40". But recording >>more than one value per pixel is often not practical. >> >>To reiterate: The Median filter *can* be the optimal filter. You >>should just not go around and tell everybody that it *is* the optimal >>filter, because it certainly is not. And in particular, when you want >>to quantify your data after acquisition, it is inappropriate to use >>the Median filter >>(remember: a filter should not be applied just because the processed >>image "looks good", but it should only be applied if it helps the analysis). >> >>That is all I said. (I certainly did not claim that you should always >>take a simple arithmetic mean. I am not that stupid.) >> >>Ciao, >>Johannes > > >-- >James and Christine Pawley, 5446 Burley Place (PO Box 2348), Sechelt, >BC, Canada, V0N3A0, Phone 604-885-0840, email <[hidden email]> NEW! >NEW! AND DIFFERENT Cell (when I remember to turn it on!) 1-604-989-6146 -- James and Christine Pawley, 5446 Burley Place (PO Box 2348), Sechelt, BC, Canada, V0N3A0, Phone 604-885-0840, email <[hidden email]> NEW! NEW! AND DIFFERENT Cell (when I remember to turn it on!) 1-604-989-6146 |
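The five readings quoted just above (40, 38, 42, 3800, 37) make the difference between the candidate estimators easy to check with nothing but stdlib Python. The 5x rejection cutoff for the trimmed mean is an invented illustration, not a value anyone in the thread proposed:

```python
from statistics import mean, median

readings = [40, 38, 42, 3800, 37]   # one pixel, five scans

avg = mean(readings)       # 791.4 -- dominated by the single PMT spike
med = median(readings)     # 40    -- unaffected by the spike
# Outlier-rejected mean: drop values far above the median, then average
# the rest (the 5x cutoff is an illustrative, invented choice).
kept = [v for v in readings if v <= 5 * med]
robust = sum(kept) / len(kept)      # 39.25
```

With most of the noise coming from rare PMT events, the median and the outlier-rejected mean both land near the true background level, while the plain arithmetic mean is dominated by the single spike.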
Johannes Schindelin |
In reply to this post by Guy Cox-2
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy ***** [Sorry, reposting with the subscribed mail address] On Fri, 1 Mar 2013, Johannes Schindelin wrote: > ***** > To join, leave or search the confocal microscopy listserv, go to: > http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy > ***** > > Hi Guy, > > On Wed, 27 Feb 2013, Guy Cox wrote: > > > The term 'filter' applied to digital operations is a bit unfortunate. > > An optical filter removes light according to its specification. A > > digital, so called, filter does nothing of the sort. It processes > > pixels according to the values of other pixels. Deconvolution does > > EXACTLY the same thing - just with a more sophisticated algorithm. > > Fundamentally there is no difference. I really wish the term 'filter' > > had never been used in the digital world. > > I still like to call it a "filter", and here is why: in digital images, we > do not have photons, but we have information. Information is measured as > entropy (the unit is "bits"). And no digital filter can increase the > information. They can at most retain the same amount of information. But > mostly they reduce information. > > So what about your deconvolution example? > > Let's go back first to the term "information" as per information theory. > The amount of information in something like an image can be described as > the average number of yes/no questions that have to be asked (given > optimally efficient questioning) to describe it fully. > > Of course, this implies that we *already* know something, e.g. that it is > a collection of pixels, in a certain geometric arrangement, the pixel > values are in a certain range, etc. Without such a context, the > information would be infinite and we would not be able to store it in a > file. 
> > With deconvolution, we basically use additional knowledge about the image > that is based on our assumption that the image formation happened a > certain way, with a given point spread function. It is crucial to keep in > mind that we reduce the amount of information in the original image using > the knowledge about how the experiment works physically. It is even > possible to put that information reduction into laymen's terms: we strip > away the information about what the camera saw and retain only the > information about the structures that gave rise to the acquired image. > > Sure, you could regenerate that image, but again you would need to use the > knowledge about the optics; without that knowledge, the information is no > longer in the deconvolved image. (And even with the knowledge, the > reconstruction would be imperfect due to boundary effects, but that's > beside the point.) > > Keep in mind that information always lives in a context. If you knew > nothing about the bytes that make up this email, there would be no way to > compress it. But since you know that it is written in English, using the > ASCII encoding, you could compress it rather well. Even if you knew only > that a human wrote it using a common computer, you could exploit the > common knowledge that language is highly redundant, and compress it e.g. > into a .zip file. (I like the compression example because it explains the > unit "bits" and it illustrates the need for a context: .zip files compress > rather poorly because the context "contains redundant and repetitive > byte sequences" does not apply.) > > The same is happening with deconvolution: you have the context that you > know a lot about the physics of image formation, and only that allows you > to strip away the blurriness. Now, from the point of view of information > theory, you strip away information. Which is good, because it is (mostly) > information you do not care about. 
> > With this reasoning in mind, I hope that it is less offensive that I like > the term "filter" in digital image processing. > > Ciao, > Johannes > |
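Johannes's .zip illustration can be reproduced directly with the standard library, making the role of context concrete (a sketch; the repeated sentence is an invented stand-in for redundant English text):

```python
import zlib

english = b"the quick brown fox jumps over the lazy dog " * 200
packed = zlib.compress(english)
# The "redundant byte sequences" context applies, so the gain is huge:
# packed is a small fraction of the original 8800 bytes.

repacked = zlib.compress(packed)
# Compressing the already-compressed stream gains almost nothing,
# because that context no longer applies to it.
```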
Lutz Schaefer |
In reply to this post by George McNamara
Johannes
Verbal interpretation of a mathematical matter is almost always incorrect. One has to be very careful not to be misunderstood. For example, saying that deconvolution strips information out of a widefield image is incorrect. In fact, the inverse of a convolution will add intensities back into point objects. I cannot see how that reduces information content. My point is just to be careful when mathematical expressions become interpreted verbally. You likely lose information there too, especially if the context isn't understood. My 2 cents Regards Lutz Sent from Samsung Mobile -------- Original message -------- Subject: Re: median filtering confocal microscope data at the instrument From: Johannes Schindelin <[hidden email]> To: [hidden email] CC: > [...] > [Johannes's message of Fri, 1 Mar 2013, quoted in full earlier in the thread, trimmed here] |
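One way to make the disputed word "information" measurable is the Shannon entropy of the intensity histogram (stdlib Python; this ignores spatial arrangement entirely, so it is only an illustrative operationalization of the "bits" in Johannes's posts, not a claim about what either side meant):

```python
from math import log2
from collections import Counter

def histogram_entropy(pixels):
    """Shannon entropy, in bits per pixel, of the intensity histogram.
    A toy measure: it is blind to where in the image each value sits."""
    n = len(pixels)
    return -sum((c / n) * log2(c / n) for c in Counter(pixels).values())

flat = [7] * 64            # constant image: the histogram says nothing
varied = list(range(64))   # all 64 values distinct

e_flat = histogram_entropy(flat)      # 0.0 bits per pixel
e_varied = histogram_entropy(varied)  # 6.0 bits per pixel (log2 of 64)
```

Note that adding intensity does not by itself add bits under this measure: scaling every pixel of `flat` leaves its entropy at zero, which is one reason "intensity" and "information" should not be casually equated.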
Johannes Schindelin |
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy ***** Hi Lutz, On Fri, 1 Mar 2013, Lutz wrote: > verbal interpretation of a mathematical matter is almost always incorrect. Given that I am a mathematician myself, one would hope that my command of the mathematically-flavored language is good enough to deliver a correct interpretation. > One has to be very careful not to be misunderstood. For example when you > say that information gets stripped out of a widefield image is incorrect > for deconvolution. In fact the inverse of a convolution will add > intensities back into point objects. You say it will "add ... back". I think it would be better to leave the "back" out since the information was never there! Just imagine deconvolving a grayscale photograph of a bus. You quite literally add information, and in this case it becomes quite obvious that the information you put "back" was never there. The same, however, is true for deconvolving a digital image acquired using a CCD. The information about the point spread function was never in the original image. > I cannot see how that reduces information content. But it does reduce the original information, that is what you use the deconvolution for! After all, you do not want to see all that signal that is caused by structures *and* point spread function. You only want to see structures! So what you do is to add more information (your knowledge of the physical process) and reduce the combined information into a new, deconvolved image that mostly contains the information you are interested in. Leaving out the rest of the information you are no longer interested in. > My point is just to be careful when mathematical expressions become > interpreted verbally. You likely loose information there too especially > if the context isnt understood. Hence my attempt to clarify again. Ciao, Johannes |
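A noise-free toy shows how both statements can be true at once: a naive inverse filter (Fourier division; pure Python with an invented 8-sample example) does concentrate the blurred intensity back into point objects, as Lutz says, but only by importing outside knowledge of the PSF, as Johannes says. With real, noisy data this naive inverse amplifies noise badly, which is why practical packages use regularized methods instead.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform; fine for a toy."""
    n = len(x)
    s = 1j if inverse else -1j
    out = [sum(x[k] * cmath.exp(s * 2 * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

specimen = [0, 0, 4, 0, 0, 0, 2, 0]       # two point emitters
psf = [0.6, 0.2, 0, 0, 0, 0, 0, 0.2]      # circular 3-tap blur; sums to 1

S, P = dft(specimen), dft(psf)
blurred = [v.real for v in dft([a * b for a, b in zip(S, P)],
                               inverse=True)]
# Naive inverse filter: divide by the transfer function, transform back.
# (Possible only because this PSF has no zeros in its transform.)
B = dft(blurred)
restored = [v.real for v in dft([b / p for b, p in zip(B, P)],
                                inverse=True)]
# restored matches specimen to rounding error: the spread intensity is
# concentrated back into the two point objects.
```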
In reply to this post by Lutz Schaefer
Lutz,
Are you really equating intensity with information? In a post about using accurate language to describe mathematical operations, this seems surprising. Guy -----Original Message----- From: Confocal Microscopy List [mailto:[hidden email]] On Behalf Of Lutz Sent: Saturday, 2 March 2013 6:22 AM To: [hidden email] Subject: Re: median filtering confocal microscope data at the instrument Johannes verbal interpretation of a mathematical matter is almost always incorrect. [...] Regards Lutz [remainder of Lutz's message, and the Schindelin message quoted within it, appear in full earlier in the thread] |
George McNamara |
In reply to this post by Lutz Schaefer
*****
To join, leave or search the confocal microscopy listserv, go to: http://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy ***** Hi Lutz, Thanks for posting to the listserv on this topic. Are you going to take the information from this thread and incorporate it into improvements in Zeiss ZEN confocal and widefield software (and documentation, application notes, marketing bullet points, salespeople's talking points, applications people's training points)? George On 3/1/2013 2:21 PM, Lutz wrote: > Johannes > verbal interpretation of a mathematical matter is almost always incorrect. [...] > > My 2 cents > Regards > Lutz [remainder of the quoted exchange, which appears in full earlier in the thread, trimmed] |